
Types of Research – Explained with Examples


  • By DiscoverPhDs
  • October 2, 2020


Research is about using established methods to investigate a problem or question in detail with the aim of generating new knowledge about it.

It is a vital tool for scientific advancement because it allows researchers to support or refute hypotheses based on clearly defined parameters, environments and assumptions. This, in turn, lets us contribute to knowledge with confidence, since it allows research to be verified and replicated.

Knowing the types of research and what each of them focuses on will allow you to better plan your project, utilise the most appropriate methodologies and techniques, and better communicate your findings to other researchers and supervisors.

Classification of Types of Research

There are various types of research that are classified according to their objective, depth of study, analysed data, time required to study the phenomenon and other factors. It’s important to note that a research project will not be limited to one type of research, but will likely use several.

According to its Purpose

Theoretical Research

Theoretical research, also referred to as pure or basic research, focuses on generating knowledge, regardless of its practical application. Here, data collection is used to generate new general concepts for a better understanding of a particular field or to answer a theoretical research question.

Results of this kind are usually oriented towards the formulation of theories and are typically based on documentary analysis, the development of mathematical formulas and the reflections of experienced researchers.

Applied Research

Here, the goal is to find strategies that can be used to address a specific research problem. Applied research draws on theory to generate practical scientific knowledge, and its use is very common in STEM fields such as engineering, computer science and medicine.

This type of research is subdivided into two types:

  • Technological applied research: looks towards improving efficiency in a particular productive sector through the improvement of processes or the machinery involved in those processes.
  • Scientific applied research: has predictive purposes. Through this type of research design, we can measure certain variables to predict behaviours useful to the goods and services sector, such as consumption patterns and the viability of commercial projects.
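To make the predictive flavour of scientific applied research concrete, here is a minimal sketch (with invented figures) of fitting a straight line to observed data and using it to predict behaviour:

```python
# A minimal sketch of predictive modelling in applied research: fitting a
# straight line to observed consumption data with ordinary least squares,
# then using it to predict behaviour. All figures are invented.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: advertising spend (x) vs. units sold (y)
spend = [1.0, 2.0, 3.0, 4.0]
sold = [10.0, 14.0, 18.0, 22.0]

slope, intercept = fit_line(spend, sold)
predicted = slope * 5.0 + intercept  # predicted sales at a spend of 5.0
```

With these invented numbers the fitted line is y = 4x + 6, so a spend of 5.0 predicts 26 units sold; real applied studies would of course use far richer data and models.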


According to its Depth of Scope

Exploratory Research

Exploratory research is used for the preliminary investigation of a subject that is not yet well understood or sufficiently researched. It serves to establish a frame of reference and a hypothesis from which an in-depth study can be developed that will enable conclusive results to be generated.

Because exploratory research deals with little-studied phenomena, it relies less on theory and more on the collection of data to identify patterns that explain them.

Descriptive Research

The primary objective of descriptive research is to define the characteristics of a particular phenomenon without necessarily investigating the causes that produce it.

In this type of research, the researcher must take particular care not to intervene in the observed object or phenomenon, as its behaviour may change if an external factor is involved.

Explanatory Research

Explanatory research is the most common type of research method and seeks to establish cause-and-effect relationships that allow generalisations to be extended to similar realities. It is closely related to descriptive research, although it provides additional information about the observed object and its interactions with the environment.

Correlational Research

The purpose of this type of scientific research is to identify the relationship between two or more variables. A correlational study aims to determine whether, and to what extent, a change in one variable is accompanied by changes in the other elements of the observed system.
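As a simple illustration, the strength of such a relationship is often summarised with Pearson's correlation coefficient; the sketch below uses invented data:

```python
# A minimal sketch of a correlational analysis: Pearson's coefficient
# measures how strongly two variables move together (from -1 to +1).
# The study-hours and exam-score figures are invented.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]
exam_score = [52, 55, 61, 68, 70]
r = pearson(hours_studied, exam_score)  # close to +1: strong positive relationship
```

Note that a strong correlation alone does not establish which variable, if either, causes the other; that is the territory of explanatory research.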

According to the Type of Data Used

Qualitative Research

Qualitative methods are often used in the social sciences to collect, compare and interpret information. They have a linguistic-semiotic basis and are applied through techniques such as discourse analysis, interviews, surveys, records and participant observation.

To validate results using statistical methods, the observations collected must be evaluated numerically. Qualitative research, however, tends to be subjective, since not all of the data can be fully controlled. This type of research design is therefore better suited to extracting the meaning of an event or phenomenon (the ‘why’) than to establishing its cause (the ‘how’).

Quantitative Research

Quantitative research delves into a phenomenon through quantitative data collection, using mathematical, statistical and computer-aided tools to measure it. This allows generalised conclusions to be projected over time.


According to the Degree of Manipulation of Variables

Experimental Research

Experimental research involves designing or replicating a phenomenon whose variables are manipulated under strictly controlled conditions in order to identify or discover their effect on a dependent variable or object. The phenomenon to be studied is measured through study and control groups, according to the guidelines of the scientific method.

Non-Experimental Research

Also known as an observational study, it focuses on the analysis of a phenomenon in its natural context. As such, the researcher does not intervene directly, but limits their involvement to measuring the variables required for the study. Due to its observational nature, it is often used in descriptive research.

Quasi-Experimental Research

Quasi-experimental research controls only some of the variables of the phenomenon under investigation and is therefore not entirely experimental. In this case, the study and control groups cannot be randomly selected, but are instead chosen from existing groups or populations. This is to ensure the collected data is relevant and that the knowledge, perspectives and opinions of the population can be incorporated into the study.

According to the Type of Inference

Deductive Investigation

In this type of research, reality is explained through general laws that point to certain conclusions; these conclusions are expected to follow from the premises of the research problem and are considered correct if the premises are valid and the deductive method is applied correctly.

Inductive Research

In this type of research, knowledge is generated from observation in order to reach a generalisation. It is based on the collection of specific data to develop new theories.

Hypothetical-Deductive Investigation

Hypothetical-deductive research is based on observing reality to form a hypothesis, then using deduction to derive a conclusion, and finally verifying or rejecting that conclusion through experience.


According to the Time in Which it is Carried Out

Longitudinal Study (also referred to as Diachronic Research)

A longitudinal study monitors the same event, individual or group over a defined period of time. It aims to track changes in a number of variables and see how they evolve over time. It is often used in the medical, psychological and social fields.

Cross-Sectional Study (also referred to as Synchronous Research)

Cross-sectional research design is used to observe phenomena, an individual or a group of research subjects at a given time.

According to the Sources of Information

Primary Research

This fundamental research type is defined by the fact that the data is collected directly from the source, that is, it consists of primary, first-hand information.

Secondary Research

Unlike primary research, secondary research is developed with information from secondary sources, which are generally based on scientific literature and other documents compiled by another researcher.


According to How the Data is Obtained

Documentary (Desk Research)

Documentary research, based on secondary sources, consists of a systematic review of existing sources of information on a particular subject. This type of scientific research is commonly used when undertaking literature reviews or producing a case study.

Field

Field research involves the direct collection of information at the location where the observed phenomenon occurs.

Laboratory

Laboratory research is carried out in a controlled environment in order to isolate a dependent variable and establish its relationship with other variables through scientific methods.

Mixed-Method: Documentary, Field and/or Laboratory

Mixed research methodologies combine results from both secondary (documentary) sources and primary sources through field or laboratory research.


BMC Med Res Methodol

A tutorial on methodological studies: the what, when, how and why

Lawrence Mbuagbaw

1 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada

2 Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario L8N 4A6 Canada

3 Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Daeria O. Lawson

Livia Puljak

4 Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia

David B. Allison

5 Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN 47405 USA

Lehana Thabane

6 Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON Canada

7 Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON Canada

8 Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON Canada

Associated Data

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies, such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: Is it necessary to publish a study protocol? How should relevant research reports and databases be selected for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity, and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 – 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 – 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 – 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.
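As a rough illustration of how such keyword counts might be assembled, the helper below composes a PubMed title/abstract query restricted to one year. `build_query` is a hypothetical name; actually retrieving record counts would require NCBI's E-utilities (for example via Biopython's `Bio.Entrez`), which is omitted here.

```python
# A sketch of composing a PubMed query for counting studies per year.
# The [tiab] (title/abstract) and [dp] (date of publication) field tags
# are standard PubMed search syntax; running the query against PubMed
# itself is left out, as it needs network access to NCBI's E-utilities.

def build_query(terms, year):
    """Compose a PubMed query restricted to title/abstract and one year."""
    keyword_part = " OR ".join(f'"{t}"[tiab]' for t in terms)
    return f'({keyword_part}) AND ("{year}/01/01"[dp] : "{year}/12/31"[dp])'

terms = ["methodological review", "meta-epidemiological study"]
query = build_query(terms, 2019)
```

Repeating this for each year and plotting the resulting counts would reproduce the kind of trend shown in Fig. 1.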

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
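The two sampling strategies described above can be sketched in a few lines; the "reports" and the group labels (e.g. Cochrane vs. non-Cochrane) below are invented placeholders:

```python
# A sketch of simple random vs. stratified sampling of research reports,
# using only the standard library. The records and group labels are
# placeholders invented for illustration.
import random

random.seed(42)  # for reproducibility

reports = [{"id": i, "group": "cochrane" if i % 5 == 0 else "other"}
           for i in range(100)]  # 20 "cochrane", 80 "other"

# Simple random sample: every report has the same chance of selection,
# but the smaller group may end up underrepresented.
simple = random.sample(reports, 20)

# Stratified sample: draw equally from each group to support comparison.
by_group = {}
for r in reports:
    by_group.setdefault(r["group"], []).append(r)
stratified = [r for grp in by_group.values() for r in random.sample(grp, 10)]
```

The stratified draw guarantees ten reports per group, whereas the simple random sample would, on average, contain only four from the smaller group.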

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of the intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged, but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and they help avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (as of 21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, since few journals publish study protocols and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal can deposit it in a publicly available repository, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
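As a sketch of the confidence-interval approach, the standard formula n = z²·p(1−p)/d² gives the number of articles needed to estimate a proportion with margin of error d; the values below are illustrative assumptions, not figures from El Dib et al.:

```python
# A sketch of a confidence-interval-based sample size justification:
# how many articles are needed to estimate a proportion (e.g. of trials
# reporting an item) with a given margin of error at 95% confidence.
# The anticipated proportion and margin are illustrative assumptions.
import math

def n_for_proportion(p, margin, z=1.96):
    """Articles needed so the 95% CI half-width is at most `margin`."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# p = 0.5 is the most conservative guess (maximises p * (1 - p))
n = n_for_proportion(p=0.5, margin=0.05)
```

With these inputs, 385 articles would be required; a narrower margin or a different anticipated proportion changes the figure accordingly.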

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [43]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [44]. For example, Kosa et al. used generalized estimating equations to account for correlation of articles within journals [15]. Not accounting for clustering could lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [45].
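To illustrate why clustering matters, a simple back-of-the-envelope correction inflates the naive standard error by the design effect DEFF = 1 + (m − 1) × ICC, where m is the average cluster size (e.g. articles per journal) and ICC is the intra-cluster correlation. This is a generic sketch of the design-effect idea, not the GEE approach Kosa et al. actually used, and the numbers below are hypothetical:

```python
import math

def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Variance inflation factor for clustered data: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

def adjusted_se(naive_se: float, avg_cluster_size: float, icc: float) -> float:
    """Standard error corrected for clustering: naive SE * sqrt(DEFF)."""
    return naive_se * math.sqrt(design_effect(avg_cluster_size, icc))

# 10 articles per journal with a modest ICC of 0.05 inflates the
# naive standard error by roughly 20%:
ratio = adjusted_se(0.02, 10, 0.05) / 0.02
print(round(ratio, 3))  # 1.204
```

Ignoring this inflation is exactly what produces the unduly narrow confidence intervals and incorrect p-values described above.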

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [19]. Data extraction errors in turn affect the effect estimate [46], and should therefore be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimizing extraction errors. Much like systematic reviews, this area will likely see rapid advances in machine learning and natural language processing technologies to support researchers with screening and data extraction [47, 48]. Experience also plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [46, 49].
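A minimal sketch of how duplicate extraction catches errors: two reviewers extract the same items independently, and any fields on which they disagree are flagged for resolution by consensus or a third reviewer. The field names and values here are hypothetical, and this is an illustration of the principle rather than a prescribed workflow:

```python
def find_disagreements(extractor_a: dict, extractor_b: dict) -> dict:
    """Compare two independent extractions of the same article and
    return the fields where the extractors disagree."""
    fields = extractor_a.keys() | extractor_b.keys()
    return {f: (extractor_a.get(f), extractor_b.get(f))
            for f in fields
            if extractor_a.get(f) != extractor_b.get(f)}

a = {"year": 2018, "randomized": True, "sample_size": 120}
b = {"year": 2018, "randomized": True, "sample_size": 210}  # transcription error
print(find_disagreements(a, b))  # {'sample_size': (120, 210)}
```

A single extractor would have no way of detecting the transcription error above; the disagreement only surfaces when the two extractions are compared.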

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [50], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [51].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

  • Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
  • Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
  • Source of funding and conflicts of interest: Some studies have found that funded studies report better [56, 57], while others have found no such difference [53, 58]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [59]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were reported better [60]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [61]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [62].
  • Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 – 67 ].
  • Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
  • Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
  • Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
  • Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [71]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [72]. In the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing only high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high-impact cardiology journals cannot be assumed to be applicable to trials in other fields. Investigators must also ensure that their sample truly represents the target population, either by (a) conducting a comprehensive and exhaustive search, or (b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine (n = 6) [75]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM (n = 4) [76]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM (n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [77]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [78]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

  • 1. What is the aim?

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [80]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [81]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [82].

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

  • 2. What is the design?

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
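The test of the null hypothesis that two proportions are equal, as in the Tricco et al. comparison above, can be sketched with a two-proportion z-test using the pooled proportion. The counts below are hypothetical, chosen only to illustrate the calculation, and are not Tricco et al.'s data:

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Z statistic for H0: p1 == p2, using the pooled-proportion
    standard error sqrt(p(1-p)(1/n1 + 1/n2))."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 60/100 non-Cochrane vs 45/100 Cochrane reviews
# reporting positive findings:
z = two_proportion_z(60, 100, 45, 100)
print(round(z, 2))  # 2.12
```

A |z| above 1.96 would lead to rejecting the null hypothesis of equal proportions at the conventional 5% level; as noted in the clustering question above, such a test may also need to account for correlated articles.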

  • 3. What is the sampling strategy?

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies (n = 103) [30].

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.
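Systematic sampling as described above can be sketched as follows: compute a fixed interval from the sampling frame and the desired sample size, pick a random start within the first interval, then take every k-th report. This is a generic illustration, not any specific study's procedure, and the report identifiers are hypothetical:

```python
import random

def systematic_sample(items: list, n: int, seed: int = 1) -> list:
    """Select n items at a fixed interval k = len(items) // n,
    starting from a random position within the first interval."""
    k = len(items) // n
    start = random.Random(seed).randrange(k)
    return items[start::k][:n]

reports = [f"report_{i:03d}" for i in range(1000)]
sample = systematic_sample(reports, 50)
print(len(sample))  # 50
```

The seed is fixed only so the sample is reproducible; the random start within the first interval is what keeps the procedure probabilistic rather than purely arbitrary.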

  • 4. What is the unit of analysis?

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .


A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Acknowledgements

Authors’ contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

This work did not receive any dedicated funding.

Availability of data and materials

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Research Methods (Quantitative, Qualitative, and More): Overview

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and supports and resources at UC Berkeley. 

As Patten and Newhart note in the book Understanding Research Methods , "Research methods are the building blocks of the scientific enterprise. They are the "how" for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge...Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more.  This guide is an introduction, but if you don't see what you need here, always contact your subject librarian, and/or take a look to see if there's a library research guide that will answer your question. 

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods. See the online guide to this one-stop collection; some helpful links are below:

  • SAGE Research Methods
  • Little Green Books  (Quantitative Methods)
  • Little Blue Books  (Qualitative Methods)
  • Dictionaries and Encyclopedias  
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video--see methods come to life
  • Methodspace, a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools! From this link, check out pages for each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab: supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings.
  • Dryad: a simple self-service tool for researchers to use in publishing their datasets. It provides tools for the effective publication of and access to research data.
  • Geospatial Innovation Facility (GIF): provides leadership and training across a broad array of integrated mapping technologies on campus.
  • Research Data Management: a UC Berkeley guide and consulting service for research data management issues.

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data , and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA): a program for easy web-based analysis of survey data.

Consultants

  • D-Lab/Data Science Discovery Consultants Request help with your research project from peer consultants.
  • Research data (RDM) consulting Meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services A service in which advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB / CPHS Qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships) OURS supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops.
  • Sponsored Projects Sponsored projects works with researchers applying for major external grants.
  • Last Updated: Apr 3, 2023 3:14 PM
  • URL: https://guides.lib.berkeley.edu/researchmethods


Choosing the Right Research Methodology: A Guide for Researchers


Choosing an optimal research methodology is crucial for the success of any research project. The methodology you select will determine the type of data you collect, how you collect it, and how you analyse it. Understanding the different types of research methods available along with their strengths and weaknesses, is thus imperative to make an informed decision.

Understanding different research methods:

There are several research methods available depending on the type of study you are conducting, i.e., whether it is laboratory-based, clinical, epidemiological, or survey-based. Some common methodologies include qualitative research, quantitative research, experimental research, survey-based research, and action research. Each method can be chosen and adapted, depending on the research hypotheses and objectives.

Qualitative vs quantitative research:

When deciding on a research methodology, one of the key factors to consider is whether your research will be qualitative or quantitative. Qualitative research is used to understand people’s experiences, concepts, thoughts, or behaviours. Quantitative research, by contrast, deals with numbers, graphs, and charts, and is used to test or confirm hypotheses, assumptions, and theories.

Qualitative research methodology:

Qualitative research is often used to examine issues that are not well understood, and to gather additional insights on these topics. Qualitative research methods include open-ended survey questions, observations of behaviours described through words, and reviews of literature that has explored similar theories and ideas. These methods are used to understand how language is used in real-world situations, identify common themes or overarching ideas, and describe and interpret various texts. Data analysis for qualitative research typically includes discourse analysis, thematic analysis, and textual analysis. 

Quantitative research methodology:

The goal of quantitative research is to test hypotheses, confirm assumptions and theories, and determine cause-and-effect relationships. Quantitative research methods include experiments, close-ended survey questions, and countable and numbered observations. Data analysis for quantitative research relies heavily on statistical methods.

Analysing qualitative vs quantitative data:

The methods used for data analysis also differ for qualitative and quantitative research. As mentioned earlier, quantitative data is generally analysed using statistical methods and does not leave much room for speculation. It is more structured and follows a predetermined plan. In quantitative research, the researcher starts with a hypothesis and uses statistical methods to test it. In contrast, methods used for qualitative data analysis identify patterns and themes within the data, rather than provide statistical measures of the data. It is an iterative process, where the researcher goes back and forth trying to gauge the larger implications of the data through different perspectives, revising the analysis if required.

When to use qualitative vs quantitative research:

The choice between qualitative and quantitative research will depend on the gap that the research project aims to address, and specific objectives of the study. If the goal is to establish facts about a subject or topic, quantitative research is an appropriate choice. However, if the goal is to understand people’s experiences or perspectives, qualitative research may be more suitable. 

Conclusion:

In conclusion, an understanding of the different research methods available, their applicability, advantages, and disadvantages is essential for making an informed decision on the best methodology for your project. If you need any additional guidance on which research methodology to opt for, you can head over to Elsevier Author Services (EAS). EAS experts will guide you throughout the process and help you choose the perfect methodology for your research goals.



Research Methods | Definition, Types, Examples

Research methods are specific procedures for collecting and analysing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs quantitative : Will your data take the form of words or numbers?
  • Primary vs secondary : Will you collect original data yourself, or will you use data that have already been collected by someone else?
  • Descriptive vs experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyse the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analysing data
  • Examples of data analysis methods
  • Frequently asked questions about methodology

Methods for collecting data

Data are the information that you collect for the purposes of answering your research question. The type of data you need depends on the aims of your research.

Qualitative vs quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .

You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs secondary data

Primary data are any original information that you collect for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary data are information that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesise existing knowledge, analyse historical trends, or identify patterns on a large scale, secondary data might be a better choice.

Descriptive vs experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.
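As a concrete illustration of this logic, here is a minimal Python sketch (the document itself contains no code, so the scenario and data are invented): caffeine as the independent variable, reaction time as the dependent variable, and an effect-size calculation (Cohen's d, a standard measure not mentioned in the text) comparing the two conditions.

```python
from statistics import mean, stdev

# Hypothetical experiment: does caffeine (IV) affect reaction time (DV)?
# Reaction times in milliseconds for each condition (illustrative data only).
control = [310, 295, 320, 305, 315, 300]
treatment = [280, 270, 290, 275, 285, 265]

def cohens_d(a, b):
    """Effect size: difference in means divided by the pooled standard deviation."""
    pooled_sd = (((len(a) - 1) * stdev(a) ** 2 + (len(b) - 1) * stdev(b) ** 2)
                 / (len(a) + len(b) - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

diff = mean(control) - mean(treatment)
print(f"Mean difference: {diff:.1f} ms, effect size d = {cohens_d(control, treatment):.2f}")
```

With real data you would also run a significance test and control for confounding variables before drawing any causal conclusion.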

Methods for analysing data

Your data analysis methods will depend on the type of data you collect and how you prepare them for analysis.

Data can often be analysed both quantitatively and qualitatively. For example, survey responses could be analysed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
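The quantitative side of that example, counting the frequencies of coded survey responses, can be sketched in a few lines of Python; the response categories and counts below are hypothetical:

```python
from collections import Counter

# Hypothetical open-ended survey responses, already coded into categories
# by the researcher (the qualitative step); Counter then quantifies them.
coded_responses = ["cost", "quality", "cost", "convenience", "quality", "cost"]

frequencies = Counter(coded_responses)
total = sum(frequencies.values())
for category, count in frequencies.most_common():
    print(f"{category}: {count} ({100 * count / total:.0f}%)")
```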

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that were collected:

  • From open-ended survey and interview questions, literature reviews, case studies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that were collected either:

  • During an experiment.
  • Using probability sampling methods .

Because the data are collected and analysed in a statistically valid way, the results of quantitative analysis can be easily standardised and shared among researchers.

Frequently asked questions about methodology

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Pfeiffer Library

Research Methodologies


What are research methods?



Research methods are different from research methodologies because they are the ways in which you will collect the data for your research project.  The best method for your project largely depends on your topic, the type of data you will need, and the people or items from which you will be collecting data.  The sections below describe common quantitative, qualitative, and mixed research methods.

Quantitative Research Methods

  • Closed-ended questionnaires/surveys: These types of questionnaires or surveys are like "multiple choice" tests, where participants must select from a list of premade answers.  According to the content of the question, they must select the one that they agree with the most.  This approach is the simplest form of quantitative research because the data is easy to combine and quantify.
  • Structured interviews: These are a common research method in market research because the data can be quantified.  They are strictly designed for little "wiggle room" in the interview process so that the data will not be skewed.  You can conduct structured interviews in-person, online, or over the phone (Dawson, 2019).

Constructing Questionnaires

When constructing your questions for a survey or questionnaire, there are things you can do to ensure that your questions are accurate and easy to understand (Dawson, 2019):

  • Keep the questions brief and simple.
  • Eliminate any potential bias from your questions.  Make sure they are not worded in a way that favors one perspective over another.
  • If your topic is very sensitive, you may want to ask indirect questions rather than direct ones.  This prevents participants from being intimidated and becoming unwilling to share their true responses.
  • If you are using a closed-ended question, try to offer every possible answer that a participant could give to that question.
  • Do not ask questions that assume something of the participant.  The question "How often do you exercise?" assumes that the participant exercises (when they may not), so you would want to include a question that asks if they exercise at all before asking them how often.
  • Try to keep the questionnaire as short as possible.  The longer a questionnaire takes, the more likely participants are to abandon it or become too tired to give truthful answers.
  • Promise confidentiality to your participants at the beginning of the questionnaire.

Quantitative Research Measures

When you are considering a quantitative approach to your research, you need to identify which types of measures you will use in your study.  This will determine what type of numbers you will be using to collect your data.  There are four levels of measurement:

  • Nominal: These are numbers where the order of the numbers does not matter.  They aim to identify separate information.  One example is collecting zip codes from research participants.  The order of the numbers does not matter, but the series of numbers in each zip code indicates different information (Adamson and Prion, 2013).
  • Ordinal: Also known as rankings because the order of these numbers matters.  This is when items are given a specific rank according to specific criteria.  A common example of an ordinal measurement is a ranking-based questionnaire, where participants are asked to rank items from least favorite to most favorite.  Another common example is a pain scale, where a patient is asked to rank their pain on a scale from 1 to 10 (Adamson and Prion, 2013).
  • Interval: This is when the data are ordered and the distance between the numbers matters to the researcher (Adamson and Prion, 2013).  The distance between each number is the same.  An example of interval data is test grades.
  • Ratio: This is when the data are ordered and have a consistent distance between numbers, but also include a "zero point."  This means that there could be a measurement of zero of whatever you are measuring in your study (Adamson and Prion, 2013).  An example of ratio data is the height of something, because the "zero point" remains constant in all measurements and a height of zero is possible.
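A short Python sketch of the four levels, using illustrative data. The pairing of each level with a "safe" summary statistic (mode for nominal, median for ordinal, mean for interval and ratio) is a standard convention rather than something stated in this guide:

```python
from statistics import mode, median, mean

# Illustrative data at each level of measurement; which summary statistic
# is meaningful depends on the level.
zip_codes = ["44883", "43215", "44883"]   # nominal: only the mode makes sense
pain_scores = [2, 7, 5, 8, 6]             # ordinal: the median is meaningful
test_grades = [85, 90, 78, 92]            # interval: the mean is meaningful
heights_cm = [150.0, 175.5, 0.0, 162.0]   # ratio: a true zero point exists

print(mode(zip_codes))      # "44883"
print(median(pain_scores))  # 6
print(mean(test_grades))    # 86.25
```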

Qualitative Research Methods

Focus Groups

This is when a select group of people gathers to talk about a particular topic.  They can also be called discussion groups or group interviews (Dawson, 2019).  They are usually led by a moderator who helps guide the discussion and asks certain questions.  It is critical that the moderator allows everyone in the group a chance to speak so that no one dominates the discussion.  The data that are gathered from focus groups tend to be thoughts, opinions, and perspectives about an issue.

Advantages of Focus Groups

  • Only requires one meeting to get different types of responses.
  • Less researcher bias due to participants being able to speak openly.
  • Helps participants overcome insecurities or fears about a topic.
  • The researcher can also consider the impact of participant interaction.

Disadvantages of Focus Groups

  • Participants may feel uncomfortable speaking in front of an audience, especially if the topic is sensitive or controversial.
  • Since participation is voluntary, not every participant may contribute equally to the discussion.
  • Participants may impact what others say or think.
  • A researcher may feel intimidated by running a focus group on their own.
  • A researcher may need extra funds/resources to provide a safe space to host the focus group.
  • Because the data is collective, it may be difficult to determine a participant's individual thoughts about the research topic.

Observation

There are two ways to conduct research observations:

  • Direct Observation: The researcher observes a participant in an environment.  The researcher often takes notes or uses technology to gather data, such as a voice recorder or video camera.  The researcher does not interact or interfere with the participants.  This approach is often used in psychology and health studies (Dawson, 2019).
  • Participant Observation:  The researcher interacts directly with the participants to get a better understanding of the research topic.  This is a common research method when trying to understand another culture or community.  It is important to decide if you will conduct a covert (participants do not know they are part of the research) or overt (participants know the researcher is observing them) observation because it can be unethical in some situations (Dawson, 2019).

Open-Ended Questionnaires

These types of questionnaires are the opposite of "multiple choice" questionnaires because the answer boxes are left open for the participant to complete.  This means that participants can write short or extended answers to the questions.  Upon gathering the responses, researchers will often "quantify" the data by organizing the responses into different categories.  This can be time consuming because the researcher needs to read all responses carefully.

Semi-structured Interviews

This is the most common type of interview where researchers aim to get specific information so they can compare it to other interview data.  This requires asking the same questions for each interview, but keeping their responses flexible.  This means including follow-up questions if a subject answers a certain way.  Interview schedules are commonly used to aid the interviewers, which list topics or questions that will be discussed at each interview (Dawson, 2019).

Theoretical Analysis

Often used for nonhuman research, theoretical analysis is a qualitative approach where the researcher applies a theoretical framework to analyze something about their topic.  A theoretical framework gives the researcher a specific "lens" through which to view the topic and think about it critically.  It also serves as context to guide the entire study.  This is a popular research method for analyzing works of literature, films, and other forms of media.  You can implement more than one theoretical framework with this method, as many theories complement one another.

Common theoretical frameworks for qualitative research are (Grant and Osanloo, 2014):

  • Behavioral theory
  • Change theory
  • Cognitive theory
  • Content analysis
  • Cross-sectional analysis
  • Developmental theory
  • Feminist theory
  • Gender theory
  • Marxist theory
  • Queer theory
  • Systems theory
  • Transformational theory

Unstructured Interviews

These are in-depth interviews where the researcher tries to understand an interviewee's perspective on a situation or issue.  They are sometimes called life history interviews.  It is important not to bombard the interviewee with too many questions so they can freely disclose their thoughts (Dawson, 2019).

Mixed Method Approach

  • Open-ended and closed-ended questionnaires: This approach means implementing elements of both questionnaire types into your data collection.  Participants may answer some questions with premade answers and write their own answers to other questions.  The advantage of this method is that you benefit from both types of data collection and get a broader understanding of your participants.  However, you must think carefully about how you will analyze this data to arrive at a conclusion.

Other mixed method approaches that incorporate quantitative and qualitative research methods depend heavily on the research topic.  It is strongly recommended that you collaborate with your academic advisor before finalizing a mixed method approach.

Selecting the Best Research Method

How do you determine which research method would be best for your proposal?  This heavily depends on your research objective.  According to Dawson (2019), there are several questions to ask yourself when determining the best research method for your project:

  • Are you good with numbers and mathematics?
  • Would you be interested in conducting interviews with human subjects?
  • Would you enjoy creating a questionnaire for participants to complete?
  • Do you prefer written communication or face-to-face interaction?
  • What skills or experiences do you have that might help you with your research?  Do you have any experiences from past research projects that can help with this one?
  • How much time do you have to complete the research?  Some methods take longer to collect data than others.
  • What is your budget?  Do you have adequate funding to conduct the research in the method you  want?
  • How much data do you need?  Some research topics need only a small amount of data while others may need significantly larger amounts.
  • What is the purpose of your research? This can provide a good indicator as to what research method will be most appropriate.
  • Last Updated: Aug 2, 2022 2:36 PM
  • URL: https://library.tiffin.edu/researchmethodologies

Research Methods In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements that predict the results of an investigation and can be verified or disproved by carrying it out.

There are four types of hypotheses :
  • Null Hypotheses (H0 ) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative Hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference ….’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the Psychologist would accept the alternative hypothesis and reject the null.  The opposite applies if no difference is found.
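The accept/reject decision, and the practical difference between one-tailed and two-tailed hypotheses, can be sketched in Python with a hypothetical z statistic (the value 1.8 is invented to show that the same result can pass a directional test but fail a non-directional one at α = 0.05):

```python
from statistics import NormalDist

# Hypothetical test statistic from some study comparing two conditions.
z = 1.8
one_tailed_p = 1 - NormalDist().cdf(z)             # directional: "A will be greater than B"
two_tailed_p = 2 * (1 - NormalDist().cdf(abs(z)))  # non-directional: "A will differ from B"

alpha = 0.05
print(f"one-tailed p = {one_tailed_p:.3f} -> reject H0: {one_tailed_p < alpha}")
print(f"two-tailed p = {two_tailed_p:.3f} -> reject H0: {two_tailed_p < alpha}")
```

Here the one-tailed p (about 0.036) is below α while the two-tailed p (about 0.072) is not, which is why the choice of hypothesis must be made before collecting data.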

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which their findings can be applied to the larger population of which their sample was a part.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling : when a system is used to select participants. Picking every Nth person from all possible participants. N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
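The systematic sampling rule above (pick every Nth person, where N = population size ÷ sample size) might be implemented roughly like this in Python; the participant labels are invented for illustration:

```python
import random

def systematic_sample(population, sample_size):
    """Pick every Nth member, where N = population size // sample size."""
    n = len(population) // sample_size
    start = random.randrange(n)  # random starting point within the first interval
    return population[start::n][:sample_size]

people = [f"participant_{i}" for i in range(100)]
sample = systematic_sample(people, 10)
print(sample)  # every 10th participant, beginning from a random start
```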

Experiments always have an independent and dependent variable .

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in the way they think is expected of them.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design ( between-groups design ): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization. 
  • Matched participants design : each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design ( within groups) : each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 
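Counterbalancing, mentioned above, can be sketched in Python: alternating the order of two hypothetical conditions A and B across participants so that each order is used equally often (simple alternation is just one of several valid assignment schemes):

```python
# Hypothetical participants in a repeated measures design.
participants = ["p1", "p2", "p3", "p4", "p5", "p6"]

# Alternate the condition order so order effects are balanced across the sample.
orders = {}
for i, p in enumerate(participants):
    orders[p] = ["A", "B"] if i % 2 == 0 else ["B", "A"]

ab_first = sum(1 for o in orders.values() if o == ["A", "B"])
print(f"A-then-B: {ab_first}, B-then-A: {len(orders) - ab_first}")  # 3 and 3
```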

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case Studies

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as from the person concerned and also from their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.

[Figure: scatter plots illustrating positive, negative, and no correlation]

  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score called a correlation coefficient. This is a value between -1 and +1, and the closer it is to -1 or +1, the stronger the relationship between the variables. The value can be positive, e.g. 0.63, or negative, e.g. -0.63.
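To make the calculation concrete, here is a minimal Python sketch of Spearman’s rho, assuming no tied ranks so the simplified formula applies (the function name is ours):

```python
# Spearman's rho from the simplified formula (valid when there are no tied ranks):
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the difference in ranks.

def spearman_rho(xs, ys):
    n = len(xs)
    rank = lambda values: {v: i + 1 for i, v in enumerate(sorted(values))}
    rank_x, rank_y = rank(xs), rank(ys)
    d_squared = sum((rank_x[x] - rank_y[y]) ** 2 for x, y in zip(xs, ys))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0  (perfect positive)
print(spearman_rho([1, 2, 3, 4], [40, 30, 20, 10]))  # -1.0 (perfect negative)
```

In practice, statistical software also reports a p-value alongside the coefficient, which is what tells us whether the relationship is significant.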

[Figure: scatter plots illustrating strong, weak, and perfect positive correlations; strong, weak, and perfect negative correlations; and no correlation]

A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved. 

Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way. 

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; the participant can raise whatever topics they feel are relevant and discuss them in their own way. Follow-up questions are posed in response to the participant’s answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

Questionnaires also have practical advantages: they are cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods:
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. This method raises ethical problems of deception and lack of informed consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): the researcher does not have direct contact with the people being observed; participants’ behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because hardly any of the participants can score or complete the task: all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.
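A pilot analysis of this kind can be as simple as checking how scores pile up at the ends of the scale. The sketch below is purely illustrative; the 75% threshold and function name are arbitrary choices for the example:

```python
# Rough check for floor/ceiling effects in pilot-study scores: if most
# participants score at the minimum or maximum, the task may need redesigning.

def detect_pile_up(scores, max_score, threshold=0.75):
    at_floor = sum(s == 0 for s in scores) / len(scores)
    at_ceiling = sum(s == max_score for s in scores) / len(scores)
    if at_floor >= threshold:
        return "floor effect: task may be too hard"
    if at_ceiling >= threshold:
        return "ceiling effect: task may be too easy"
    return "no pile-up detected"

print(detect_pile_up([0, 0, 0, 1], max_score=10))     # floor effect
print(detect_pile_up([10, 10, 10, 9], max_score=10))  # ceiling effect
```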

Research Design

In cross-sectional research , a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, it is described as being reliable.

  • Test-retest reliability : assessing the same person on two different occasions, which shows the extent to which the test produces the same results.
  • Inter-observer reliability : the extent to which there is an agreement between two or more observers.
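Inter-observer reliability is often quantified as simple percentage agreement: the proportion of observations on which two observers record the same category. A minimal sketch (the category labels and coder data are invented):

```python
# Inter-observer reliability as percentage agreement between two observers
# who each coded the same five observations.

def percent_agreement(observer_a, observer_b):
    matches = sum(a == b for a, b in zip(observer_a, observer_b))
    return matches / len(observer_a)

coder_1 = ["aggressive", "passive", "aggressive", "aggressive", "passive"]
coder_2 = ["aggressive", "passive", "aggressive", "passive", "passive"]
print(percent_agreement(coder_1, coder_2))  # 0.8 -> observers agreed on 4 of 5
```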

Meta-Analysis

A meta-analysis is a form of systematic review that involves identifying an aim, searching for research studies that have addressed similar aims/hypotheses, and statistically combining their results.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the validity of the conclusions, as they are based on a wider range of evidence.

Weaknesses: Research designs in studies can vary, so they are not truly comparable.

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess the methods and design used, the originality and validity of the findings, and the article’s content, structure and language.

Feedback from the reviewers determines whether the article is accepted. The article may be: accepted as it is, accepted with revisions, sent back to the author to revise and re-submit, or rejected without the possibility of re-submission.

The editor makes the final decision on whether to accept or reject the research report based on the reviewers’ comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer reviews may be an ideal, whereas in practice there are lots of problems. For example, it slows publication down and may prevent unusual, new work being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online that give everyone a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g. reaction time or number of mistakes. It represents how much, how long, or how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : the extent to which a test appears to measure what it’s supposed to measure ‘on the face of it’. This is assessed by ‘eyeballing’ the measure or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we retain our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we use p < 0.05 (as it strikes a balance between making a type I and a type II error), but p < 0.01 is used in research where errors could cause harm, such as trials of a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
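The decision rule described above can be summarised in a few lines of Python; this is only a sketch of the logic, not a statistical test itself:

```python
# Compare a test's p-value with the chosen significance level (alpha).
# A smaller alpha (e.g. 0.01) makes a type I error less likely but a
# type II error more likely; p < 0.05 is the conventional balance.

def decide(p_value, alpha=0.05):
    if p_value < alpha:
        return "reject the null hypothesis (significant)"
    return "retain the null hypothesis (not significant)"

print(decide(0.03))        # significant at the conventional p < 0.05 level
print(decide(0.03, 0.01))  # not significant at the stricter p < 0.01 level
```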

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. However, giving full information may cause them to guess the aims of the study and change their behavior.
  • To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that the participants would understand.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Withdrawal can cause bias, as those who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though full confidentiality may not always be possible, as it is sometimes possible to work out who the participants were.

What Is Research? Types and Methods

What Is Research? Types and Methods was originally published on Forage .

Research is the process of examining a hypothesis to make discoveries. Practically every career involves research in one form or another. Accountants research their client’s history and financial documents to understand their financial situation, and data scientists perform research to inform data-driven decisions. 

In this guide, we’ll go over: 

  • Research definition
  • Types of research
  • Research methods
  • Careers in research
  • Showing research skills on resumes

Research Definition

Research is an investigation into a topic or idea to discover new information. There’s no all-encompassing definition for research because it’s an incredibly varied approach to finding discoveries. For example, research can be as simple as seeking to answer a question that already has a known answer, like reading an article to learn why the sky is blue. 

Research can also be much broader, seeking to answer questions that have never before been asked. For instance, a lot of research looks for ways to deepen our collective understanding of social, physical, and biological phenomena. Besides broadening humanity’s knowledge, research is a great tool for businesses and individuals to learn new things.

Why Does Research Matter?

While some research seeks to uncover ground-breaking information on its own, other research forms building blocks that allow for further development. For example, Tony Gilbert of the Masonic Medical Research Institute (MMRI) says that Dr. Gordon K. Moe, a co-founder and director of research at MMRI, led early studies of heart rhythms and arrhythmia.  

Gilbert notes that this research “allowed other scientists and innovators to develop inventions like the pacemaker and defibrillator (AED). So, while Dr. Moe did not invent the pacemaker or the AED, the basic research produced at the MMRI lab helped make these devices possible, and this potentially benefitted millions of people.”

Of course, not every researcher is hunting for medical innovations and cures for diseases. In fact, most companies, regardless of industry or purpose, use research every day.  

“Access to the latest information enables you to make informed decisions to help your business succeed,” says Andrew Pickett, trial attorney at Andrew Pickett Law, PLLC.

Scientific Research

Scientific research utilizes a systematic approach to test hypotheses. Researchers plan their investigation ahead of time, and peers test findings to ensure the analysis was performed accurately. 

Foundational research in sciences, often referred to as “basic science,” involves much of the research done at medical research organizations. Research done by the MMRI falls into this category, seeking to uncover “new information and insights for scientists and medical researchers around the world.”

Scientific research is a broad term; studies can be lab-based, clinical, quantitative, or qualitative. Studies can also switch between different settings and methods, like translational research. 

“Translational research moves research from lab-settings to the settings in which they will provide direct impact (for example, moving bench science to clinical settings),” says Laren Narapareddy, faculty member and researcher at Emory University.

Historical Research

Historical research involves studying past events to determine how they’ve affected the course of time, using historical data to explain or anticipate current and future events, and filling in gaps in history. Researchers can look at past socio-political events to hypothesize how similar events could pan out in the future.  

However, historical research can also focus on figuring out what actually happened at a moment in time, like reading diary entries to better understand life in England in the 14th century. 

In many ways, research by data, financial, and marketing analysts can be considered historical because these analysts look at past trends to predict future outcomes and make business decisions. 

User Research

User research is often applied in business and marketing to better understand a customer base. Researchers and analysts utilize surveys, interviews, and feedback channels to evaluate their clients’ and customers’ wants, needs, and motivations. Analysts may also apply user research techniques to see how customers respond to a product’s user experience (UX) design and test the efficacy of marketing campaigns. 

Market Research

Market research utilizes methods similar to user research but seeks to look at a customer base more broadly. Studies of markets take place at an intersection between economic trends and customer decision-making. 

Market research “allows you to stay up-to-date with industry trends and changes so that you can adjust your business strategies accordingly,” says Pickett. 

A primary goal in market research is finding competitive advantages over other businesses. Analysts working in market research may conduct surveys, focus groups, or historical analysis to predict how a demographic will act (and spend) in the future. 

Other Types of Research

The world of research is constantly expanding. New technologies bring new ways to ask and answer unique questions, creating the need for different types of research. Additionally, certain studies or questions may not be easily answered by one kind of research alone, and researchers can approach hypotheses from a variety of directions. So, more niche types of research seek to solve some of the more complex questions. 

For instance, “multidisciplinary research brings experts in different disciplines together to ask and answer questions at the intersection of their fields,” says Narapareddy.

Research doesn’t happen in a bubble, though. To foster better communication between researchers and the public, types of research exist that bring together both scientists and non-scientists. 

“Community-based participatory research is a really important and equitable model of research that involves partnerships among researchers, communities and organizations at all stages of the research process,” says Narapareddy.

Research Methods

Regardless of the type of research or the study’s primary goal, researchers usually use quantitative or qualitative methods. 

Qualitative Methods

Qualitative research focuses on descriptive information, such as people’s beliefs and emotional responses. Researchers often use focus groups, interviews, and surveys to gather qualitative data. 

This approach to research is popular in sociology, political science, psychology, anthropology, and software engineering. For instance, determining how a user feels about a website’s look isn’t easily put into numbers (quantitative data). So, when testing UX designs, software engineers rely on qualitative research. 

Quantitative Methods

Quantitative research methods focus on numerical data like statistics, units of time, or percentages. Researchers use quantitative methods to determine concrete things, like how many customers purchased a product. Analysts and researchers gather quantitative data using surveys, censuses, A/B tests, and random data sampling. 

Practically every industry or field uses quantitative methods. For example, a car manufacturer testing the effectiveness of new airbag technology looks for quantitative data on how often the airbags deploy properly. Additionally, marketing analysts look for increased sales numbers to see if a marketing campaign was successful. 
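For instance, the marketing example above boils down to comparing conversion rates between two variants of a campaign; the figures below are made up for illustration:

```python
# Quantitative comparison from a simple A/B test: which variant converted
# a larger share of visitors into purchases?

def conversion_rate(purchases, visitors):
    return purchases / visitors

rate_a = conversion_rate(120, 2000)  # variant A: 6.0% of visitors purchased
rate_b = conversion_rate(150, 2000)  # variant B: 7.5% of visitors purchased
winner = "B" if rate_b > rate_a else "A"
print(winner)  # B
```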

Mixed-Methods

Answering a question or testing a hypothesis may require a mixture of qualitative and quantitative methods. To see if your customers like your website, for instance, you’ll likely apply qualitative methods, like asking them how they feel about the site’s look and visual appeal, and quantitative methods, like seeing how many customers use the website daily. Research that involves qualitative and quantitative methods is called mixed-method research. 

Careers in Research

Researching ideas and hypotheses is a common task in many different careers. For example, working in sales requires understanding quantitative research methods to determine if certain actions improve sales numbers. Some research-intensive career paths include:

  • Data science
  • Investment banking
  • Product management
  • Civil rights law
  • Actuarial science  

Working in Research

Once you have the fundamentals of researching down, the subject matter may evolve or change over the course of your career. 

“My first research experience was assessing fall risk in firefighters — and I now use multi-omic methods [a type of molecular cell analysis] to understand fertility and reproductive health outcomes in women,” notes Narapareddy.

For those considering a career in research, it’s important to “take the time to explore different research methods and techniques to gain a better understanding of what works best for them,” says Pickett. 

Remember that research is exploratory by nature, so don’t be afraid to fail. 

“The work of scientists who came before us helps guide the path for future research, including both their hits and misses,” says Gilbert.

Showing Research Skills on Resumes

You can show off your research skills on your resume by listing specific research methods in your skills section. You can also call out specific instances you used research skills, and the impact your research had, in the description of past job or internship experiences. For example, you could talk about a time you researched competitors’ marketing strategies and used your findings to suggest a new campaign. 

Your cover letter is another great place to discuss your experience with research. Here, you can talk about large-scale research projects you completed during school or at previous jobs and explain how your research skills would help you in the job you’re applying for. If you have experience collecting and collating data from research surveys during college, for instance, that can translate into data analysis and organizational skills. 

What is Research Methodology? Definition, Types, and Examples

Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of the research. Several aspects must be considered before selecting an appropriate research methodology, such as research limitations and ethical concerns that may affect your research.

The research methodology section in a scientific paper describes the different methodological choices made, such as the data collection and analysis methods, and why these choices were selected. The reasons should explain why the methods chosen are the most appropriate to answer the research question. A good research methodology also helps ensure the reliability and validity of the research findings. There are three types of research methodology—quantitative, qualitative, and mixed-method, which can be chosen based on the research objectives.

What is research methodology ?

A research methodology describes the techniques and procedures used to identify and analyze information regarding a specific research topic. It is a process by which researchers design their study so that they can achieve their objectives using the selected research instruments. It includes all the important aspects of research, including research design, data collection methods, data analysis methods, and the overall framework within which the research is conducted. While these points can help you understand what is research methodology, you also need to know why it is important to pick the right methodology.

Why is research methodology important?

Having a good research methodology in place has the following advantages: 3

  • Helps other researchers who may want to replicate your research; the explanations will be of benefit to them.
  • You can easily answer any questions about your research if they arise at a later stage.
  • A research methodology provides a framework and guidelines for researchers to clearly define research questions, hypotheses, and objectives.
  • It helps researchers identify the most appropriate research design, sampling technique, and data collection and analysis methods.
  • A sound research methodology helps researchers ensure that their findings are valid and reliable and free from biases and errors.
  • It also helps ensure that ethical guidelines are followed while conducting research.
  • A good research methodology helps researchers in planning their research efficiently, by ensuring optimum usage of their time and resources.

Types of research methodology

There are three types of research methodology based on the type of research and the data required. 1

  • Quantitative research methodology focuses on measuring and testing numerical data. This approach is good for reaching a large number of people in a short amount of time. This type of research helps in testing the causal relationships between variables, making predictions, and generalizing results to wider populations.
  • Qualitative research methodology examines the opinions, behaviors, and experiences of people. It collects and analyzes words and textual data. This research methodology requires fewer participants but is more time-consuming because the time spent per participant is quite large. This method is used in exploratory research where the research problem being investigated is not clearly defined.
  • Mixed-method research methodology uses the characteristics of both quantitative and qualitative research methodologies in the same study. This method allows researchers to validate their findings, verify if the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other method.

What are the types of sampling designs in research methodology?

Sampling 4 is an important part of a research methodology and involves selecting a representative sample of the population to conduct the study, making statistical inferences about them, and estimating the characteristics of the whole population based on these inferences. There are two types of sampling designs in research methodology—probability and nonprobability.

  • Probability sampling

In this type of sampling design, a sample is chosen from a larger population using some form of random selection, that is, every member of the population has an equal chance of being selected. The different types of probability sampling are:

  • Systematic —sample members are chosen at regular intervals. It requires selecting a random starting point and a fixed sampling interval, which is then applied repeatedly across the population. Because the selection pattern is predefined, it is the least time-consuming probability method.
  • Stratified —researchers divide the population into smaller, non-overlapping groups (strata) that together represent the entire population. A sample is then drawn from each stratum separately.
  • Cluster —the population is divided into clusters based on demographic parameters like age, sex, or location, and a random selection of whole clusters is then included in the study.

  • Nonprobability sampling

In this type of sampling design, participants are selected through non-random means, so not every member of the population has a known chance of inclusion. The different types of nonprobability sampling are:

  • Convenience —selects participants who are most easily accessible to researchers due to geographical proximity, availability at a particular time, etc.
  • Purposive —participants are selected at the researcher’s discretion. Researchers consider the purpose of the study and their understanding of the target audience.
  • Snowball —already selected participants use their social networks to refer the researcher to other potential participants.
  • Quota —while designing the study, the researchers decide how many people with which characteristics to include as participants. The characteristics help in choosing people most likely to provide insights into the subject.
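As a small illustration, the selection logic behind simple random and systematic sampling can be sketched in Python using only the standard library; the population of 100 members and the sample size of 10 are hypothetical:

```python
import random

# Hypothetical population of 100 members, identified 1-100
population = list(range(1, 101))
sample_size = 10

# Simple random sampling: every member has an equal chance of selection
simple_sample = random.sample(population, k=sample_size)

# Systematic sampling: pick a random starting point, then select
# every k-th member at a fixed interval
interval = len(population) // sample_size  # sampling interval (here, 10)
start = random.randrange(interval)         # random starting point
systematic_sample = population[start::interval]
```

Stratified and cluster sampling follow the same basic pattern, applying random selection within each stratum or to whole clusters rather than to the population as a whole.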

What are data collection methods?

During research, data are collected using various methods depending on the research methodology being followed and the research methods being undertaken. Both qualitative and quantitative research have different data collection methods, as listed below.

Qualitative research 5

  • One-on-one interviews: Help interviewers understand a respondent’s subjective opinions and experiences pertaining to a specific topic or event
  • Document study/literature review/record keeping: Researchers review already existing written materials such as archives, annual reports, research articles, guidelines, policy documents, etc.
  • Focus groups: Structured discussions that usually include a small sample of about 6-10 people and a moderator, to understand the participants’ opinions on a given topic.
  • Qualitative observation: Researchers collect data using their five senses (sight, smell, touch, taste, and hearing).

Quantitative research 6

  • Sampling: The most common type is probability sampling.
  • Interviews: Commonly telephonic or done in-person.
  • Observations: Structured observations are most commonly used in quantitative research. In this method, researchers make observations about specific behaviors of individuals in a structured setting.
  • Document review: Reviewing existing research or documents to collect evidence for supporting the research.
  • Surveys and questionnaires: Can be administered both online and offline depending on the requirement and sample size.

Let Paperpal help you write the perfect research methods section. Start now!

What are data analysis methods?

The data collected using the various methods for qualitative and quantitative research need to be analyzed to generate meaningful conclusions. These data analysis methods 7 also differ between quantitative and qualitative research.

Quantitative research involves a deductive method for data analysis where hypotheses are developed at the beginning of the research and precise measurement is required. The methods include statistical analysis applications to analyze numerical data and are grouped into two categories—descriptive and inferential.

Descriptive analysis is used to summarize the basic features of the data and present them in a way that makes patterns meaningful. The different types of descriptive analysis methods are:

  • Measures of frequency (count, percent, frequency)
  • Measures of central tendency (mean, median, mode)
  • Measures of dispersion or variation (range, variance, standard deviation)
  • Measure of position (percentile ranks, quartile ranks)

Inferential analysis is used to make predictions about a larger population based on the analysis of data collected from a sample of that population. This analysis is used to study the relationships between different variables. Some commonly used inferential data analysis methods are:

  • Correlation: To understand the relationship between two or more variables.
  • Cross-tabulation: To analyze the relationship between multiple variables in a table format.
  • Regression analysis: To study the impact of independent variables on a dependent variable.
  • Frequency tables: To understand how often different values occur in the data.
  • Analysis of variance (ANOVA): To test whether the means of two or more groups differ significantly.
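As a minimal sketch, a frequency table and a simple cross-tabulation can be built with Python's collections.Counter; the survey responses below are hypothetical:

```python
from collections import Counter

# Hypothetical survey responses: (age_group, preferred_format) pairs
responses = [
    ("18-25", "online"), ("18-25", "online"), ("18-25", "print"),
    ("26-40", "online"), ("26-40", "print"), ("26-40", "print"),
]

# Frequency table for a single variable (preferred format)
format_freq = Counter(fmt for _, fmt in responses)

# Cross-tabulation: joint frequency of age group and preferred format
crosstab = Counter(responses)
```

The cross-tabulation counts how often each combination of the two variables occurs, which is the starting point for tests of association such as chi-square.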

Qualitative research involves an inductive method for data analysis where hypotheses are developed after data collection. The methods include:

  • Content analysis: For analyzing documented information from text and images by determining the presence of certain words or concepts in texts.
  • Narrative analysis: For analyzing content obtained from sources such as interviews, field observations, and surveys. The stories and opinions shared by people are used to answer research questions.
  • Discourse analysis: For analyzing interactions with people considering the social context, that is, the lifestyle and environment, under which the interaction occurs.
  • Grounded theory: Involves building a theory from the data as it is collected and analyzed, in order to explain why a phenomenon occurred.
  • Thematic analysis: To identify important themes or patterns in data and use these to address an issue.

How to choose a research methodology?

Here are some important factors to consider when choosing a research methodology: 8

  • Research objectives, aims, and questions —these would help structure the research design.
  • Review existing literature to identify any gaps in knowledge.
  • Check the statistical requirements —if data-driven or statistical results are needed, then quantitative research is the best fit. If the research questions can be answered based on people’s opinions and perceptions, then qualitative research is most suitable.
  • Sample size —sample size can often determine the feasibility of a research methodology. For a large sample, less effort- and time-intensive methods are appropriate.
  • Constraints —constraints of time, geography, and resources can help define the appropriate methodology.

Got writer’s block? Kickstart your research paper writing with Paperpal now!

How to write a research methodology?

A research methodology should include the following components: 3,9

  • Research design —should be selected based on the research question and the data required. Common research designs include experimental, quasi-experimental, correlational, descriptive, and exploratory.
  • Research method —this can be quantitative, qualitative, or mixed-method.
  • Reason for selecting a specific methodology —explain why this methodology is the most suitable to answer your research problem.
  • Research instruments —explain the research instruments you plan to use, mainly referring to the data collection methods such as interviews, surveys, etc. Here as well, a reason should be mentioned for selecting the particular instrument.
  • Sampling —this involves selecting a representative subset of the population being studied.
  • Data collection —involves gathering data using several data collection methods, such as surveys, interviews, etc.
  • Data analysis —describe the data analysis methods you will use once you’ve collected the data.
  • Research limitations —mention any limitations you foresee while conducting your research.
  • Validity and reliability —validity helps identify the accuracy and truthfulness of the findings; reliability refers to the consistency and stability of the results over time and across different conditions.
  • Ethical considerations —research should be conducted ethically. The considerations include obtaining consent from participants, maintaining confidentiality, and addressing conflicts of interest.

Streamline Your Research Paper Writing Process with Paperpal

The methods section is a critical part of a research paper; other researchers use it to understand your findings and replicate your work when pursuing their own research. However, it is usually also the most difficult section to write. This is where Paperpal can help you overcome writer’s block and create the first draft in minutes with Paperpal Copilot, its secure generative AI feature suite.

With Paperpal you can get research advice, write and refine your work, rephrase and verify the writing, and ensure submission readiness, all in one place. Here’s how you can use Paperpal to develop the first draft of your methods section.  

  • Generate an outline: Input some details about your research to instantly generate an outline for your methods section 
  • Develop the section: Use the outline and suggested sentence templates to expand your ideas and develop the first draft.  
  • Paraphrase and trim: Get clear, concise academic text with paraphrasing that conveys your work effectively and word reduction to fix redundancies.
  • Choose the right words: Enhance text by choosing contextual synonyms based on how the words have been used in previously published work.
  • Check and verify text: Make sure the generated text showcases your methods correctly, has all the right citations, and is original and authentic.

You can repeat this process to develop each section of your research manuscript, including the title, abstract and keywords. Ready to write your research papers faster, better, and without the stress? Sign up for Paperpal and start writing today!

Frequently Asked Questions

Q1. What are the key components of research methodology?

A1. A good research methodology has the following key components:

  • Research design
  • Data collection procedures
  • Data analysis methods
  • Ethical considerations

Q2. Why is ethical consideration important in research methodology?

A2. Ethical consideration is important in research methodology to assure readers of the reliability and validity of the study. Researchers must clearly mention the ethical norms and standards followed during the conduct of the research and also state whether the research has been approved by an institutional review board. The following 10 points are the important principles related to ethical considerations: 10

  • Participants should not be subjected to harm.
  • Respect for the dignity of participants should be prioritized.
  • Full consent should be obtained from participants before the study.
  • Participants’ privacy should be ensured.
  • Confidentiality of the research data should be ensured.
  • Anonymity of individuals and organizations participating in the research should be maintained.
  • The aims and objectives of the research should not be exaggerated.
  • Affiliations, sources of funding, and any possible conflicts of interest should be declared.
  • Communication in relation to the research should be honest and transparent.
  • Misleading information and biased representation of primary data findings should be avoided.

Q3. What is the difference between methodology and method?

A3. Research methodology is different from a research method, although both terms are often confused. Research methods are the tools used to gather data, while the research methodology provides a framework for how research is planned, conducted, and analyzed. The latter guides researchers in making decisions about the most appropriate methods for their research. Research methods refer to the specific techniques, procedures, and tools used by researchers to collect, analyze, and interpret data, for instance surveys, questionnaires, interviews, etc.

Research methodology is, thus, an integral part of a research study. It helps ensure that you stay on track to meet your research objectives and answer your research questions using the most appropriate data collection and analysis tools based on your research design.

Accelerate your research paper writing with Paperpal. Try for free now!

References

  1. Research methodologies. Pfeiffer Library website. Accessed August 15, 2023. https://library.tiffin.edu/researchmethodologies/whatareresearchmethodologies
  2. Types of research methodology. Eduvoice website. Accessed August 16, 2023. https://eduvoice.in/types-research-methodology/
  3. The basics of research methodology: A key to quality research. Voxco. Accessed August 16, 2023. https://www.voxco.com/blog/what-is-research-methodology/
  4. Sampling methods: Types with examples. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/types-of-sampling-for-social-research/
  5. What is qualitative research? Methods, types, approaches, examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-qualitative-research-methods-types-examples/
  6. What is quantitative research? Definition, methods, types, and examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-quantitative-research-types-and-examples/
  7. Data analysis in research: Types & methods. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/data-analysis-in-research/#Data_analysis_in_qualitative_research
  8. Factors to consider while choosing the right research methodology. PhD Monster website. Accessed August 17, 2023. https://www.phdmonster.com/factors-to-consider-while-choosing-the-right-research-methodology/
  9. What is research methodology? Research and writing guides. Accessed August 14, 2023. https://paperpile.com/g/what-is-research-methodology/
  10. Ethical considerations. Business research methodology website. Accessed August 17, 2023. https://research-methodology.net/research-methodology/ethical-considerations/

Paperpal is a comprehensive AI writing toolkit that helps students and researchers achieve 2x the writing in half the time. It leverages 21+ years of STM experience and insights from millions of research articles to provide in-depth academic writing, language editing, and submission readiness support to help you write better, faster.  

Get accurate academic translations, rewriting support, grammar checks, vocabulary suggestions, and generative AI assistance that delivers human precision at machine speed. Try for free or upgrade to Paperpal Prime starting at US$19 a month to access premium features, including consistency, plagiarism, and 30+ submission readiness checks to help you succeed.  

Experience the future of academic writing – Sign up to Paperpal and start writing for free!  



15 Types of Research Methods


Research methods refer to the strategies, tools, and techniques used to gather and analyze data in a structured way in order to answer a research question or investigate a hypothesis (Hammond & Wellington, 2020).

Generally, we place research methods into two categories: quantitative and qualitative. Each has its own strengths and weaknesses, which we can summarize as:

  • Quantitative research can achieve generalizability through scrupulous statistical analysis applied to large sample sizes.
  • Qualitative research achieves deep, detailed, and nuanced accounts of specific case studies, which are not generalizable.

Some researchers, with the aim of making the most of both quantitative and qualitative research, employ mixed methods, whereby they apply both types of research methods in one study, such as by conducting a statistical survey alongside in-depth interviews to add context to the quantitative findings.

Below, I’ll outline 15 common research methods, and include pros, cons, and examples of each.

Types of Research Methods

Research methods can be broadly categorized into two types: quantitative and qualitative.

  • Quantitative methods involve systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques (Schweigert, 2021). The strengths of this approach include its ability to produce reliable results that can be generalized to a larger population, although it can lack depth and detail.
  • Qualitative methods encompass techniques that are designed to provide a deep understanding of a complex issue, often in a specific context, through collection of non-numerical data (Tracy, 2019). This approach often provides rich, detailed insights but can be time-consuming and its findings may not be generalizable.

These can be further broken down into a range of specific research methods and designs.

Combining the two methods above, mixed methods research mixes elements of both qualitative and quantitative research methods, providing a comprehensive understanding of the research problem. We can further break these down into:

  • Sequential Explanatory Design (QUAN→QUAL): This methodology involves conducting quantitative analysis first, then supplementing it with a qualitative study.
  • Sequential Exploratory Design (QUAL→QUAN): This methodology goes in the other direction, starting with qualitative analysis and ending with quantitative analysis.

Let’s explore some methods and designs from both quantitative and qualitative traditions, starting with qualitative research methods.

Qualitative Research Methods

Qualitative research methods allow for the exploration of phenomena in their natural settings, providing detailed, descriptive responses and insights into individuals’ experiences and perceptions (Howitt, 2019).

These methods are useful when a detailed understanding of a phenomenon is sought.

1. Ethnographic Research

Ethnographic research emerged out of anthropological research, where anthropologists would enter into a setting for a sustained period of time, getting to know a cultural group and taking detailed observations.

Ethnographers would sometimes even act as participants in the group or culture, which many scholars argue is a weakness because it is a step away from achieving objectivity (Stokes & Wall, 2017).

In fact, in its most extreme form, ethnographers even conduct research on themselves, in a fascinating methodology called autoethnography.

The purpose is to understand the culture, social structure, and the behaviors of the group under study. It is often useful when researchers seek to understand shared cultural meanings and practices in their natural settings.

However, it can be time-consuming and may reflect researcher biases due to the immersion approach.

Example of Ethnography

Liquidated: An Ethnography of Wall Street  by Karen Ho involves an anthropologist who embeds herself with Wall Street firms to study the culture of Wall Street bankers and how this culture affects the broader economy and world.

2. Phenomenological Research

Phenomenological research is a qualitative method focused on the study of individual experiences from the participant’s perspective (Tracy, 2019).

It focuses specifically on people’s experiences in relation to a specific social phenomenon.

This method is valuable when the goal is to understand how individuals perceive, experience, and make meaning of particular phenomena. However, because it is subjective and dependent on participants’ self-reports, findings may not be generalizable, and are highly reliant on self-reported ‘thoughts and feelings’.

Example of Phenomenological Research

A phenomenological approach to experiences with technology  by Sebnem Cilesiz represents a good starting-point for formulating a phenomenological study. With its focus on the ‘essence of experience’, this piece presents methodological, reliability, validity, and data analysis techniques that phenomenologists use to explain how people experience technology in their everyday lives.

3. Historical Research

Historical research is a qualitative method involving the examination of past events to draw conclusions about the present or make predictions about the future (Stokes & Wall, 2017).

As you might expect, it’s common in the research branches of history departments in universities.

This approach is useful in studies that seek to understand the past to interpret present events or trends. However, it relies heavily on the availability and reliability of source materials, which may be limited.

Common data sources include cultural artifacts from both material and non-material culture, which are then examined, compared, contrasted, and contextualized to test hypotheses and generate theories.

Example of Historical Research

A historical research example might be a study examining the evolution of gender roles over the last century. This research might involve the analysis of historical newspapers, advertisements, letters, and company documents, as well as sociocultural contexts.

4. Content Analysis

Content analysis is a research method that involves systematic and objective coding and interpretation of text or media to identify patterns, themes, ideologies, or biases (Schweigert, 2021).

A content analysis is useful in analyzing communication patterns, helping to reveal how texts such as newspapers, movies, films, political speeches, and other types of ‘content’ contain narratives and biases.

However, interpretations can be very subjective, which often requires scholars to engage in practices such as cross-comparing their coding with peers or external researchers.

Content analysis can be further broken down into other specific methodologies such as semiotic analysis, multimodal analysis, and discourse analysis.

Example of Content Analysis

How is Islam Portrayed in Western Media? by Poorebrahim and Zarei (2013) employs a type of content analysis called critical discourse analysis (common in poststructuralist and critical theory research). The study combs through a corpus of western media texts to explore the language forms used in relation to Islam and Muslims, finding that they are overly stereotyped, which may represent anti-Islam bias or a failure to understand the Islamic world.

5. Grounded Theory Research

Grounded theory involves developing a theory  during and after  data collection rather than beforehand.

This is in contrast to most academic research studies, which start with a hypothesis or theory and then test it through a study, often framed as a null hypothesis (no effect) tested against an alternative hypothesis (the effect predicted by the theory).

Grounded Theory is useful because it keeps an open mind about what the data might reveal. However, it can be time-consuming and requires rigorous data analysis (Tracy, 2019).

Grounded Theory Example

Developing a Leadership Identity by Komives et al. (2005) employs a grounded theory approach to develop a thesis based on the data rather than testing a hypothesis. The researchers studied the leadership identity of 13 college students taking on leadership roles. Based on their interviews, the researchers theorized that the students’ leadership identities shifted from a hierarchical view of leadership to one that embraced leadership as a collaborative concept.

6. Action Research

Action research is an approach which aims to solve real-world problems and bring about change within a setting. The study is designed to solve a specific problem – or in other words, to take action (Patten, 2017).

This approach can involve mixed methods, but is generally qualitative because it usually involves the study of a specific case study wherein the researcher works, e.g. a teacher studying their own classroom practice to seek ways they can improve.

Action research is very common in fields like education and nursing where practitioners identify areas for improvement then implement a study in order to find paths forward.

Action Research Example

Using Digital Sandbox Gaming to Improve Creativity Within Boys’ Writing   by Ellison and Drew was a research study one of my research students completed in his own classroom under my supervision. He implemented a digital game-based approach to literacy teaching with boys and interviewed his students to see if the use of games as stimuli for storytelling helped draw them into the learning experience.

7. Natural Observational Research

Observational research can also be quantitative (see: experimental research), but in naturalistic settings for the social sciences, researchers tend to employ qualitative data collection methods like interviews and field notes to observe people in their day-to-day environments.

This approach involves the observation and detailed recording of behaviors in their natural settings (Howitt, 2019). It can provide rich, in-depth information, but the researcher’s presence might influence behavior.

While observational research has some overlaps with ethnography (especially in regard to data collection techniques), it tends not to be as sustained as ethnography, e.g. a researcher might do 5 observations, every second Monday, as opposed to being embedded in an environment.

Observational Research Example

A researcher might use qualitative observational research to study the behaviors and interactions of children at a playground. The researcher would document the behaviors observed, such as the types of games played, levels of cooperation, and instances of conflict.

8. Case Study Research

Case study research is a qualitative method that involves a deep and thorough investigation of a single individual, group, or event in order to explore facets of that phenomenon that cannot be captured using other methods (Stokes & Wall, 2017).

Case study research is especially valuable in providing contextualized insights into specific issues, facilitating the application of abstract theories to real-world situations (Patten, 2017).

However, findings from a case study may not be generalizable due to the specific context and the limited number of cases studied (Walliman, 2021).


Example of a Case Study

Scholars conduct a detailed exploration of the implementation of a new teaching method within a classroom setting. The study focuses on how the teacher and students adapt to the new method, the challenges encountered, and the outcomes on student performance and engagement. While the study provides specific and detailed insights of the teaching method in that classroom, it cannot be generalized to other classrooms, as statistical significance has not been established through this qualitative approach.

Quantitative Research Methods

Quantitative research methods involve the systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques (Pajo, 2022). The focus is on gathering numerical data and generalizing it across groups of people or to explain a particular phenomenon.

9. Experimental Research

Experimental research is a quantitative method where researchers manipulate one variable to determine its effect on another (Walliman, 2021).

This is common, for example, in high-school science labs, where students are asked to introduce a variable into a setting in order to examine its effect.

This type of research is useful in situations where researchers want to determine causal relationships between variables. However, experimental conditions may not reflect real-world conditions.

Example of Experimental Research

A researcher may conduct an experiment to determine the effects of a new educational approach on student learning outcomes. Students would be randomly assigned to either the control group (traditional teaching method) or the experimental group (new educational approach).
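The random assignment step described above can be sketched in a few lines of Python; the roster of 20 student IDs is hypothetical:

```python
import random

# Hypothetical roster of 20 student IDs
students = [f"S{i:02d}" for i in range(1, 21)]

random.shuffle(students)                  # randomize the order of the roster
midpoint = len(students) // 2
control_group = students[:midpoint]       # traditional teaching method
experimental_group = students[midpoint:]  # new educational approach
```

Shuffling before splitting gives each student an equal chance of ending up in either group, which is what distinguishes a true experiment from a quasi-experiment.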

10. Surveys and Questionnaires

Surveys and questionnaires are quantitative methods that involve asking research participants structured and predefined questions to collect data about their attitudes, beliefs, behaviors, or characteristics (Patten, 2017).

Surveys are beneficial for collecting data from large samples, but they depend heavily on the honesty and accuracy of respondents.

They tend to be seen as more authoritative than their qualitative counterparts, semi-structured interviews, because the data is quantifiable (e.g. a questionnaire where responses are given on a scale from 1 to 10 allows researchers to determine and compare statistical means and variations across sub-populations in the study).

Example of a Survey Study

A company might use a survey to gather data about employee job satisfaction across its offices worldwide. Employees would be asked to rate various aspects of their job satisfaction on a Likert scale. While this method provides a broad overview, it may lack the depth of understanding possible with other methods (Stokes & Wall, 2017).
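The kind of between-office comparison described in this example can be sketched as follows; the offices and ratings are invented for illustration:

```python
from statistics import mean

# Hypothetical satisfaction ratings (1-10 Likert-style scale) by office
ratings = {
    "London": [7, 8, 6, 9, 7],
    "Tokyo": [5, 6, 6, 7, 5],
}

# Mean satisfaction per office, for comparison across sub-populations
office_means = {office: mean(scores) for office, scores in ratings.items()}
```

Because the responses are numeric, the means of the sub-populations can be compared directly, which is exactly the advantage over unstructured qualitative responses noted above.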

11. Longitudinal Studies

Longitudinal studies involve repeated observations of the same variables over extended periods (Howitt, 2019). These studies are valuable for tracking development and change but can be costly and time-consuming.

With multiple data points collected over extended periods, it’s possible to examine continuous changes within things like population dynamics or consumer behavior. This makes a detailed analysis of change possible.

[Figure: in a longitudinal study, data are collected from the same sample over time, so researchers can examine how variables change over time.]

Perhaps the most relatable example of a longitudinal study is a national census, which is taken on the same day every few years, to gather comparative demographic data that can show how a nation is changing over time.

While longitudinal studies are commonly quantitative, there are also instances of qualitative ones as well, such as the famous 7 Up study from the UK, which studies 14 individuals every 7 years to explore their development over their lives.

Example of a Longitudinal Study

A national census, taken every few years, uses surveys to develop longitudinal data, which is then compared and analyzed to present accurate trends over time. Trends a census can reveal include changes in religiosity, values and attitudes on social issues, and much more.

12. Cross-Sectional Studies

Cross-sectional studies are a quantitative research method that involves analyzing data from a population at a specific point in time (Patten, 2017). They provide a snapshot of a situation but cannot determine causality.

This design is used to measure and compare the prevalence of certain characteristics or outcomes in different groups within the sampled population.

[Figure: in a cross-sectional study, data are collected from a sample at a single point in time, allowing comparison of groups within that sample.]

The major advantage of cross-sectional design is its ability to measure a wide range of variables simultaneously without needing to follow up with participants over time.

However, cross-sectional studies do have limitations. This design can show associations or correlations between variables, but it cannot establish cause-and-effect relationships, temporal sequence, or changes and trends over time.

Example of a Cross-Sectional Study

Our longitudinal study example of a national census also happens to contain cross-sectional design. One census is cross-sectional, displaying only data from one point in time. But when a census is taken once every few years, it becomes longitudinal, and so long as the data collection technique remains unchanged, identification of changes will be achievable, adding another time dimension on top of a basic cross-sectional study.

13. Correlational Research

Correlational research is a quantitative method that seeks to determine if and to what degree a relationship exists between two or more quantifiable variables (Schweigert, 2021).

This approach provides a fast and easy way to form initial hypotheses based on the positive or negative correlation trends that can be observed within a dataset.

While correlational research can reveal relationships between variables, it cannot establish causality.

Methods used for data analysis may include statistical correlations such as Pearson’s product-moment correlation or Spearman’s rank correlation.

Example of Correlational Research

A team of researchers is interested in studying the relationship between the amount of time students spend studying and their academic performance. They gather data from a high school, measuring the number of hours each student studies per week and their grade point averages (GPAs) at the end of the semester. Upon analyzing the data, they find a positive correlation, suggesting that students who spend more time studying tend to have higher GPAs.
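To make the example concrete, here is a minimal sketch of how such a correlation could be computed in Python. The study-hours and GPA numbers are made up for illustration, not real data, and the rank-based Spearman helper assumes no tied values:

```python
import numpy as np

def pearson(x, y):
    # Pearson's product-moment correlation coefficient
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    # Spearman's rho is Pearson's r computed on the ranks
    # (this simple ranking assumes no ties in the data)
    def rank(a):
        return np.argsort(np.argsort(a))
    return pearson(rank(x), rank(y))

# Hypothetical data: weekly study hours and end-of-semester GPAs
hours = [2, 5, 1, 8, 6, 3]
gpa = [2.8, 3.4, 2.5, 3.9, 3.6, 3.0]
```

Running `pearson(hours, gpa)` on this toy data yields a strong positive correlation, matching the scenario described above: more study time is associated with higher GPAs, though correlation alone cannot show that studying *causes* the higher grades.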

14. Quasi-Experimental Design Research

Quasi-experimental design research is a quantitative research method that is similar to experimental design but lacks the element of random assignment to treatment or control.

Instead, quasi-experimental designs typically rely on certain other methods to control for extraneous variables.

The term ‘quasi-experimental’ implies that the experiment resembles a true experiment, but it is not exactly the same because it doesn’t meet all the criteria for a ‘true’ experiment, specifically in terms of control and random assignment.

Quasi-experimental design is useful when researchers want to study a causal hypothesis or relationship, but practical or ethical considerations prevent them from manipulating variables and randomly assigning participants to conditions.

Example of Quasi-Experimental Design

A researcher wants to study the impact of a new math tutoring program on student performance. However, ethical and practical constraints prevent random assignment to the “tutoring” and “no tutoring” groups. Instead, the researcher compares students who chose to receive tutoring (experimental group) to similar students who did not choose to receive tutoring (control group), controlling for other variables like grade level and previous math performance.

Related: Examples and Types of Random Assignment in Research

15. Meta-Analysis Research

Meta-analysis statistically combines the results of multiple studies on a specific topic to yield a more precise estimate of the effect size. It is often considered the gold standard of secondary research.

Meta-analysis is particularly useful when there are numerous studies on a topic, and there is a need to integrate the findings to draw more reliable conclusions.

Some meta-analyses can identify flaws or gaps in a corpus of research, which can make them highly influential in academic research despite their lack of primary data collection.

However, they tend only to be feasible when there is a sizable corpus of high-quality and reliable studies into a phenomenon.

Example of a Meta-Analysis

The power of feedback revisited (Wisniewski, Zierer & Hattie, 2020) is a meta-analysis that examines 435 empirical studies on the effects of feedback on student learning. The authors use a random-effects model to ascertain whether there is a clear effect size across the literature, finding that feedback tends to impact cognitive and motor skill outcomes but has less of an effect on motivational and behavioral outcomes.
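To illustrate the statistical machinery behind a random-effects meta-analysis, here is a minimal sketch of the widely used DerSimonian-Laird estimator (one common way to fit a random-effects model, not necessarily the exact procedure used in the study above; the effect sizes and variances are hypothetical):

```python
import numpy as np

def random_effects_estimate(effects, variances):
    """DerSimonian-Laird random-effects pooled effect size.

    effects: per-study effect size estimates
    variances: per-study sampling variances
    Returns (pooled effect, standard error).
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                       # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # estimated between-study variance
    w_star = 1.0 / (variances + tau2)         # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se
```

The key idea is that a random-effects model adds an estimated between-study variance (tau-squared) to each study's own variance, so that heterogeneous studies are down-weighted less aggressively than under a fixed-effect model.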

Choosing a research method requires a lot of consideration regarding what you want to achieve, your research paradigm, and the methodology that is most valuable for what you are studying. There are multiple types of research methods, many of which I haven’t been able to present here. Generally, it’s recommended that you work with an experienced researcher or research supervisor to identify a suitable research method for your study at hand.

Hammond, M., & Wellington, J. (2020). Research methods: The key concepts . New York: Routledge.

Howitt, D. (2019). Introduction to qualitative research methods in psychology . London: Pearson UK.

Pajo, B. (2022). Introduction to research methods: A hands-on approach . New York: Sage Publications.

Patten, M. L. (2017). Understanding research methods: An overview of the essentials . New York: Sage Publications.

Schweigert, W. A. (2021). Research methods in psychology: A handbook . Los Angeles: Waveland Press.

Stokes, P., & Wall, T. (2017). Research methods . New York: Bloomsbury Publishing.

Tracy, S. J. (2019). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact . London: John Wiley & Sons.

Walliman, N. (2021). Research methods: The basics. London: Routledge.


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.



Types of Research – Methods Explained with Examples

In the ever-evolving world of academia and professional inquiry, understanding the various types of research is crucial for anyone looking to delve into a new study or project. Research, a systematic investigation aimed at discovering and interpreting facts, plays a pivotal role in expanding our knowledge across various fields.

From qualitative research , known for its in-depth analysis of non-numerical data, to quantitative research , which focuses on numerical data and statistical approaches, the spectrum of research types is broad and diverse. We also explore descriptive research , which aims to accurately and systematically describe a population, situation, or phenomenon, and analytical research, which goes a step further to understand the ‘why’ and ‘how’ of a subject.

What is Research?

Research is the process of studying a subject in detail to discover new information or understand it better. This can be anything from studying plants or animals, to learning how people think and behave, to finding new ways to cure diseases. People do research by asking questions, collecting information, and then looking at that information to find answers or learn new things.

Types of Research at a Glance

This table provides a quick reference to understand the key aspects of each research type.

Types of Research by Methodology

1. Qualitative Research

Qualitative research is a methodological approach primarily used in fields like social sciences, anthropology, and psychology. It’s aimed at understanding human behavior and the motivations behind it. Unlike quantitative research that focuses on numbers and statistics, qualitative research delves into the nature of phenomena through detailed, in-depth exploration. Here’s a more detailed explanation:

Definition and Approach: Qualitative research focuses on understanding human behavior and the reasons that govern such behavior. It involves in-depth analysis of non-numerical data like texts, videos, or audio recordings.

Key Features:

  • Emphasis on exploring complex phenomena
  • Involves interviews, focus groups , and observations
  • Generates rich, detailed data that are often subjective

Applications: Widely used in social sciences, marketing, and user experience research.

2. Quantitative Research

Quantitative research is a systematic approach used in various scientific fields to quantify data and generalize findings from a sample to a larger population. This type of research is fundamentally different from qualitative research in several ways:

Definition and Approach: Quantitative research is centered around quantifying data and generalizing results from a sample to the population of interest. It involves statistical analysis and numerical data .

  • Relies on structured data collection instruments
  • Large sample sizes for generalizability
  • Statistical methods to establish relationships between variables

Applications: Common in natural sciences, economics, and market research.

3. Descriptive Research

Definition and Approach: This research type aims to accurately describe characteristics of a particular phenomenon or population.

  • Provides detailed insights without explaining why or how something happens
  • Involves surveys and observations
  • Often used as a preliminary research method

Applications: Used in demographic studies, census, and organizational reporting.

4. Analytical Research

Definition and Approach: Analytical research goes beyond description to understand the underlying reasons or causes.

  • Involves comparing data and facts to make evaluations
  • Critical thinking is a key component
  • Often hypothesis-driven

Applications: Useful in scientific research, policy analysis, and business strategy.

5. Applied Research

Definition and Approach: Applied research focuses on finding solutions to practical problems.

  • Direct practical application
  • Often collaborative , involving stakeholders
  • Results are immediately applicable

Applications: Used in healthcare, engineering, and technology development.

6. Fundamental Research

Definition and Approach: Also known as basic or pure research, it aims to expand knowledge without a direct application in mind.

  • Theoretical framework
  • Focus on understanding fundamental principles
  • Long-term in nature

Applications: Foundational in fields like physics, mathematics, and social sciences.

7. Exploratory Research

Definition and Approach: This type of research is conducted for a problem that has not been clearly defined.

  • Flexible and unstructured
  • Used to identify potential hypotheses
  • Relies on secondary research like reviewing available literature

Applications: Often the first step in social science research and product development.

8. Conclusive Research

Definition and Approach: Conclusive research is designed to provide information that is useful in decision-making.

  • Structured and methodical
  • Aims to test hypotheses
  • Involves experiments, surveys, and testing

Applications: Used in market research, clinical trials, and policy evaluations.

Difference between Qualitative And Quantitative Research

Here is a detailed comparison between qualitative and quantitative research:

Understanding the different types of research is crucial for anyone embarking on a research project. Each type has its unique approach, methodology, and application area, making it essential to choose the right type for your specific research question or problem. This guide serves as a starting point for researchers to explore and select the most suitable research method for their needs, ensuring effective and reliable outcomes.

Types of Research – FAQs

What are the 4 main types of research?

There are four main types of quantitative research: descriptive, correlational, causal-comparative/quasi-experimental, and experimental research. Experimental research attempts to establish cause-and-effect relationships among variables; causal-comparative and quasi-experimental designs are very similar to true experiments, but with some key differences.

What are the types of research PDF?

Applied research, basic research, correlational research, descriptive research, ethnographic research, experimental research, and exploratory research.

What are the main purposes of research?

The primary purposes of basic research (as opposed to applied research) are  documentation, discovery, interpretation, and the research and development (R&D) of methods and systems for the advancement of human knowledge .




  • Open access
  • Published: 18 April 2024

A method for identifying different types of university research teams

  • Zhe Cheng   ORCID: orcid.org/0009-0002-5120-6124 1 ,
  • Yihuan Zou 1 &
  • Yueyang Zheng   ORCID: orcid.org/0000-0001-7751-2619 2  

Humanities and Social Sciences Communications volume  11 , Article number:  523 ( 2024 ) Cite this article


Identifying research teams constitutes a fundamental step in team science research, and universities harbor diverse types of such teams. This study introduces a method and proposes algorithms for team identification, encompassing the project-based research team (Pbrt), the individual-based research team (Ibrt), the backbone-based research group (Bbrg), and the representative research group (Rrg), scrutinizing aspects such as project, contribution, collaboration, and similarity. Drawing on two top universities in Materials Science and Engineering as case studies, this research reveals that university research teams predominantly manifest as backbone-based research groups. The distribution of members within these groups adheres to Price’s Law, indicating a concentration of research funding among a minority of research groups. Furthermore, the representative research groups in universities exhibit interdisciplinary characteristics. Notably, significant differences exist in collaboration mode and member structures among high-level backbone-based research groups across diverse cultural backgrounds.


Introduction

Team science has emerged as a burgeoning field of inquiry, attracting the attention of numerous scholars (e.g., Stokols et al., 2008 ; Bozeman & Youtie, 2018 ; Coles et al., 2022 ; Deng et al., 2022 ; Forscher et al., 2023 ), who endeavor to explore and summarize strategies for fostering effective research teams. Conducting team science research would help improve team efficacy. The National Institutes of Health in the USA pointed out that team science is a new interdisciplinary field that empirically examines the processes by which scientific teams, research centers, and institutes, both large and small, are structured (National Research Council, 2015 ). In accordance with this conceptualization, research teams can be delineated into various types based on their size and organizational form. Existing research also takes diverse teams as focal points when probing issues such as team construction and team performance. For example, Wu et al. ( 2019 ) and Abramo et al. ( 2017 ) regard the co-authors of a single paper as a team, discussing issues of research team innovation and benefits. Meanwhile, Zhao et al. ( 2014 ) and Lungeanu et al. ( 2014 ) consider the project members as a research team, exploring issues such as internal interest distribution and team performance. Boardman and Ponomariov ( 2014 ), Lee et al. ( 2008 ), and Okamoto and Centers for Population Health and Health Disparities Evaluation Working Group ( 2015 ) view the university’s research center as a research group, investigating themes about member collaboration, management, and knowledge management portals.

Regarding the definition of research teams, some researchers believe that a research team is a collection of people who work together to achieve a common goal and discover new phenomena through research by sharing information, resources, and professional expertise (Liu et al., 2020 ). Conversely, others argue that groups operating across distinct temporal and spatial contexts, such as virtual teams, do not meet the criteria for teams, as they engage solely in collaborative activities between teams. According to this perspective, research teams should be individuals collaborating over an extended period (typically exceeding six months) (Barjak & Robinson, 2008 ). Contemporary discourse on team science tends to embrace a broad conceptualization wherein research teams include both small-scale teams comprising 2–10 individuals and larger groups consisting of more than 10 members (National Research Council, 2015 ). These research teams are typically formed to conduct a project or finish research papers, while research groups are formed to solve complex problems, drawing members from diverse departments or geographical locations.

Obviously, different research inquiries are linked to different types of research teams. Micro-level investigations, such as those probing the impact of international collaboration on citations, often regard co-authors of research papers as research teams. Conversely, meso-level inquiries, including those exploring factors impacting team organization and management, often view center-based researchers as research groups. Although various approaches can be adopted to identify research teams, such as retrieving names from research centers’ websites or obtaining lists of project-funded members, when the study involves a large sample size and requires more data to measure the performance of research teams, it becomes necessary to use bibliometric methods for team identification.

Existing literature on team identification uses social network analysis (Zhang et al., 2019 ), cohesive subgroup (Dino et al., 2020 ), faction algorithm (Imran et al., 2018 ), FP algorithm (Liao, 2018 ), etc. However, these identification methods often target a singular type of research team or fail to categorize the identified research teams. Moreover, existing studies mostly explore the evolution of specific disciplines (Wang et al., 2017 ), with limited attention devoted to identifying university research teams and the influencing factors of team effectiveness. Therefore, this study tries to develop algorithms to identify diverse university research teams, drawing insights from two universities characterized by different cultural backgrounds. It aims to address two research questions:

How can we identify different types of university research teams?

What are the characteristics of research groups within universities?

Literature review

Why is it necessary to identify research teams? Existing research on scientific research teams mostly begins by identifying team members through the names listed on funding projects or institutions’ websites, and then proceeds through questionnaires or interviews. However, this methodology may compromise research validity for several reasons. Firstly, the mere inclusion of individuals on funding project lists does not guarantee genuine research team membership or substantive collaboration among members. Secondly, institutional websites generally announce only the principal research team members, potentially overlooking auxiliary personnel or important members from external institutions. Thirdly, reliance solely on lists of research team members fails to capture nuanced information about the team, such as members’ research ability or communication intensity, thus hindering the exploration of team science-related issues.

Consequently, researchers have turned to co-authorship and citation to identify research teams using established software tools and customized algorithms. For example, Li and Tan ( 2012 ) applied UCINET and social network analysis to identify university research teams, while Hu et al. ( 2019 ) used Citespace to analyze research communities of four disciplines in China, the UK, and the US. Similarly, some researchers also identify the members and leaders of research teams by using and optimizing existing algorithms. For example, Liao ( 2018 ) applied the Fast-Unfolding algorithm to identify research teams in the field of solar cells, while Yu et al. ( 2020 ) and Li et al. ( 2017 ) employed the Louvain community discovery algorithm to identify research teams in artificial intelligence. Lv et al. ( 2016 ) applied the FP-GROWTH algorithm to identify core R&D teams. Yu et al. ( 2018 ) used the faction algorithm to identify research teams in intelligence. Dino et al. ( 2020 ) developed the CL-leader algorithm to confirm research teams and their leaders. Boyack and Klavans ( 2014 ) regard researchers engaged in the same research topic as research teams based on citation information. Notably, these community detection algorithms complement each other, offering versatile tools for identifying research teams.

Despite the utility of these identification methods, they are not without limitations. For example, fixed software algorithms are constrained by predefined rules, posing challenges for researchers seeking to customize identification criteria. Moreover, although customized algorithms based on computer programming languages achieve high accuracy, they overemphasize the connections between members and do not consider the definition of research teams. In addition, research based on co-authorship networks and community identification algorithms faces inherent problems: (1) Ensuring temporal consistency in co-authorship networks is challenging due to variations in publication timelines, potentially undermining the temporal alignment of team member collaborations; (2) The lack of stability in team identification results means that different identification standards produce different outcomes; (3) These algorithms assign each member to only one research team, but in practice researchers often participate in multiple research teams with different identities, or the same members conduct research in different team combinations.

In summary, research teams in a specific field can be identified using co-authorship information, designing or introducing identification algorithms. However, achieving more accurate identification necessitates consideration of the nuanced definition of research teams. Therefore, this study focuses on university research teams, addressing temporal and spatial collaboration issues among team members by incorporating project information and first-author information. Furthermore, it tackles the issue of classifying research team members by introducing Price’s Law and Everett’s Rule. Additionally, it tackles the issue of team members’ multiple affiliations through the Jaccard Similarity Coefficient and the Louvain Algorithm. Ultimately, this study aims to achieve the classification recognition of university research teams.
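As a minimal sketch of the similarity measure mentioned above, the Jaccard Similarity Coefficient between two teams is the size of their shared membership divided by the size of their combined membership (the member names below are hypothetical):

```python
def jaccard(team_a, team_b):
    """Jaccard similarity between two sets of team members:
    |A intersect B| / |A union B|, in [0, 1]."""
    a, b = set(team_a), set(team_b)
    if not a and not b:
        return 0.0  # two empty teams: define similarity as 0
    return len(a & b) / len(a | b)

# Two hypothetical teams sharing one of three distinct members
similarity = jaccard({"Li", "Wang"}, {"Wang", "Chen"})
```

Teams whose coefficient exceeds a chosen threshold would be candidates for merging into a single research group; a community-detection step such as the Louvain algorithm can then partition the resulting similarity network.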

Team identification method

An effective team identification method requires both consideration of the definition of research teams and the ability to transform this definition into operable programming languages. University research teams, by definition, comprise researchers collaborating towards a shared objective. As a typical form of the output of a research team, the co-authorship of a scientific research paper implies information exchange and interaction among team members. Thus, this study uses co-authorship relationships within papers to reflect the collaborative relationships among research team members. In this section, novel algorithms for identifying research teams are proposed to address deficiencies observed in prior research.

Classification of research team members

A researcher might be part of multiple research teams, with varying roles within each. Members of the research team can be categorized according to how the research team is defined.

The original idea of team member classification

The prevailing notion of teams underscores the collaborative efforts between individual team members and their contributions toward achieving research objectives. This study similarly classifies team members based on these dual dimensions.

In terms of overall contributions, members who make substantial contributions are typically seen as pivotal figures within the research team, providing the primary impetus for the team’s productivity. Conversely, those with lesser input only contribute to specific facets of the team’s goals and engage in limited research activities, thus being regarded as standard team members.

In terms of collaboration, it is essential to recognize that high levels of contribution do not inherently denote a core position within a team. The collaboration among team members serves as an important indicator of their identity characteristics within the research team. Based on the collaboration between members, this study holds that researchers who have high contributions and collaborate with many high-contribution team members are regarded as the core members of the research team. Conversely, members who have high contributions but collaborate with only a limited number of high-contribution team members are identified as backbone members. Similarly, members displaying low levels of contribution but collaborating widely with high contributors are categorized as ordinary members. Finally, those with low contributions and limited collaboration with high-contributing team members are regarded as marginal members of the research team.

Establishment of team member classification criteria

This study introduces Price’s Law and Everett’s Rule to realize the idea of team member classification.

In terms of overall contribution, the well-known bibliometrician Price, drawing from Lotka’s Law, deduced that the publication threshold for prolific scientists is 0.749 times the square root of the number of papers published by the most prolific scientist in a group. Existing research has also used this law when analyzing the prolific authors of an organization. This study regards prolific authors who meet Price’s threshold as important members who contribute more to the research team.
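Price's threshold can be sketched in a few lines of Python (the publication counts below are hypothetical):

```python
import math

def prolific_threshold(max_papers):
    """Price's Law threshold: authors publishing at least
    0.749 * sqrt(N_max) papers, where N_max is the output of the
    most prolific author in the group, count as prolific."""
    return 0.749 * math.sqrt(max_papers)

# Hypothetical publication counts for four researchers
counts = {"A": 25, "B": 9, "C": 4, "D": 1}
m = prolific_threshold(max(counts.values()))  # 0.749 * sqrt(25) = 3.745
prolific = [name for name, c in counts.items() if c >= m]
```

Here researchers A, B, and C clear the threshold of 3.745 papers, while D does not; only the cleared authors would be considered as candidate important members of the team.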

In terms of collaboration, existing research mostly employs the concept of factions. Factions refer to a relationship where members reciprocate and cannot readily join new groups without altering the reciprocal nature of their factional ties. However, in real-world settings, relationships with overtly reciprocal characteristics are uncommon. Therefore, to ensure the applicability and stability of the faction, Seidman and Foster ( 1978 ) proposed the concept of K-plex, pointing out that in a group of size n, when the number of direct connections of any point in the group is not less than n-k, this group is called k-plex. For k-plex, as the number k increases, the stability of the entire faction will decrease. Addressing this concern, renowned sociologist Martin Everett ( 2002 ), based on the empirical rule of research, proposed specific values for k and corresponding minimum group sizes, stipulating that the overall team size should not fall below 2k-1 (Scott, 2017 ). The expression is:

n ≥ 2k − 1, i.e., k ≤ (n + 1)/2

In other words, for a K-plex, the most acceptable definition to qualify as a faction is when each member of the team is directly connected to at least ( n  − 1)/2 members of the team. Applied to research teams, this empirical guideline necessitates that team members maintain collaborative ties with at least half or more of the team.
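The k-plex condition described above can be sketched as a simple check over a co-authorship edge list (the members and edges below are hypothetical):

```python
def is_k_plex(members, edges, k):
    """Seidman & Foster's k-plex: in a group of size n, every member
    must be directly connected to at least n - k other members."""
    n = len(members)
    degree = {m: 0 for m in members}
    for u, v in edges:  # undirected co-authorship ties
        if u in degree and v in degree:
            degree[u] += 1
            degree[v] += 1
    return all(d >= n - k for d in degree.values())

# Hypothetical 4-member group connected in a cycle: A-B-C-D-A
members = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
```

With n = 4, Everett's rule allows k up to (n + 1)/2 = 2.5, so k = 2 is acceptable: each member has 2 ties, which meets the n − k = 2 requirement. For k = 1 (a clique), every member would need 3 ties, which this cycle does not satisfy.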

Based on Price’s Law and Everett’s Empirical Rule, this study gives the criteria for distinguishing prolific authors, core members, backbone members, ordinary members, and marginal members of research teams. The specifics are shown in the following Table 1 .

Classification of research teams

Within universities, a diverse array of research teams exists, categorized by their scale, the characteristics of funded projects, and the platforms they rely upon. This study proposes the identification algorithms for project-based teams, individual-based teams, backbone-based groups, and representative groups.

Project-based research teams: identification based on research projects

Traditional methods for identifying research teams attribute co-authorship to collaboration among multiple authors without considering the time scope. However, in practice, collaborations vary in content and duration. Therefore, in the identification process, it is necessary to introduce appropriate standards to distinguish varying degrees of collaboration and content among scholars.

Research projects serve as evidence of researchers engaging in the same research topic, thereby indicating that the paper’s authors belong to the same research team. Upon formal acceptance of a research paper, authors typically append funding information to the paper. Therefore, papers sharing the same funding information can be aggregated into paper clusters to identify the research team members who completed the funded project. The specific steps proposed for identifying a team from a single research project fund are as follows.

First, extract the funding number and treat all papers carrying the same funding number as a paper cluster. Second, construct a co-authorship network from the paper cluster. Third, identify the research team using the team member classification criteria.
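The first two steps can be sketched as follows; the paper-record keys (`authors`, `funding`) are hypothetical stand-ins for whatever fields the bibliographic export provides:

```python
from collections import defaultdict
from itertools import combinations

def cluster_by_funding(papers):
    """Group papers that share a funding number into paper clusters.
    A paper listing several grants joins the cluster of each one."""
    clusters = defaultdict(list)
    for paper in papers:
        for grant in paper.get("funding", []):
            clusters[grant].append(paper)
    return dict(clusters)

def coauthorship_edges(cluster):
    """Build weighted co-authorship edges from a paper cluster: each
    pair of co-authors on a paper adds 1 to that pair's edge weight."""
    weights = defaultdict(int)
    for paper in cluster:
        for a, b in combinations(sorted(set(paper["authors"])), 2):
            weights[(a, b)] += 1
    return dict(weights)
```

The resulting weighted edges feed into the member-classification step (e.g. via centrality measures on the network).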

Individual-based research teams: team identification based on the first author

For research papers lacking project numbers, clustering can be performed based on the contribution and research experience of the authors, since each co-author contributes differently to a paper's content. In 2014, the Consortia Advancing Standards in Research Administration Information (CASRAI) proposed the CRediT classification of paper contributions, comprising 14 roles: conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, software, supervision, validation, visualization, writing (original draft), and writing (review and editing).

In this study, the first author of a paper lacking project funding is considered the initiator of the research, while the other authors are regarded as contributors who advance and finalize it. For papers not affiliated with any project, the first author and all of their published papers form a paper group for team identification. The procedure entails the following steps: first, gather the first author and all papers authored by them within the identification period to constitute a paper group; second, construct a co-authorship network from the papers within the group; finally, identify the research team using the criteria for classifying team members.
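The grouping step for unfunded papers can be sketched in the same record format as above (the `authors` and `funding` keys are hypothetical field names; the first list entry is taken as the first author):

```python
from collections import defaultdict

def cluster_by_first_author(papers):
    """Group papers without funding numbers into one paper group per
    first author; funded papers are handled by the project-based route."""
    clusters = defaultdict(list)
    for paper in papers:
        if not paper.get("funding"):  # no project number attached
            clusters[paper["authors"][0]].append(paper)
    return dict(clusters)
```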

Backbone-based research group: merging based on project-based and individual-based research teams

Research teams can be identified either by a single project number or by individual researchers. Upon identification, it becomes evident that many research teams share similar members: a research team may engage in multiple projects, and some members collaborate without funding support. While these identification algorithms are suitable for evaluating the quality of a research article or a funded project, they may not suffice when assessing a research group as a whole or the key factors affecting its performance. To address this, highly similar individual-based or project-based research teams must be merged according to specific criteria. The merged unit is termed a group, as it encompasses multiple project-based and individual-based research teams.

In the pursuit of building world-class universities, governments worldwide often emphasize the necessity of fostering research teams led by discipline backbones. In this vein, this study further develops a backbone-based research group identification algorithm, which considers project-based and individual-based research teams.

Identification of university discipline backbone members

Previous studies have summarized the characteristics of university discipline backbones, revealing that these individuals often excel in indicators such as degree centrality, eigenvector centrality, and betweenness centrality. Each centrality indicator is strongly positively correlated with an author's output volume, indicating that highly productive researchers with more collaborators are more likely to be discipline backbones. Based on these characteristics, Price's law is applied: discipline backbone members are defined as researchers whose publication count exceeds 0.749 times the square root of the highest publication count within the discipline.
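Under this definition, backbone identification reduces to a simple threshold. A minimal sketch (the author-to-count dictionary is our own representation):

```python
import math

def discipline_backbones(pub_counts):
    """Price's-law threshold: keep authors whose publication count
    exceeds 0.749 * sqrt(max publication count) in the discipline."""
    threshold = 0.749 * math.sqrt(max(pub_counts.values()))
    return {author for n_author in () for author in ()} or \
           {author for author, n in pub_counts.items() if n > threshold}
```

For instance, with a top producer at 100 papers the threshold is 7.49, so an author with 8 papers qualifies while one with 5 does not.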

Team identification with discipline backbone members as the core

Following the identification of discipline backbones, this study consolidates paper groups wherein the discipline backbone serves as the core member of either individual-based or project-based research teams. Subsequently, backbone-based research groups are formed.

Merging based on similarity perspective

It should be noted that different discipline backbones may simultaneously participate as core members in the same individual-based or project-based research teams. Consequently, distinct backbone-based research groups may encompass duplicate project-based and individual-based research teams, necessitating the merging of backbone-based research groups.

To address this redundancy issue, this study introduces the concept of similarity from community identification, where existing algorithms often decide whether to incorporate members into a community based on their level of similarity. Among the various algorithms for calculating similarity, the Jaccard coefficient is deemed to possess superior validity and robustness for merging nodes within network communities (Wang et al., 2020). Its calculation formula is as follows:

J(N_i, N_j) = |N_i ∩ N_j| / |N_i ∪ N_j|

Here N_i denotes the nodes in subset i and N_j the nodes in subset j; N_i ∩ N_j is the set of nodes present in both subsets, and N_i ∪ N_j is the set of all nodes in subsets i and j. Existing research shows that when the Jaccard coefficient equals or exceeds 0.5 (Guo et al., 2022), the community identification algorithm achieves optimal precision.

In the context of this study, N i represents the core and backbone members of research group i , while N j denotes the core and backbone members of research group j . If these two groups exhibit significant overlap in core and backbone members, the papers from both research groups are merged into a new set of papers to identify the research team.

Given the efficacy of the Jaccard similarity measure in identifying community networks and merging, this study employs this principle to merge backbone-based research groups. Specifically, groups are merged if the Jaccard similarity coefficient between their core and backbone members equals or exceeds 0.5. Subsequently, new research groups are formed based on the merged set of papers.
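The coefficient and the 0.5-threshold merging rule can be sketched as follows; the greedy pairwise loop is a simplification of the study's procedure, not its exact algorithm:

```python
def jaccard(a, b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two member sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def merge_groups(groups, threshold=0.5):
    """Repeatedly merge any two member sets whose Jaccard coefficient
    meets the threshold, until no further merge is possible."""
    merged = [set(g) for g in groups]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if jaccard(merged[i], merged[j]) >= threshold:
                    merged[i] |= merged.pop(j)  # absorb group j into i
                    changed = True
                    break
            if changed:
                break
    return merged
```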

It is important to note that during the merging process, certain research teams within a backbone-based group may be used multiple times. The first merging step is based on the core and backbone members of the backbone-based research groups, following the Jaccard coefficient criterion. However, because project-based or individual-based research teams within a backbone-based research group may be reused, the research papers of different groups can end up highly similar; the study therefore further tested the paper-level duplication of the merged groups. It was found that papers within groups often overlap because they are associated with multiple funding projects. A principle of "if connected, then merged" was therefore adopted among groups with highly similar research papers, ensuring the heterogeneity of papers within the final merged research groups.

The generation process of the backbone-based research groups is illustrated in Fig. 1 below. Initially, university discipline backbones α, β, γ, θ, δ, and ε are each designated as core members of the project-based or individual-based research teams A, B, C, D, E, and F. The Jaccard coefficients between the core and backbone members of αβγ, γθ, θδ, and δε meet the merging criterion, generating links between them. After this first merging, the Jaccard coefficients of the papers of αβγ, γθ, θδ, and δε are calculated, and links are generated between γθ and θδ, and between θδ and δε, because of their highly duplicated papers. Finally, applying the "if connected, then merged" rule, αβγ and γθδε are retained.

figure 1

The α, β, γ, θ, δ, and ε are core members within project-based or individual-based research teams. The A, B, C, D, E, and F are project-based or individual-based research teams. From step 1 to step 2, research groups are merged according to the Jaccard coefficient between research team members. From step 2 to step 3, research groups are merged according to the Jaccard coefficient between research group papers.

In summary, the process of identifying a backbone-based research group involves the following steps: (1) identify prolific authors within the university's discipline by analyzing all papers published in the field, treating them as the discipline's backbone members; (2) merge the project-based and individual-based research teams in which university discipline backbones are core members, thereby forming backbone-based research groups; (3) merge the backbone-based research groups identified in step (2) based on the Jaccard coefficient between their core and backbone members; (4) calculate the Jaccard coefficient of the papers of the groups merged in step (3), merge the groups with significant paper overlap, and generate the final backbone-based research groups.

The research groups identified through the above steps offer two advantages: Firstly, they integrate similar project-based and individual-based research teams, avoiding redundancy in team identification outcomes. Secondly, the same member may participate in different research teams, assuming distinct roles within each, thus better reflecting the complexity of scientific research practices.

Representative team: consolidation via backbone-based research group

When universities introduce their research groups to external parties, they typically highlight the most significant research members within the institution. Although the backbone-based research group has condensed the project-based and individual-based research teams, there may still be some overlap among members from different backbone-based research groups.

In order to create condensed and representative research groups that accurately reflect the development of the university's discipline, this study extracts the core and backbone members identified in the backbone-based research groups. It then identifies the representative groups using the Louvain algorithm (Blondel et al., 2008), which is widely employed in research group identification. This algorithm integrates important members from different backbone-based research groups while ensuring there is no redundancy among group members. The merging process is shown in Fig. 2.

figure 2

Each pass is made of two phases: one where modularity is optimized by allowing only local changes of communities, and one where the communities found are aggregated in order to build a new network of communities. The passes are repeated iteratively until no increase in modularity is possible.
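As a hedged illustration of the two-phase passes just described, the networkx implementation of the Louvain method recovers communities in a small co-authorship-style network; the toy graph below is ours, not data from the study:

```python
import networkx as nx

# Toy network: two tightly knit author cliques joined by one bridge edge.
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),   # clique 1
    ("x", "y"), ("y", "z"), ("x", "z"),   # clique 2
    ("c", "x"),                           # bridge between the cliques
])

# Louvain alternates local modularity optimization with community
# aggregation until modularity stops increasing.
communities = nx.community.louvain_communities(G, seed=42)
```

On this graph the algorithm separates the two cliques into distinct communities despite the bridge edge.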

Research team identification process and its pros and cons

Overall, the method proposed in this research for identifying university research teams encompasses four stages. First, project-based research teams are identified from papers supported by funding projects, based on the information provided with the papers. Second, individual-based research teams are identified from papers without project support. Third, the prolific authors of the universities are identified and used to combine individual-based and project-based research teams into backbone-based research groups. Finally, representative research groups are established using the Louvain algorithm and the interrelations among members of the backbone-based research groups. The entire process is depicted in Fig. 3 below.

figure 3

Different university research teams are identified at different stages.

Each type of research team or group has its advantages and disadvantages, as shown in Table 2 below.

Validation of identification results

In order to verify the accuracy of the identification results, the method proposed by Boyack and Klavans ( 2014 ), which relies on citation analysis, is utilized. This method calculates the level of consistency regarding the main research areas of the core and backbone members, thereby verifying the validity of the identification method.

In the SCIVAL database, all research papers are clustered into relevant topic groups, providing insights into the research area of individual authors. By examining the research topic clusters of team papers in the SCIVAL database, the predominant research areas of prolific authors can be determined. Authors sharing common research areas within a university are regarded as constituting a research team. Given that authors often conduct research in various research areas, this study focuses solely on the top three research areas for each author.

As demonstrated in Table 3 below, the top three research areas of the research team's prolific authors A, B, C, D, and E collectively span five distinct fields. By calculating the highest consistency value among these research areas, one can judge whether these researchers belong to the same research group. As depicted in Table 3, the main research areas of all prolific authors include Research Area 3, indicating that this field is among the three most important research areas for every prolific author. This consistency validates that the main research areas of the five authors align, affirming their classification within the same research team.
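This consistency check can be sketched as follows (the mapping of authors to their top-three areas is a hypothetical input structure):

```python
from collections import Counter

def max_consistency(top_areas):
    """Share of authors whose top-three research areas include the
    single most common area across the team; 1.0 means every author
    lists that area among their top three."""
    counts = Counter(area for areas in top_areas.values()
                     for area in set(areas))
    return counts.most_common(1)[0][1] / len(top_areas)
```

In the Table 3 example, every author's top three areas include Research Area 3, so the consistency value is 1.0.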

Data collection and preprocessing

In order to present the distinct characteristics of various types of research teams as intuitively as possible, this study focuses on the field of materials science, with Tsinghua University and Nanyang Technological University selected for analysis. The selection of these two institutions is driven by several considerations: (1) both universities boast exceptional performance in materials science on a global scale, having ranked within the top 10 worldwide for many years; (2) the scientific research systems of the countries in which these universities are situated differ significantly: China's system operates under a government-led funding model, whereas Singapore's involves multi-party funding from the government, enterprises, and society. By examining universities from these distinct research cultures, this study aims to validate the proposed methods and highlight disparities in the characteristics of their research teams; (3) materials science is inherently interdisciplinary, with contributions from researchers across various domains. Although the selected papers focus on materials science, they may also intersect with other disciplines, so the research teams studied here can, to some extent, represent interdisciplinary research teams.

The data utilized in this study is sourced from the Clarivate Analytics database, which categorizes scientific research papers according to its subject classification catalogs. To ensure the consistency and reliability of paper identification, this study focuses on the papers published in materials science by the two selected universities between 2017 and 2021. Additionally, considering the duration of funded projects, papers associated with projects appearing in 2017–2021 but published within the wider window of 2011–2022 are also included for analysis, to enhance the precision of identification. To ensure the affiliation of a research team with the respective universities, this study exclusively considers papers whose first author or corresponding author is affiliated with the university.

Throughout this process, attention must be paid to the author-name problem in identifying researchers: abbreviations, name orders, and other name-related variations must be cleaned and verified. Given that this study exports data using authors' full names and restricts it to specific universities and disciplines, the cleaning targets the identification discrepancies arising from a minority of abbreviations and similar names. The specific cleaning procedures are as follows.

First, all occurrences of "-" are removed, and names are standardized to upper case. Second, the Python dedupe module is employed to mitigate ambiguity in author names, facilitating the differentiation or unification of authors sharing the same surname, given name, or initials; all personnel names of each university in the discipline are listed and inspected in ascending order. Third, names and abbreviations are compared in reverse order, alongside their affiliations, and replaced in the identification data. For example, names such as "LONG, W.H", "LONG, WEN, HUI" and "LONG, WENHUI" are uniformly replaced with "LONG, WENHUI." Fourth, similar names in both abbreviated and full forms are identified and compared, and their consistency is confirmed by scrutinizing affiliations and collaborators. Names exhibiting consistency are replaced accordingly, while those lacking uniformity remain unchanged. For example, "LI, W.D" and "LI, WEIDE", lacking common affiliations and collaborators, are not considered the same person and remain distinct.
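Steps one and three can be sketched as a small normalization helper. The `canonical` dictionary is a hypothetical stand-in for the manually verified variant table, and collapsing comma- or space-separated given names into one token is our simplifying assumption:

```python
def normalise_name(name, canonical=None):
    """Sketch of the cleaning steps: remove hyphens, upper-case, and
    collapse the given-name part into a single token; `canonical`
    (a hypothetical variant table) then maps abbreviations to one
    agreed full form."""
    cleaned = name.replace("-", "").upper()
    surname, _, given = cleaned.partition(",")
    given = "".join(given.replace(",", " ").split())
    result = f"{surname.strip()}, {given}" if given else surname.strip()
    if canonical:
        result = canonical.get(result, result)
    return result
```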

The publication of the two universities in the field of Materials Science and Engineering across two distinct time periods is shown in Table 4 below.

Based on the publication count of papers authored by the first author or corresponding author from both universities, Tsinghua University demonstrates a significantly higher publication output than Nanyang Technological University, indicating a substantial disparity between the two institutions.

Subsequent to data preprocessing, this study uses Python to implement algorithms in accordance with the proposed principles, thereby identifying the research teams and groups.

This study has identified several research teams through the sorting and analysis of original data. In order to provide a comprehensive overview of the identification results, this study begins by outlining the characteristics of the identification results and then analyzes the research teams affiliated with both universities, focusing on three aspects: scale, structure, and output.

Identification results of university research teams

The results reveal that both Tsinghua University and Nanyang Technological University have a considerable number of project-based research teams (Pbrts), indicating that most researchers at both universities have received funding support. A small number of teams have not received funding support, although their overall proportion is relatively low. The backbone-based research groups (Bbrgs) encompass the majority of the individual-based research teams (Ibrts) and Pbrts, underscoring the significant influence of the discipline backbone members within both universities. Notably, the total count of representative research groups (Rrgs) across the two universities stands at 39, reflecting that many research groups support the construction of the materials discipline at the two universities (Table 5).

In order to validate the accuracy of the developed method, this study verifies the effectiveness of the identification algorithm. Given that the method emphasizes the main research area of its members, it is appropriate to apply it to the verification of the Bbrgs, which encompass the majority of the individual-based and project-based teams.

The analysis reveals that the consistency level of the most concentrated research area within the identified Bbrgs is 0.93. This signifies that, in a Bbrg comprising 10 core or backbone members, on average more than 9 share the same main research area. Moreover, across Bbrgs of varying sizes, the average consistency level of the most concentrated research area also reaches 0.90, indicating that the algorithm proposed in this study is valid (Table 6).

Analysis of the characteristics of Bbrg in universities

The findings of the analysis show that the Bbrgs encompass the vast majority of Pbrts and Ibrts within universities. Consequently, this study further analyzes the scale, structure, and output of the Bbrgs to present the characteristics of university research teams.

Group scale

Upon scrutinizing the distribution of Bbrgs across the two universities, the numbers of core members are found to be similar. Bbrgs with 6–10 core members are the most prevalent, followed by those with 0–5; there are also Bbrgs comprising 11–15 members, while Bbrgs of 15 or more members are relatively rare. On average, a Bbrg has 7.08 core members. Tsinghua University has more Bbrgs than Nanyang Technological University, but a relatively smaller average number of core members. Notably, the proportion of core and backbone members amounts to nearly 12%, ranging from 11.22% to 13.88% (Table 7).

Group structure

The structural attributes of the research groups can be assessed through the network density among core members, among core and backbone members, and among all team members. Additionally, the departmental distribution can be depicted from the identified core members and their organizational affiliations. The formula for network density is as follows:

D = 2R / (N(N − 1))

Note : R is the number of relationships, and N is the number of members.
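Assuming the standard undirected-network form D = 2R / (N(N − 1)), the computation is straightforward:

```python
def network_density(num_relationships, num_members):
    """Density of an undirected network: D = 2R / (N(N - 1)),
    where R is the number of relationships among N members."""
    if num_members < 2:
        return 0.0  # density is undefined for fewer than two members
    return 2 * num_relationships / (num_members * (num_members - 1))
```

For example, three members with all three possible ties yield a density of 1.0, while three members with two ties yield about 0.67.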

Overall, the network density characteristics are consistent across both universities. The network density among research group members tends to decrease as the group size expands: density among core members is the highest, while that among all members is the lowest. Comparatively, the average values of the various network densities at Tsinghua University are lower than those at Nanyang Technological University, indicating a lesser degree of connectivity among members within Tsinghua University's research groups. However, the network densities among core members, and among core and backbone members, remain relatively high at both institutions. Notably, the network density of the backbone-based research groups exceeds 0.5, indicating close collaboration among the core and backbone members of these university research groups (Table 8).

The t-test analysis reveals no significant difference in the network density among core members between Tsinghua University and Nanyang Technological University, suggesting that core members of research groups at universities with high-level disciplines maintain close communication. However, for the network density among core and backbone members, and among all members, the average values of Tsinghua University's research groups are significantly lower than those of Nanyang Technological University. This implies less direct collaboration among prolific authors at Tsinghua University, with backbone members relying more on the different core members of the group to carry out research.

To present the cooperative relationship among the core and backbone members of the Bbrgs, the prolific authors associated with the backbone-based research groups are extracted. Subsequently, the representative research groups affiliated with Nanyang Technological University and Tsinghua University are identified using the fast-unfolding algorithm. The resultant collaboration network diagram among prolific authors is depicted in Fig. 4 , wherein each node color corresponds to different representative research groups of the respective universities.

figure 4

Nodes (author) and links (relation between different authors) with the same color could be seen as the same representative research group.

The network connection diagram of Nanyang Technological University illustrates the presence of 39 Rrgs, including Rrgs from the School of Materials Science and Engineering and the Singapore Centre for 3D Printing. Owing to the inherently interdisciplinary characteristics of the materials discipline, its research groups are not only distributed in the School of Materials Science and Engineering; other academic units also have research groups engaged in materials science research.

Further insights into the distribution of research groups can be gleaned by examining the departments to which the primary members belong. Counting the departmental affiliations of the member with the highest centrality in each representative team reveals that, among the 39 Rrgs, the School of Materials Science and Engineering and the College of Engineering boast the highest number of affiliations, with nine core members of the research groups coming from these two departments. Following closely is the School of Physical and Mathematical Sciences. Notably, entities external to the university, such as the National Institute of Education and the Singapore Institute of Manufacturing Technology, also host important representative groups, underscoring the interdisciplinary nature of materials science. The distribution of Rrg affiliations is delineated in Table 9.

Similar to Nanyang Technological University, Tsinghua University also exhibits tightly woven connections within its backbone-based research group in Materials Science and Engineering, comprising a total of 39 Rrgs. Compared with Nanyang Technological University, Tsinghua University boasts a larger cohort of core and backbone members. The collaboration network diagram of representative groups is shown below (Fig. 5 ).

figure 5

Similar to Nanyang Technological University, representative research groups at Tsinghua University are distributed in different schools within the institution, with the School of Materials being the directly related department. In addition, the School of Medicine and the Center for Brain-like Computing also conduct research related to materials science (Table 10 ).

By summarizing the departmental affiliations of the research groups, it becomes evident that the Rrgs in Materials Science and Engineering at these universities span various academic departments, reflecting the interdisciplinary characteristics of the field. The network density of the research groups is also calculated, with Nanyang Technological University exhibiting a higher density (0.028) compared to Tsinghua University (0.022), indicating tighter connections within the representative research groups at Nanyang Technological University.

Group output

In order to control for the impact of scale, this study compares several metrics of the Bbrgs at these two top universities: total publications, publications per capita of core and backbone members, the output of the most prolific author within each group, field-weighted citation impact, and citations per publication.

Regarding publications, the averages and t-test results show that Tsinghua University significantly outperforms Nanyang Technological University, suggesting that the Bbrgs and prolific authors affiliated with Tsinghua University are more productive in terms of research output.

However, in terms of field-weighted citation impact and citations per publication of the Bbrgs, the averages and t-test results show that Tsinghua University is significantly lower than Nanyang Technological University, indicating that the research papers originating from the Bbrgs at Nanyang Technological University have greater academic influence (see Table 11).

Typical cases

To intuitively present the research groups identified, this study has selected the two Bbrgs with the highest number of published papers at Tsinghua University and Nanyang Technological University for analysis, aiming to offer insights for constructing research teams.

Basic Information of the Bbrgs

Examining the basic information of the Bbrgs reveals that although Kang Feiyu's group at Tsinghua University comprises fewer researchers than Liu Zheng's group at Nanyang Technological University, Kang Feiyu's group has a higher total number of published papers. To measure the performance of these two Bbrgs, the field-weighted citation impact of their research papers was queried using SCIVAL. The results show that the field-weighted citation impact of Kang Feiyu's group at Tsinghua University is higher, indicating greater influence in the field of Materials Science and Engineering. Furthermore, comparing the positions of the two group leaders shows that Kang Feiyu, in addition to being a professor at Tsinghua University, serves as dean of the Shenzhen Graduate School of Tsinghua University, while Liu Zheng serves as chairman of the Singapore Materials Society alongside his role as a professor (see Table 12).

Characteristics of team member network structure

In order to reflect the collaboration characteristics of research groups, this study calculates the network density of the two groups and utilizes VOSviewer to present the collaboration network diagrams of their members.

In terms of network density, both groups exhibit a density of 1 among core members, indicating that the collaboration between core members is tight. However, regarding the network density of core and backbone members, as well as all members, Liu Zheng’s group at Nanyang Technological University demonstrates a higher density. This indicates a stronger interconnectedness between the backbone and other members within the group (refer to Table 13 ).

In the co-authorship network diagrams of group members, the two Bbrgs display distinct characteristics. In Kang Feiyu's group, the core members are prominent, with evident sub-team structures under each core member (Fig. 6). Conversely, while Liu Zheng's group also features several core members, the centrality of each member is less pronounced (Fig. 7).

figure 6

Nodes (author) and links (relation between different authors) with the same color could be seen as the same sub-team.

figure 7

Discussion and conclusion

Distinguishing different research teams constitutes the foundational stage of team science research. In this study, we employ Price's law, Everett's rule, the Jaccard similarity coefficient, and the Louvain algorithm to identify different research teams and groups at two world-leading universities in Materials Science and Engineering, and to explore the characteristics of these teams. The main findings are discussed as follows.

First, based on the co-authorship and project data from scholarly articles, this study develops a methodology for identifying research teams that distinguishes between different types of research teams and groups. In contrast to prior identification methods, our algorithms can identify different types of research teams and classify members within them, affording greater clarity regarding the timing and content of collaboration among team members. The validation of the identification results, conducted using the methodology proposed by Boyack and Klavans (2014), demonstrates the consistency of the main research areas among identified research group members, confirming the accuracy and efficacy of the research team identification methodology proposed in this study.

Second, universities have different types of research teams and groups, encompassing both project-based research teams and individual-based research teams lacking project support; most research teams rely on projects to conduct research (Bloch & Sørensen, 2015). Concurrently, this research finds that university research groups predominantly coalesce around eminent scholars, with backbone-based research groups comprising the majority of both project-based and individual-based research teams. This phenomenon reflects the concentration of research resources within a select few research groups and institutions, a pattern previously highlighted by Mongeon et al. (2016), who pointed out that research funding tends to be concentrated among a minority of researchers. We not only corroborate this assertion but also observe that well-funded researchers collaborate to form research groups, thereby supporting one another. In addition, judging from the structures of the research groups at Nanyang Technological University and Tsinghua University, these institutions resemble what might be termed a “rich club” (Ma et al., 2015). However, despite the heightened productivity of the relatively concentrated research groups at Tsinghua University, their academic influence pales in comparison with that of Nanyang Technological University. To enhance research influence, funding agencies might curtail allocations to these “rich” research groups and instead allocate resources to support more financially constrained research teams, alleviating the trend of concentration in research project funding, as suggested by Aagaard et al. (2020).

Third, research groups in Materials Science and Engineering exhibit clear interdisciplinary characteristics. Although all of the research papers are classified under the Materials Science and Engineering discipline, the distribution of research groups across academic departments suggests a pervasive interdisciplinary nature. This underscores the interconnectedness of Materials Science and Engineering with other disciplines and shows that members from diverse departments within high-caliber universities actively collaborate. Previous research in the United Kingdom revealed that interdisciplinary researchers from the arts and humanities, biology, economics, engineering and physics, medicine, environmental sciences, and astronomy occupy a pivotal position in academic collaboration and can obtain more funding (Sun et al., 2021). This research reaches similar conclusions for Materials Science and Engineering.

Fourth, the personnel structure of university research groups adheres to Price's Law: prolific authors constitute a small fraction of group members, with approximately 20% of individuals contributing 80% of the work. Backbone-based research groups, which comprise the majority of project-based and individual-based research teams in universities, typically exhibit a core-and-backbone member ratio of approximately 10%–15%, in line with Price's Law. Peterson (2018) likewise noted that Price's Law appears almost universally in creative work. Scientific research relies heavily on innovative thinking and collaboration among researchers, and this study confirms the phenomenon within university research groups. Systematic research activities require many participants, yet few of them provide the pivotal intellectual contributions. In practice, principal researchers such as professors and associate professors often exhibit higher levels of innovation and stability, while graduate students and external support staff tend to be more transient, engaging mainly in foundational research tasks.
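The concentration pattern described above can be checked mechanically on a contribution distribution. The sketch below uses invented per-member publication counts (not study data) and the common square-root formulation of Price's Law; the "20% produce 80%" figure in the text is the related Pareto-style reading.

```python
# Illustrative sketch only: checking a Price's-Law-style concentration of
# output. Price's Law predicts that roughly the square root of the number of
# contributors produces about half of the total output. Counts are invented.
import math

paper_counts = [30, 22, 15, 4, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]  # per member

n = len(paper_counts)            # 16 members
top_k = round(math.sqrt(n))      # sqrt(16) = 4 "core" members
total = sum(paper_counts)
top_share = sum(sorted(paper_counts, reverse=True)[:top_k]) / total

print(f"top {top_k} of {n} members produced {top_share:.0%} of the output")
```

Here 4 of 16 members (25%) account for roughly 82% of the output, the kind of skew the paragraph above describes.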

Fifth, regarding the research group with the highest publication count at each of the two universities, Tsinghua University has more core members, highlighting a research model centered on a single scholar, while Nanyang Technological University exhibits a more dispersed distribution of researchers. This discrepancy may be attributed to differences between the two universities' systems. In China, major scientific research often unfolds under the leadership of authoritative scholars who typically hold multiple administrative roles, producing hierarchical centralization within the group. This structure aligns with Merton's sociology of science (1973), which posits that scientists in higher positions enjoy higher status in the hierarchy, facilitating funding acquisition and research impact. Conversely, Singapore's research system more closely resembles those of developed countries such as the UK and the US, fostering a more democratic culture in which communication among members is more open; such a relatively flat team culture is conducive to high-level research outcomes (Xu et al., 2022). However, in terms of the field-weighted citation impact of its papers, the Chinese backbone-based research group outperforms in both publication volume and academic influence, suggesting that this organizational form suits the Chinese context and is conducive to research with stronger academic influence.

The research teams and groups at these two top universities offer insights for constructing science teams. First, universities should prioritize individual-based research teams to enhance the academic influence of their research. Second, intra-university research teams should foster collaboration across departments to promote interdisciplinary research and advance the discipline. Third, emphasis should be placed on supporting core and backbone members, who often generate innovative ideas and contribute most to the academic community. Fourth, research teams should cultivate a research atmosphere suited to their cultural background, whether centralized or democratic, to harness researchers' strengths effectively.

This research proposes a method for identifying university research teams and analyzing the characteristics of such teams at the top two universities. In the future, further exploration into the role of different team members and the development of more effective research team construction strategies are warranted.

Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request. The data about the information of research papers authored by the two universities and the identification results of the members of university research teams are shared.

Aagaard K, Kladakis A, Nielsen MW (2020) Concentration or dispersal of research funding? Quant Sci Stud 1(1):117–149. https://doi.org/10.1162/qss_a_00002

Abramo G, D’Angelo CA, Di Costa F (2017) Do interdisciplinary research teams deliver higher gains to science? Scientometrics 111:317–336. https://doi.org/10.1007/s11192-017-2253-x

Barjak F, Robinson S (2008) International collaboration, mobility and team diversity in the life sciences: impact on research performance. Soc Geogr 3(1):23–36. https://doi.org/10.5194/sg-3-23-2008

Boardman C, Ponomariov B (2014) Management knowledge and the organization of team science in university research centers. J Technol Transf 39:75–92. https://doi.org/10.1007/s10961-012-9271-x

Boyack KW, Klavans R (2014) Identifying and quantifying research strengths using market segmentation. In: Beyond bibliometrics: harnessing multidimensional indicators of scholarly impact. MIT Press, Cambridge, p 225

Bozeman B, Youtie J (2018) The strength in numbers: The new science of team science. Princeton University Press. https://doi.org/10.1515/9781400888610

Bloch C, Sørensen MP (2015) The size of research funding: Trends and implications. Sci Public Policy 42(1):30–43. https://doi.org/10.1093/scipol/scu019

Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp 2008(10):P10008. https://doi.org/10.1088/1742-5468/2008/10/P10008

Coles NA, Hamlin JK, Sullivan LL, Parker TH, Altschul D (2022) Build up big-team science. Nature 601(7894):505–507. https://doi.org/10.1038/d41586-022-00150-2

Dino H, Yu S, Wan L, Wang M, Zhang K, Guo H, Hussain I (2020) Detecting leaders and key members of scientific teams in co-authorship networks. Comput Electr Eng 85:106703. https://doi.org/10.1016/j.compeleceng.2020.106703

Deng H, Breunig H, Apte J, Qin Y (2022) An early career perspective on the opportunities and challenges of team science. Environ Sci Technol 56(3):1478–1481. https://doi.org/10.1021/acs.est.1c08322

Everett M (2002) Social network analysis. In: Textbook at Essex Summer School in SSDA, 102, Essex Summer School in Social Science Data Analysis, United Kingdom

Forscher PS, Wagenmakers EJ, Coles NA, Silan MA, Dutra N, Basnight-Brown D, IJzerman H (2023) The benefits, barriers, and risks of big-team science. Perspect Psychological Sci 18(3):607–623. https://doi.org/10.1177/17456916221082970

Guo K, Huang X, Wu L, Chen Y (2022) Local community detection algorithm based on local modularity density. Appl Intell 52(2):1238–1253. https://doi.org/10.1007/s10489-020-02052-0

Hu Z, Lin A, Willett P (2019) Identification of research communities in cited and uncited publications using a co-authorship network. Scientometrics 118:1–19. https://doi.org/10.1007/s11192-018-2954-9

Imran F, Abbasi RA, Sindhu MA, Khattak AS, Daud A, Amjad T (2018) Finding research areas of academicians using clique percolation. In 2018 14th International Conference on Emerging Technologies (ICET). IEEE, pp 1–6. https://doi.org/10.1109/ICET.2018.8603549

Lee HJ, Kim JW, Koh J, Lee Y (2008) Relative Importance of Knowledge Portal Functionalities: A Contingent Approach on Knowledge Portal Design for R&D Teams. In Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS 2008). IEEE, pp 331–331, https://doi.org/10.1109/HICSS.2008.373

Liao Q (2018) Research Team Identification and Influence Factors Analysis of Team Performance. M. A. Thesis. Beijing Institute of Technology, Beijing

Li Y, Tan S (2012) Research on identification and network analysis of university research team. Sci Technol Prog Policy 29(11):147–150

Li G, Liu M, Wu Q, Mao J (2017) A Research of Characters and Identifications of Roles Among Research Groups Based on the Bow-Tie Model. Libr Inf Serv 61(5):87–94

Liu Y, Wu Y, Rousseau S, Rousseau R (2020) Reflections on and a short review of the science of team science. Scientometrics 125:937–950. https://doi.org/10.1007/s11192-020-03513-6

Lungeanu A, Huang Y, Contractor NS (2014) Understanding the assembly of interdisciplinary teams and its impact on performance. J Informetr 8(1):59–70. https://doi.org/10.1016/j.joi.2013.10.006

Lv L, Zhao Y, Wang X, Zhao P (2016) Core R&D Team Recognition Method Based on Association Rules Mining. Sci Technol Manag Res 36(17):148–152

Ma A, Mondragón RJ, Latora V (2015) Anatomy of funded research in science. Proc Natl Acad Sci 112(48):14760–14765. https://doi.org/10.1073/pnas.1513651112

Merton RK (1973) The sociology of science: Theoretical and empirical investigations. University of Chicago Press, Chicago

Mongeon P, Brodeur C, Beaudry C, Larivière V (2016) Concentration of research funding leads to decreasing marginal returns. Res Eval 25(4):396–404. https://doi.org/10.1093/reseval/rvw007

National Research Council (2015) Enhancing the effectiveness of team science. The National Academies Press, Washington, DC

Okamoto J, Centers for Population Health and Health Disparities Evaluation Working Group (2015) Scientific collaboration and team science: a social network analysis of the centers for population health and health disparities. Transl Behav Med 5(1):12–23. https://doi.org/10.1007/s13142-014-0280-1

Peterson JB (2018) 12 rules for life: An antidote to chaos. Random House, Canada

Scott J (2017) Social network analysis. Sage Publications Ltd, London

Seidman SB, Foster BL (1978) A graph‐theoretic generalization of the clique concept. J Math Sociol 6(1):139–154. https://doi.org/10.1080/0022250X.1978.9989883

Sun Y, Livan G, Ma A, Latora V (2021) Interdisciplinary researchers attain better long-term funding performance. Commun Phys 4(1):263. https://doi.org/10.1038/s42005-021-00769-z

Stokols D, Hall KL, Taylor BK, Moser RP (2008) The science of team science: overview of the field and introduction to the supplement. Am J Prev Med 35(2):S77–S89. https://doi.org/10.1016/j.amepre.2008.05.002

Wang C, Cheng Z, Huang Z (2017) Analysis on the co-authoring in the field of management in China: based on social network analysis. Int J Emerg Technol Learn 12(6):149. https://doi.org/10.3991/ijet.v12i06.7091

Wang T, Chen S, Wang X, Wang J (2020) Label propagation algorithm based on node importance. Phys A Stat Mech Appl. 551:124137. https://doi.org/10.1016/j.physa.2020.124137

Wu L, Wang D, Evans JA (2019) Large teams develop and small teams disrupt science and technology. Nature 566(7744):378–382. https://doi.org/10.1038/s41586-019-0941-9

Xu F, Wu L, Evans J (2022) Flat teams drive scientific innovation. Proc. Natl Acad. Sci 119(23):e2200927119. https://doi.org/10.1073/pnas.2200927119

Yu H, Bai K, Zou B, Wang Y (2020) Identification and Extraction of Research Team in the Artificial Intelligence Field. Libr Inf Serv 64(20):4–13

Yu Y, Dong C, Han H, Li Z (2018) The Method of Research Teams Identification Based on Social Network Analysis: Identifying Research Team Leaders Based on Iterative Betweenness Centrality Rank Method. Inf Stud Theory Appl 41(7):105–110

Zhao L, Zhang Q, Wang L (2014) Benefit distribution mechanism in the team members’ scientific research collaboration network. Scientometrics 100:363–389. https://doi.org/10.1007/s11192-014-1322-7

Zhang M, Jia Y, Wang N, Ge S (2019) Using Relative Tie Strength to Identify Core Teams of Scientific Research. Int J Emerg Technol Learn 14(23):33–54. https://www.learntechlib.org/p/217243/

Author information

Authors and affiliations

School of Education, Central China Normal University, Wuhan, PR China

Zhe Cheng & Yihuan Zou

Faculty of Education, The Chinese University of Hong Kong, Hong Kong SAR, PR China

Yueyang Zheng

Contributions

Zhe Cheng contributed to the study conception, research design, data collection, and data analysis. Zhe Cheng wrote the first draft of the manuscript. Yihuan Zou made the last revisions. Yihuan Zou and Yueyang Zheng supervised, proofread, and commented on previous versions of this manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yueyang Zheng .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by the authors.

Informed consent

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Cheng, Z., Zou, Y. & Zheng, Y. A method for identifying different types of university research teams. Humanit Soc Sci Commun 11 , 523 (2024). https://doi.org/10.1057/s41599-024-03014-4

Received : 03 August 2023

Accepted : 28 March 2024

Published : 18 April 2024

DOI : https://doi.org/10.1057/s41599-024-03014-4

Boston Medical Center Study Furthers Understanding of Lung Regeneration

BOSTON - Researchers at Boston Medical Center (BMC) and Boston University (BU) today announced findings from a new study, published in Cell Stem Cell, detailing a method for generating human alveolar epithelial type I cells (AT1s) from induced pluripotent stem cells (iPSCs). The ability to recreate these cells in an iPSC-based model will allow researchers to analyze these historically difficult-to-isolate cells in greater detail, furthering the understanding of human lung regeneration and potentially expediting progress toward treatments and therapeutic options for people living with pulmonary diseases.

Pulmonary diseases, including pulmonary fibrosis and chronic obstructive pulmonary disease (COPD), cause significant mortality and morbidity worldwide, and many pulmonary diseases lack sufficient treatment options. As science and medicine have progressed, researchers have identified a clear need for additional knowledge about lung cells to help improve patient health. 

The results of this study provide an in vitro model of human AT1 cells, which line the vast majority of the gas exchange barrier of the distal lung, and are a potential source of human AT1s to develop regenerative therapies. The new model will help researchers of pulmonary diseases deepen their understanding of lung regeneration, specifically after an infection or exposure to toxins, as well as diseases of the alveolar epithelium such as acute respiratory distress syndrome (ARDS) and pulmonary fibrosis. 

“Uncovering the ability to generate human alveolar epithelial type I cells (AT1s), and similar cell types, from induced pluripotent stem cells (iPSCs) has expanded our knowledge of biological processes and can significantly improve disease understanding and management,” said Darrell Kotton, MD, Director, Center for Regenerative Medicine (CReM) of Boston University and Boston Medical Center.

This new study also furthers the CReM’s goal of generating every human lung cell type from iPSCs as a pathway to improving disease management and provides a source of cells for future transplantation to regenerate damaged lung tissues in vivo. 

“We know that the respiratory system can respond to injury and regenerate lost or damaged cells, but the depth of that knowledge is currently limited,” said Claire Burgess, PhD, Boston University Chobanian and Avedisian School of Medicine, who is the study’s first author. “We anticipate this protocol will be used to further understand how AT1 cells react to toxins, bacteria, and viral exposures, and will be used in basic developmental studies, disease modeling, and potential engineering of future regenerative therapies."

The full study is published in Cell Stem Cell.

About Boston Medical Center

Boston Medical Center models a new kind of excellence in healthcare, where innovative and equitable care empowers all patients to thrive. We combine world-class clinicians and cutting-edge treatments with compassionate, quality care that extends beyond our walls. As an award-winning health equity leader, our diverse clinicians and staff interrogate racial disparities in care and partner with our community to dismantle systemic inequities. And as a national leader in research and the teaching affiliate for Boston University Chobanian & Avedisian School of Medicine, we’re driving the future of care.

Media Contact:

Please reach out to the  Boston Medical Center Media Relations team with any questions.


MEG-PPIS: a fast protein-protein interaction site prediction method based on multi-scale graph information and equivariant graph neural network

Hongzhen Ding, Xue Li, Peifu Han, Xu Tian, Fengrui Jing, Shuang Wang, Tao Song, Hanjiao Fu, Na Kang, MEG-PPIS: a fast protein-protein interaction site prediction method based on multi-scale graph information and equivariant graph neural network, Bioinformatics , 2024;, btae269, https://doi.org/10.1093/bioinformatics/btae269

Protein-protein interaction sites (PPIS) are crucial for deciphering the mechanisms of protein action and for related medical research, making their identification a key problem in the study of protein function. Recent studies have shown that graph neural networks achieve outstanding performance in predicting PPIS. However, these studies often neglect the modeling of information at different scales in the graph and the symmetry of protein molecules in three-dimensional space.

In response to this gap, this paper proposes MEG-PPIS, a PPIS prediction method based on multi-scale graph information and an E(n) equivariant graph neural network (EGNN). MEG-PPIS has two channels: the original graph and a subgraph obtained by graph pooling. The model iteratively updates the features of the original graph and the subgraph through a weight-sharing EGNN. A max-pooling operation then aggregates the updated features of the two channels, and the resulting node features are fed into the prediction layer to obtain predictions. Comparative assessments against other methods on benchmark datasets show that MEG-PPIS achieves the best performance across all evaluation metrics and the fastest runtime. Furthermore, case studies demonstrate that our method predicts more true-positive and true-negative sites than the current best method, confirming its stronger performance on the PPIS prediction task.
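The two-channel fusion step in the abstract can be illustrated with a minimal sketch. The random arrays below are stand-ins for node features already updated by the weight-sharing EGNN (the EGNN itself is not reproduced here), with subgraph features assumed to have been mapped back onto the original graph's nodes; shapes and variable names are assumptions, not the paper's implementation.

```python
# Illustrative sketch only: the element-wise max-pooling fusion of the two
# channels (original graph and pooled subgraph) described in the abstract.
# The arrays stand in for EGNN-updated node features; nothing here reproduces
# the EGNN updates themselves.
import numpy as np

n_nodes, feat_dim = 5, 4
rng = np.random.default_rng(0)

h_original = rng.normal(size=(n_nodes, feat_dim))  # channel 1: original graph
h_subgraph = rng.normal(size=(n_nodes, feat_dim))  # channel 2: pooled subgraph

# Element-wise max over the two channels yields the fused node representation
# that would then be fed to the prediction layer.
h_fused = np.maximum(h_original, h_subgraph)
print(h_fused.shape)
```

The max operation keeps, per feature dimension, the stronger signal from either scale, which is one simple way to combine coarse and fine graph views.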

The data and code are available at https://github.com/dhz234/MEG-PPIS.git .


Published on 19.4.2024 in Vol 26 (2024)

Psychometric Evaluation of a Tablet-Based Tool to Detect Mild Cognitive Impairment in Older Adults: Mixed Methods Study

Authors of this article:

Original Paper

  • Josephine McMurray 1, 2 *, MBA, PhD
  • AnneMarie Levy 1 *, MSc, PhD
  • Wei Pang 1, 3 *, BTM
  • Paul Holyoke 4, PhD

1 Lazaridis School of Business & Economics, Wilfrid Laurier University, Brantford, ON, Canada

2 Health Studies, Faculty of Human and Social Sciences, Wilfrid Laurier University, Brantford, ON, Canada

3 Biomedical Informatics & Data Science, Yale University, New Haven, CT, United States

4 SE Research Centre, Markham, ON, Canada

*these authors contributed equally

Corresponding Author:

Josephine McMurray, MBA, PhD

Lazaridis School of Business & Economics

Wilfrid Laurier University

73 George St

Brantford, ON, N3T3Y3

Phone: 1 548 889 4492

Email: [email protected]

Background: With the rapid aging of the global population, the prevalence of mild cognitive impairment (MCI) and dementia is anticipated to surge worldwide. MCI serves as an intermediary stage between normal aging and dementia, necessitating more sensitive and effective screening tools for early identification and intervention. The BrainFx SCREEN is a novel digital tool designed to assess cognitive impairment. This study evaluated its efficacy as a screening tool for MCI in primary care settings, particularly in the context of an aging population and the growing integration of digital health solutions.

Objective: The primary objective was to assess the validity, reliability, and applicability of the BrainFx SCREEN (hereafter, the SCREEN) for MCI screening in a primary care context. We conducted an exploratory study comparing the SCREEN with an established screening tool, the Quick Mild Cognitive Impairment (Qmci) screen.

Methods: A concurrent mixed methods, prospective study using a quasi-experimental design was conducted with 147 participants from 5 primary care Family Health Teams (FHTs; characterized by multidisciplinary practice and capitated funding) across southwestern Ontario, Canada. Participants included health care practitioners, patients, and FHT administrative executives. Individuals aged ≥55 years with no history of MCI or diagnosis of dementia rostered in a participating FHT were eligible to participate. Participants were screened using both the SCREEN and Qmci. The study also incorporated the Geriatric Anxiety Scale–10 to assess general anxiety levels at each cognitive screening. The SCREEN’s scoring was compared against that of the Qmci and the clinical judgment of health care professionals. Statistical analyses included sensitivity, specificity, internal consistency, and test-retest reliability assessments.

Results: The study found that the SCREEN’s longer administration time and complex scoring algorithm, which is proprietary and unavailable for independent analysis, presented challenges. Its internal consistency, indicated by a Cronbach α of 0.63, was below the acceptable threshold. The test-retest reliability also showed limitations, with moderate intraclass correlation coefficient (0.54) and inadequate κ (0.15) values. Sensitivity and specificity were consistent (63.25% and 74.07%, respectively) between cross-tabulation and discrepant analysis. In addition, the study faced limitations due to its demographic skew (96/147, 65.3% female, well-educated participants), the absence of a comprehensive gold standard for MCI diagnosis, and financial constraints limiting the inclusion of confirmatory neuropsychological testing.
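The basic psychometric quantities reported above can be reproduced from first principles. The sketch below uses invented counts and item scores (the study's raw data are not published); the 2x2 counts are merely chosen so that sensitivity and specificity land on the reported 63.25% and 74.07%.

```python
# Illustrative sketch only: sensitivity/specificity from a 2x2 screening table
# and Cronbach's alpha from toy item scores. All numbers are invented; the
# counts are picked to match the reported 63.25% / 74.07%.
import numpy as np

# Screening 2x2 table vs. the reference judgment.
tp, fn = 74, 43   # screen-positive / screen-negative among reference-positives
fp, tn = 28, 80   # screen-positive / screen-negative among reference-negatives

sensitivity = tp / (tp + fn)   # 74 / 117
specificity = tn / (tn + fp)   # 80 / 108

# Cronbach's alpha for k items: alpha = k/(k-1) * (1 - sum(item var) / total var)
items = np.array([          # rows = respondents, columns = items (toy scores)
    [3, 4, 2, 5],
    [2, 3, 2, 4],
    [4, 4, 3, 5],
    [1, 2, 1, 2],
    [3, 3, 2, 4],
])
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}, alpha={alpha:.2f}")
```

An alpha below the conventional 0.7 threshold, as the SCREEN's 0.63 was, indicates that the items do not hang together well enough for a summed score to be considered internally consistent.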

Conclusions: The SCREEN, in its current form, does not meet the necessary criteria for an optimal MCI screening tool in primary care settings, primarily due to its longer administration time and lower reliability. As digital health technologies proliferate and evolve, further testing and refinement of tools such as the SCREEN are essential to ensure their efficacy and reliability in real-world clinical settings. This study advocates for continued research in this rapidly advancing field to better serve the aging population.

International Registered Report Identifier (IRRID): RR2-10.2196/25520

Introduction

Mild cognitive impairment (MCI) is a syndrome characterized by a slight but noticeable and measurable deterioration in cognitive abilities, predominantly memory and thinking skills, that is greater than expected for an individual’s age and educational level [ 1 , 2 ]. The functional impairments associated with MCI are subtle and often impair instrumental activities of daily living (ADL). Instrumental ADL include everyday tasks such as managing finances, cooking, shopping, or taking regularly prescribed medications and are considered more complex than ADL such as bathing, dressing, and toileting [ 3 , 4 ]. In cases in which memory impairment is the primary indicator of the disease, MCI is classified as amnesic MCI and when significant impairment of non–memory-related cognitive domains such as visual-spatial or executive functioning is dominant, MCI is classified as nonamnesic [ 5 ].

Cognitive decline, more so than cancer and cardiovascular disease, poses a substantial threat to an individual’s ability to live independently or at home with family caregivers [ 6 ]. The Centers for Disease Control and Prevention reports that 1 in 8 adults aged ≥60 years experiences memory loss and confusion, with 35% reporting functional difficulties with basic ADL [ 7 ]. The American Academy of Neurology estimates that the prevalence of MCI ranges from 13.4% to 42% in people aged ≥65 years [ 8 ], and a 2023 meta-analysis that included 233 studies and 676,974 participants aged ≥50 years estimated that the overall global prevalence of MCI is 19.7% [ 9 ]. Once diagnosed, the prognosis for MCI is variable, whereby the impairment may be reversible; the rate of decline may plateau; or it may progressively worsen and, in some cases, may be a prodromal stage to dementia [ 10 - 12 ]. While estimates vary based on sample (community vs clinical), annual rates of conversion from MCI to dementia range from 5% to 24% [ 11 , 12 ], and those who present with multiple domains of cognitive impairment are at higher risk of conversion [ 5 ].

The risk of developing MCI rises with age, and while there are no drug treatments for MCI, nonpharmacologic interventions may improve cognitive function, alleviate the burden on caregivers, and potentially delay institutionalization should MCI progress to dementia [ 13 ]. To overcome the challenges of early diagnosis, which currently depends on self-detection, family observation, or health care provider (HCP) recognition of symptoms, screening high-risk groups for MCI or dementia is suggested as a solution [ 13 ]. However, the Canadian Task Force on Preventive Health Care recommends against screening adults aged ≥65 years due to a lack of meaningful evidence from randomized controlled trials and the high false-positive rate [ 14 - 16 ]. The main objective of a screening test is to reduce morbidity or mortality in at-risk populations through early detection and intervention, with the anticipated benefits outweighing potential harms. Using brief screening tools in primary care might improve MCI case detection, allowing patients and families to address reversible causes, make lifestyle changes, and access disease-modifying treatments [ 17 ].

There is no agreement among experts as to which tests or groups of tests are most predictive of MCI [ 16 ], and the gold standard approach uses a combination of positive results from neuropsychological assessments, laboratory tests, and neuroimaging to infer a diagnosis [ 8 , 18 ]. The clinical heterogeneity of MCI complicates its diagnosis because it influences not only memory and thinking abilities but also mood, behavior, emotional regulation, and sensorimotor abilities, and patients may present with any combination of symptoms with varying rates of onset and decline [ 4 , 8 ]. For this reason, a collaborative approach between general practitioners and specialists (eg, geriatricians and neurologists) is often required to be confident in the diagnosis of MCI [ 8 , 19 , 20 ].

In Canada, diagnosis often begins with screening for cognitive impairment followed by referral for additional testing; this process takes, on average, 5 months [ 20 ]. The current usual practice screening tools for MCI are the Mini-Mental State Examination (MMSE) [ 21 , 22 ] and the Montreal Cognitive Assessment (MoCA) 8.1 [ 3 ]. Both are paper-and-pencil screens administered in 10 to 15 minutes, scored out of 30, and validated as MCI screening tools across diverse clinical samples [ 23 , 24 ]. Universally, the MMSE is most often used to screen for MCI [ 20 , 25 ] and consists of 20 items that measure orientation, immediate and delayed recall, attention and calculation, visual-spatial skills, verbal fluency, and writing. The MoCA 8.1 was developed to improve on the MMSE’s ability to detect early signs of MCI, placing greater emphasis on evaluating executive function as well as language, memory, visual-spatial skills, abstraction, attention, concentration, and orientation across 30 items [ 24 , 26 ]. Scores of <24 on the MMSE or ≤25 on the MoCA 8.1 signal probable MCI [ 21 , 27 ]. Lower cutoff scores for both screens have been recommended to address evidence that they lack specificity to detect mild and early cases of MCI [ 4 , 28 - 31 ]. The clinical efficacy of both screens for tracking change in cognition over time is limited as they are also subject to practice effects with repeated administration [ 32 ].
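Purely as an illustration of the usual-practice cutoffs quoted above (MMSE < 24, MoCA 8.1 ≤ 25 signaling probable MCI), a minimal helper might look like the following; the function name and structure are assumptions, and real screening interprets scores in clinical context rather than by threshold alone.

```python
# Toy helper that only restates the published cutoffs quoted in the text:
# MMSE < 24 or MoCA 8.1 <= 25 signals probable MCI. Not a clinical tool.
from typing import Optional

MMSE_CUTOFF = 24   # scores below this signal probable MCI
MOCA_CUTOFF = 25   # scores at or below this signal probable MCI

def probable_mci(mmse: Optional[int] = None, moca: Optional[int] = None) -> bool:
    """Return True if any supplied screen score crosses its MCI cutoff."""
    if mmse is not None and mmse < MMSE_CUTOFF:
        return True
    if moca is not None and moca <= MOCA_CUTOFF:
        return True
    return False

print(probable_mci(mmse=22))   # below the MMSE cutoff
print(probable_mci(moca=27))   # above the MoCA cutoff
```

Note how the two screens use different comparison directions (strictly below vs. at-or-below), which is exactly the detail the lower recommended cutoffs in the literature adjust.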

Novel screening tools, including the Quick Mild Cognitive Impairment (Qmci) screen, have been developed with the goal of improving the accuracy of detecting MCI [ 33 , 34 ]. The Qmci is a sensitive and specific tool that differentiates normal cognition from MCI and dementia and is more accurate at differentiating MCI from controls than either the MoCA 8.1 (Qmci area under the curve=0.97 vs MoCA 8.1 area under the curve=0.92) [ 25 , 35 ] or the Short MMSE [ 33 , 36 ]. It also demonstrates high test-retest reliability (intraclass correlation coefficient [ICC]=0.88) [ 37 ] and is clinically useful as a rapid screen for MCI: mean administration time is 4.5 (SD 1.3) minutes for the Qmci versus 9.5 (SD 2.8) minutes for the MoCA 8.1 [ 25 ].

The COVID-19 pandemic and the necessary shift to virtual health care accelerated the use of digital assessment tools, including MCI screening tools such as the electronic MoCA 8.1 [ 38 , 39 ]. The increased use and adoption of technology (eg, smartphones and tablets) by older adults suggests that a lack of proficiency with technology may not be a barrier to the use of such assessment tools [ 40 , 41 ]. BrainFx is a for-profit firm that creates proprietary software designed to assess cognition and changes in neurofunction that may be caused by neurodegenerative diseases (eg, MCI or dementia), stroke, concussions, or mental illness using ecologically relevant tasks (eg, prioritizing daily schedules and route finding on a map) [ 42 ]. Its assessments are administered via a tablet and stylus. The BrainFx 360 performance assessment (referred to hereafter as the 360) is a 90-minute digitally administered test designed to assess cognitive, physical, and psychosocial areas of neurofunction across 26 cognitive domains using 49 timed and scored tasks [ 42 ]. The BrainFx SCREEN (referred to hereafter as the SCREEN) is a short digital version of the 360 that includes 7 of the cognitive domains from the 360, is estimated to take approximately 10 to 15 minutes to complete, and was designed to screen for early detection of cognitive impairment [ 43 , 44 ]. Upon completion of any BrainFx assessment, the results of the 360 or SCREEN are added to the BrainFx Living Brain Bank (LBB), an electronic database maintained by BrainFx that stores all completed 360 and SCREEN assessments. BrainFx then generates an electronic report that compares the individual’s results to normative data drawn from the LBB.

The 360 has been used in clinical settings to assess neurofunction among youth [ 45 ] and anecdotally in other rehabilitation settings (T Milner, personal communication, May 2018). To date, research on the 360 indicates that it has been validated in healthy young adults (mean age 22.9, SD 2.4 years) and that the overall test-retest reliability of the tool is high (ICC=0.85) [ 42 ]. However, only 2 of the 7 tasks selected to be included in the SCREEN produced reliability coefficients of >0.70 (visual-spatial and problem-solving abilities) [ 42 ]. Jones et al [ 43 ] explored the acceptability and perceived usability of the SCREEN with a small sample (N=21) of Canadian Armed Forces veterans living with posttraumatic stress disorder. A structural equation model based on the Unified Theory of Acceptance and Use of Technology suggested that behavioral intent to use the SCREEN was predicted by facilitating conditions such as guidance during the test and appropriate resources to complete the test [ 43 ]. However, the validity, reliability, and sensitivity of the SCREEN for detecting cognitive impairment have not been tested.

McMurray et al [ 44 ] designed a protocol to assess the validity, reliability, and sensitivity of the SCREEN for detecting early signs of MCI in asymptomatic adults aged ≥55 years in a primary care setting (5 Family Health Teams [FHTs]). The protocol also used a series of semistructured interviews and surveys guided by the fit between individuals, task, technology, and environment framework [ 46 ], a health-specific model derived from the Task-Technology Fit model by Goodhue and Thompson [ 47 ], to explore the SCREEN’s acceptability and use by health care professionals (HCPs) and patients in primary care settings (manuscript in preparation). This study is a psychometric evaluation of the SCREEN’s validity, reliability, and sensitivity for detecting MCI in asymptomatic adults aged ≥55 years in primary care settings.

Study Location, Design, and Data Collection

This was a concurrent, mixed methods, prospective study using a quasi-experimental design. Participants were recruited from 5 primary care FHTs (characterized by multidisciplinary practice and capitated funding) across southwestern Ontario, Canada. FHTs with a registered occupational therapist on staff were eligible to participate in the study, and participating FHTs received a nominal compensatory payment for the time the HCPs spent in training; collecting data for the study; administering the SCREEN, Qmci, and Geriatric Anxiety Scale–10 (GAS-10); and communicating with the research team. A multipronged recruitment approach was used [ 44 ]. A designated occupational therapist at each location was provided with training and equipment to recruit participants, administer assessment tools, and submit collected data to the research team.

The research protocol describing the methods of both the quantitative and qualitative arms of the study is published elsewhere [ 44 ].

Ethical Considerations

This study was approved by the Wilfrid Laurier University Research Ethics Board (ORE 5820) and was reviewed and approved by each FHT. Participants (HCPs, patients, and administrative executives) read and signed an information and informed consent package in advance of taking part in the study. We complied with recommendations for obtaining informed consent and conducting qualitative interviews with persons with dementia when recruiting patients who may be affected by neurocognitive diseases [ 48 - 50 ]. In addition, at the end of each SCREEN assessment, patients were required to provide their consent (electronic signature) to contribute their anonymized scores to the database of SCREEN results maintained by BrainFx. Upon enrolling in the study, participants were assigned a unique identification number that was used in place of their name on all study documentation to anonymize the data and preserve their confidentiality. A master list matching participant names with their unique identification number was stored in a password-protected file by the administering HCP and principal investigator on the research team. The FHTs received a nominal compensatory payment to account for their HCPs’ time spent administering the SCREEN, collecting data for the study, and communicating with the research team. However, the individual HCPs who volunteered to participate and the patient participants were not financially compensated for taking part in the study.

Participants

To better capture the population at risk of early signs of cognitive impairment, patients who were rostered with the FHT, were aged ≥55 years, and had no history of MCI or dementia diagnoses were eligible to participate [ 51 , 52 ]. It was necessary for the participants to be rostered with the FHTs to ensure that the HCPs could access their electronic medical record to confirm eligibility and record the testing sessions and results and to ensure that there was a responsible physician for referral if indicated. As the SCREEN is administered using a tablet, participants had to be able to read and think in English and discern color, have adequate hearing and vision to interact with the administering HCP, read 12-point font on the tablet, and have adequate hand and arm function to manipulate and hold the tablet. The exclusion criteria included colorblindness and any disability that might impair the individual’s ability to hold and interact with the tablet. Prospective participants were also excluded based on a diagnosis of conditions that may result in MCI or dementia-like symptoms, including major depression that required hospitalization, psychiatric disorders (eg, schizophrenia and bipolar disorder), psychopathology, epilepsy, substance use disorders, or sleep apnea (without the use of a continuous positive airway pressure machine) [ 52 ]. Patients were required to complete a minimum of 2 screening sessions spaced 3 months apart and, depending on when they enrolled, could complete a maximum of 4 screening sessions over a year.

Data Collection Instruments

GAS-10 Instrument

A standardized protocol was used to collect demographic data; administer the SCREEN and the Qmci (a validated screening tool for MCI) in random order; and administer the GAS-10 immediately before and after the completion of the first MCI screen at each visit [ 44 ]. This was to assess participants’ general anxiety as it related to screening for cognitive impairment at the time of the assessment, any change in subjective ratings after completion of the first MCI screen, and change in anxiety between appointments. The GAS-10 is a 10-item, self-report screen for anxiety in older adults [ 53 ] developed for rapid screening of anxiety in clinical settings (the GAS-10 is the short form of the full 30-item Geriatric Anxiety Scale [GAS]) [ 54 ]. While 3 subscales are identified, the GAS is reported to be a unidimensional scale that assesses general anxiety [ 55 , 56 ]. Validation of the GAS-10 suggests that it is optimal for assessing average to moderate levels of anxiety in older adults, with subscale scores that are highly and positively correlated with the GAS and high internal consistency [ 53 ]. Participants were asked to use a 4-point Likert scale (0= not at all , 1= sometimes , 2= most of the time , and 3= all of the time ) to rate how often they had experienced each symptom over the previous week, including on the day the test was administered [ 54 ]. The GAS-10 has a maximum score of 30, with higher scores indicating higher levels of anxiety [ 53 , 54 , 57 ].

HCPs completed the required training to become certified BrainFx SCREEN administrators before the start of the study. To this end, HCPs completed a web-based training program (developed and administered through the BrainFx website) that included 3 self-directed training modules. For the purpose of the study, they also participated in 1 half-day in-person training session conducted by a certified BrainFx administrator (T Milner, BrainFx chief executive officer) at one of the participating FHT locations. The SCREEN (version 0.5; beta) was administered on a tablet (ASUS ZenPad 10.1” IPS WXGA display, 1920 × 1200, powered by a quad-core 1.5 GHz, 64-bit MediaTek MTK 8163A processor with 2 GB RAM and 16-GB storage). The tablet came with a tablet stand for optional use and a dedicated stylus that is recommended for completion of a subset of questions. At the start of the study, HCPs were provided with identical tablets preloaded with the SCREEN software for use in the study. The 7 tasks on the SCREEN are summarized in Table 1 and were taken directly from the 360 based on a clustering and regression analysis of LBB records in 2016 (N=188) [ 58 ]. A detailed description of the study and SCREEN administration procedures was published by McMurray et al [ 44 ].

An activity score is generated for each of the 7 tasks on the SCREEN. It is computed from a combination of the accuracy of the participant’s response and the processing speed (time in seconds) taken to complete the task. The relative contribution of accuracy and processing speed to the final activity score for each task is proprietary to BrainFx and unknown to the research team. The participant’s activity score is compared to the mean activity score for the same task in the LBB at the time of testing. The mean activity score from the LBB may be based on the global reference population (ie, all available SCREEN results in the LBB), or the administering HCP may select a specific reference population by filtering according to factors including but not limited to age, sex, or diagnosis. If the participant’s activity score is >1 SD below the LBB activity score mean for that task, it is labeled as an area of challenge . Each of the 7 tasks on the SCREEN is evaluated independently of the others, producing a report with 7 activity scores showing the participant’s score, the LBB mean score, and the SD. The report also provides an overall performance and processing speed score. The overall performance score is an average of all 7 activity scores; however, the way in which the overall processing speed score is generated remains proprietary to BrainFx and unknown to the research team. Both the overall performance and processing speed scores are similarly evaluated against the LBB and identified as an area of challenge using the criteria described previously. For the purpose of this study, participants’ mean activity scores on the SCREEN were compared to the results of people aged ≥55 years in the LBB.
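The flagging rule described above (an activity score >1 SD below the LBB reference mean is labeled an area of challenge, and the overall performance score is the average of the 7 task scores) can be sketched as follows. Because the underlying activity-score computation is proprietary to BrainFx, only the comparison step is shown; the function names and reference values are illustrative:

```python
# Sketch of the published flagging rule; the underlying activity-score
# computation is proprietary, so only the comparison step is shown.

def is_area_of_challenge(activity_score, lbb_mean, lbb_sd):
    """True if the score falls more than 1 SD below the reference mean."""
    return activity_score < lbb_mean - lbb_sd

def overall_performance(activity_scores):
    """Overall performance score: the average of the 7 task activity scores."""
    return sum(activity_scores) / len(activity_scores)

# Hypothetical example: a task score of 62 against a reference mean of 75 (SD 10)
flagged = is_area_of_challenge(62, 75, 10)  # 62 < 75 - 10, so flagged
```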

The Qmci evaluates 6 cognitive domains: orientation (10 points), registration (5 points), clock drawing (15 points), delayed recall (20 points), verbal fluency (20 points), and logical memory (30 points) [ 59 ]. Administering HCPs scored the test manually, with each subtest’s points contributing to the overall score out of 100 points; the cutoff score to distinguish normal cognition from MCI was ≤67/100 [ 60 ]. Cutoffs that account for age and education have been validated and are recommended, as the Qmci is sensitive to these factors [ 60 ]. A 2019 meta-analysis of the diagnostic accuracy of MCI screening tools reported that the sensitivity and specificity of the Qmci for distinguishing MCI from normal cognition are similar to those of usual standard-of-care tools (eg, the MoCA, Addenbrooke Cognitive Examination–Revised, Consortium to Establish a Registry for Alzheimer’s Disease battery total score, and Sunderland Clock Drawing Test) [ 61 ]. The Qmci has also been translated into >15 different languages and has undergone psychometric evaluation across a subset of these languages. While not as broadly adopted as the MoCA 8.1 in Canada, its psychometric properties, administration time, and availability suggested that the Qmci was an optimal assessment tool for MCI screening in FHT settings during the study.

Psychometric Evaluation

To date, the only published psychometric evaluation of any BrainFx tool is by Searles et al [ 42 ] in Athletic Training & Sports Health Care ; it assessed the test-retest reliability of the 360 in 15 healthy adults between the ages of 20 and 25 years. This study evaluated the psychometric properties of the SCREEN and included a statistical analysis of the tool’s internal consistency, construct validity, test-retest reliability, and sensitivity and specificity. McMurray et al [ 44 ] provide a detailed description of the data collection procedures for administration of the SCREEN and Qmci completed by participants at each visit.

Validity Testing

Face validity was outside the scope of this study but was implied; related assumptions are reported in the Results section. Construct validity (whether the 7 activities that make up the SCREEN were representative of MCI) was assessed through comparison with a substantive body of literature in the domain and through principal component analysis using varimax rotation. Criterion validity measured how closely the SCREEN results corresponded to the results of the Qmci (used here as an “imperfect gold standard” for identifying MCI in older adults) [ 62 ]. A BrainFx representative hypothesized that the ecological validity of the SCREEN questions (ie, using tasks that reflect real-world activities to detect early signs of cognitive impairment) [ 63 ] makes it a more sensitive tool than other screens (T Milner, personal communication, May 2018) and allows HCPs to equate activity scores on the SCREEN with real-world functional abilities. Criterion validity was explored first using cross-tabulations to calculate the sensitivity and specificity of the SCREEN compared to those of the Qmci. Conventional screens such as the Qmci are scored by taking the sum of correct responses and applying a cutoff score derived from normative data to distinguish normal cognition from MCI. The SCREEN uses a different method of scoring whereby each of the 7 tasks is scored and evaluated independently of the others, and there were no recommended guidelines for distinguishing normal cognition from MCI based on the aggregate areas of challenge identified by the SCREEN. Therefore, to compare the sensitivity and specificity of the SCREEN against those of the Qmci, the results of both screens were coded into a binary format as 1=healthy and 2=unhealthy, where healthy denoted no areas of challenge identified through the SCREEN (or, for the Qmci, a score of ≥67) and unhealthy denoted one or more areas of challenge identified through the SCREEN (or a Qmci score of <67).
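The binary coding and cross-tabulation described above can be sketched as follows; this is a minimal illustration of the stated rules (1=healthy, 2=unhealthy; Qmci cutoff of 67), not the study's actual analysis code:

```python
# Sketch of the binary coding and sensitivity/specificity calculation.
# Here "healthy" (code 1) is treated as the positive class, matching the
# paper's framing; the Qmci serves as the imperfect reference standard.

def code_screen(n_areas_of_challenge):
    return 1 if n_areas_of_challenge == 0 else 2

def code_qmci(score):
    return 1 if score >= 67 else 2

def sensitivity_specificity(screen_codes, qmci_codes):
    pairs = list(zip(screen_codes, qmci_codes))
    tp = sum(1 for s, q in pairs if s == 1 and q == 1)  # both healthy
    tn = sum(1 for s, q in pairs if s == 2 and q == 2)  # both unhealthy
    fn = sum(1 for s, q in pairs if s == 2 and q == 1)
    fp = sum(1 for s, q in pairs if s == 1 and q == 2)
    return tp / (tp + fn), tn / (tn + fp)
```

With the counts reported in the Results (74 true positives of 117 Qmci-healthy cases; 20 true negatives of 27 Qmci-unhealthy cases), these formulas reproduce the reported sensitivity (74/117, approximately 63.25%) and specificity (20/27, approximately 74.07%).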

Criterion validity was further explored using discrepant analysis via a resolver test [ 44 ]. Following the administration of the SCREEN and Qmci, screen results were evaluated by the administering HCP. HCPs were instructed to refer the participant for follow-up with their primary care physician if the Qmci result was <67 regardless of whether any areas of challenge were identified on the SCREEN. However, HCPs could use their clinical judgment to refer a participant for physician follow-up based on the results of the SCREEN or the Qmci, and all the referral decisions were charted on the participant’s electronic medical record following each visit and screening. In discrepant analysis, the results of the imperfect gold standard [ 64 ] (the role played by the Qmci in this study) were compared with the SCREEN results. A resolver test (whether the HCP referred the patient to a physician for follow-up based on their performance on the SCREEN and the Qmci) was applied to discordant results [ 64 , 65 ] to determine sensitivity and specificity. To this end, a new variable, Referral to a Physician for Cognitive Impairment , was coded as the true status (1=no referral; 2=referral was made) and compared to the Qmci as the imperfect gold standard (1=healthy; 2=unhealthy).

Reliability Testing

The reliability of a screening instrument is its ability to consistently measure an attribute and how well its component measures fit together conceptually. Internal consistency identifies whether the items in a multi-item scale are measuring the same underlying construct; the internal consistency of the SCREEN was assessed using the Cronbach α. Test-retest reliability refers to the ability of a measurement instrument to reproduce results over ≥2 occasions (assuming the underlying conditions have not changed) and was assessed using paired t tests (2-tailed), ICC, and the κ coefficient. In this study, participants completed both the SCREEN and the Qmci in the same sitting in a random sequence on at least 2 different occasions spaced 3 months apart (administration procedures are described elsewhere) [ 44 ]. In some instances, the screens were administered to the same participant on 4 separate occasions spaced 3 months apart each, and this provided up to 3 separate opportunities to conduct test-retest reliability analyses and investigate the effects of repeated practice. There are no clear guidelines on the optimal time between tests [ 66 , 67 ]; however, Streiner and Kottner [ 68 ] and Streiner [ 69 ] recommend longer periods between tests (eg, at least 10-14 days) to avoid recall bias, and greater practice effects have been experienced with shorter test-retest intervals [ 32 ].

Analysis of the quantitative data was completed using Stata (version 17.0; StataCorp). Collected data were reported using frequencies and percentages and compared using the chi-square or Fisher exact test as necessary. Continuous data were analyzed for central tendency and variability; categorical data were presented as proportions. Normality was tested using the Shapiro-Wilk test; as assumptions of normality were not violated, parametric tests were used, with the Mann-Whitney U test applied to nonparametric data. A P value of .05 was considered statistically significant, with 95% CIs provided where appropriate. We powered the exploratory analysis to validate the SCREEN using an estimated effect size of 12% (Canadian prevalence rates of MCI were not available [ 1 ]) and determined that the study required at least 162 participants. For test-retest reliability, using 90% power and a 5% type-I error rate, a minimum of 67 test results was required.

The time taken for participants to complete the SCREEN was recorded by the HCPs at the time of testing; there were 6 missing HCP records of time to complete the SCREEN. For these 6 cases of missing data, we imputed the mean time to complete the SCREEN by all participants who were tested by that HCP and used this to populate the missing cells [ 70 ]. There were 3 cases of missing data related to the SCREEN reports. More specifically, the SCREEN report generated by BrainFx did not include 1 or 2 data points each for the route finding, divided attention, and prioritizing tasks. The clinical notes provided by the HCP at the time of SCREEN administration did not indicate that the participant had not completed those questions, and it was not possible to determine the root cause of the missing data in report generation according to BrainFx (M Milner, personal communication, July 7, 2020). For continuous variables in analyses such as exploratory factor analysis, Cronbach α, and t test, missing values were imputed using the mean. However, for the coded healthy and unhealthy categorical variables, values were not imputed.
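The mean imputation described above, in which each missing time-to-complete value is filled with the mean of the observed values for that administering HCP, can be sketched as follows (the values are hypothetical, not study data):

```python
# Minimal sketch of per-HCP mean imputation for missing
# time-to-complete values. Data are hypothetical.
from statistics import mean

def impute_with_mean(values):
    """Replace None entries with the mean of the observed entries."""
    observed = [v for v in values if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in values]

# One HCP's recorded times (minutes), with one missing cell
times = [12.0, 15.0, None, 15.0]
imputed = impute_with_mean(times)  # mean of 12, 15, 15 is 14
```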

Data collection began in January 2019 and was to conclude on May 31, 2020. However, the emergence of the global COVID-19 pandemic resulted in the FHTs and Wilfrid Laurier University prohibiting all in-person research starting on March 16, 2020.

Participant Demographics

A total of 154 participants were recruited for the study, and 20 (13%) withdrew following their first visit to the FHT. The data of 65% (13/20) of the participants who withdrew were included in the final analysis, and the data of the remaining 35% (7/20) were removed, either at their explicit request (3/7, 43%) or because technical issues at the time of testing rendered their data unusable (4/7, 57%). These technical issues were software related (ie, instances in which the patient or HCP interacted with the SCREEN software and followed the instructions provided, but the software did not work as expected [eg, objects did not move where they were dragged or tapping on objects failed to highlight them] and the question could not be completed). After attrition, a total of 147 individuals aged ≥55 years with no previous diagnosis of MCI or dementia participated in the study ( Table 2 ). Of the 147 participants, 71 (48.3%) took part in only 1 round of screening on visit 1 (due to COVID-19 restrictions imposed on in-person research that prevented a second visit). The remaining 51.7% (76/147) of the participants took part in ≥2 rounds of screening across multiple visits (76/147, 51.7% participated in 2 rounds; 22/147, 15% participated in 3 rounds; and 13/147, 8.8% participated in 4 rounds of screening).

The sample population was 65.3% (96/147) female (mean age 70.2, SD 7.9 years) and 34.7% (51/147) male (mean age 72.5, SD 8.1 years), with age ranging from 55 to 88 years; 65.3% (96/147) had achieved the equivalent of or higher than a college diploma or certificate ( Table 2 ); and 32.7% (48/147) self-reported living with one or more chronic medical conditions ( Table 3 ). At the time of screening, 73.5% (108/147) of participants were taking medications whose side effects may include impairments to memory and thinking abilities [ 71 - 75 ]; therefore, medication use was accounted for in a subset of the analyses. Finally, 84.4% (124/147) of participants self-reported regularly using technology (eg, smartphone, laptop, or tablet) with high proficiency. A random sequence generator was used to determine the order for administering the MCI screens; the SCREEN was administered first 51.9% (134/258) of the time.

Construct Validity

Construct validity was assessed through a review of relevant peer-reviewed literature, comparing the constructs included in the SCREEN with those assessed by 2 of the tools identified in the literature as most sensitive for MCI screening: the MoCA 8.1 [ 76 ] and the Qmci [ 25 ]. Memory, language, and verbal skills are assessed in the MoCA and Qmci but are absent from the SCREEN. Tests of verbal fluency and logical memory have been shown to be particularly sensitive to early cognitive changes [ 77 , 78 ] but are similarly absent from the SCREEN.

Exploratory factor analysis was performed to examine the SCREEN’s ability to reliably measure risk of MCI. The Kaiser-Meyer-Olkin measure yielded a value of 0.79, exceeding the commonly accepted threshold of 0.70 and indicating that the sample was adequate for factor analysis. The Bartlett test of sphericity returned χ²₂₁=167.1 ( P <.001), confirming the presence of correlations among variables suitable for factor analysis. A principal component analysis revealed 2 components with eigenvalues of >1, cumulatively accounting for 52.12% of the variance, with the first factor alone explaining 37.8%. After the varimax rotation, the 2 factors exhibited distinct patterns of loadings, with the visual-spatial ability task loading predominantly on the second factor. The SCREEN tasks, except for visual-spatial ability, loaded substantially on the factors (>0.5), suggesting that the SCREEN possesses good convergent validity for assessing the risk of MCI.
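The component-retention step reported above (eigenvalues of the correlation matrix, retention of components with eigenvalues >1 per the Kaiser criterion, and variance explained) can be illustrated on synthetic data. This sketch does not use the study dataset; with 7 strongly correlated simulated items, a single component is retained:

```python
# Illustrative sketch of the Kaiser criterion on synthetic data:
# 7 correlated "task scores" sharing one latent factor.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
# 7 observed variables = shared latent factor + item-specific noise
scores = np.hstack([latent + 0.5 * rng.normal(size=(200, 1)) for _ in range(7)])

corr = np.corrcoef(scores, rowvar=False)               # 7 x 7 correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order
retained = int(np.sum(eigenvalues > 1))                # Kaiser criterion
explained = eigenvalues / eigenvalues.sum()            # proportion of variance
```

Because the diagonal of a correlation matrix is all 1s, the eigenvalues always sum to the number of items (here, 7), so each eigenvalue divided by that total gives the proportion of variance a component explains.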

Criterion Validity

The coding of SCREEN scores into a binary healthy and unhealthy outcome standardized the dependent variable to allow for criterion testing. Criterion validity was assessed using cross-tabulations and the analysis of confusion matrices, providing insights into the sensitivity and specificity of the SCREEN when compared to the Qmci. Of the 144 cases considered, 20 (13.9%) were true negatives, and 74 (51.4%) were true positives. The SCREEN’s sensitivity, which reflects its capacity to accurately identify healthy individuals (true positives), was 63.25% (74 correct identifications/117 actual positives). The specificity of the test, indicating its ability to accurately identify unhealthy individuals (true negatives), was 74.07% (20 correct identifications/27 actual negatives). Sensitivity and specificity were then derived using the discrepant analysis and resolver test described previously (whether the HCP referred the participant to a physician following the screens). The results were identical: the estimated sensitivity of the SCREEN was 63.3% (74/117), and the estimated specificity was 74% (20/27).

Internal Reliability

A Cronbach α of ≥0.70 is considered acceptable, and ≥0.90 is recommended for clinical instruments [ 79 ]. The estimated internal consistency of the SCREEN (N=147) was Cronbach α=0.63.
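For reference, the Cronbach α statistic reduces to a simple formula over the item variances and the variance of the total score; the sketch below uses illustrative data, not study results:

```python
# Sketch of Cronbach alpha: k/(k-1) * (1 - sum(item variances) / total variance).
# Data are illustrative only.
from statistics import pvariance

def cronbach_alpha(items):
    """items: a list of per-item score lists, all of equal length."""
    k = len(items)
    item_var = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Two perfectly consistent items yield alpha = 1
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```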

Test-Retest Reliability

Test-retest reliability analyses were conducted using the ICC for the SCREEN activity scores and the κ coefficient for the healthy and unhealthy classifications. Guidelines for interpretation of the ICC suggest that values <0.5 indicate poor reliability and values between 0.5 and 0.75 indicate moderate reliability [ 80 ]; the ICC for the SCREEN activity scores was 0.54. With respect to the κ coefficient, values of <0.2 indicate no agreement, 0.21 to 0.39 minimal agreement, 0.4 to 0.59 weak agreement, 0.6 to 0.79 moderate agreement, and >0.8 strong to almost perfect agreement [ 81 ]. The κ coefficient for the healthy and unhealthy classifications was 0.15.
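The κ coefficient used above for the binary healthy and unhealthy classifications corrects observed agreement for chance agreement; a minimal sketch (illustrative data, not study classifications):

```python
# Sketch of Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).
# Data are illustrative only.

def cohens_kappa(a, b):
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(1 for x, y in zip(a, b) if x == y) / n                # observed
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
    return (p_o - p_e) / (1 - p_e)
```

Perfect test-retest agreement yields κ=1, while agreement no better than chance yields κ=0; the observed κ of 0.15 therefore indicates close-to-chance agreement between administrations.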

Analysis of the Factors Impacting Healthy and Unhealthy Results

The Spearman rank correlation was used to assess the relationships between participants’ overall activity score on the SCREEN and their total time to complete the SCREEN; age, sex, and self-reported levels of education; technology use; medication use; amount of sleep; and level of anxiety (as measured using the GAS-10) at the time of SCREEN administration. Lower overall activity scores were moderately correlated with being older ( r s142 =–0.57; P <.001) and increased total time to complete the SCREEN ( r s142 =0.49; P <.001). There was also a moderate inverse relationship between overall activity score and total time to complete the SCREEN ( r s142 =–0.67; P <.001) whereby better performance was associated with quicker task completion. There were weak positive associations between overall activity score and increased technology use ( r s142 =0.34; P <.001) and higher level of education ( r s142 =0.21; P =.01).
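The Spearman rank correlation used above can be computed by ranking both variables and taking the Pearson correlation of the ranks; a minimal sketch for tie-free, illustrative data:

```python
# Sketch of the Spearman rank correlation for tie-free data:
# rank both variables, then compute the Pearson correlation of the ranks.
from statistics import mean

def ranks(xs):
    """1-based ranks of xs (no ties handled in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5
```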

A logistic regression model was used to predict the SCREEN result using data from 144 observations. The model’s predictors explained approximately 21.33% of the variance in the outcome variable. The likelihood ratio test indicated that the model provided a significantly better fit to the data than a model without predictors ( P <.001).

The SCREEN outcome variable ( healthy vs unhealthy ) was associated with the predictor variables sex and total time to complete the SCREEN. More specifically, female participants were more likely to obtain healthy SCREEN outcomes ( P =.007; 95% CI 0.32-2.05). For all participants, the longer it took to complete the SCREEN, the less likely they were to achieve a healthy SCREEN outcome ( P =.002; 95% CI –0.33 to –0.07). Age ( P =.25; 95% CI –0.09 to 0.02), medication use ( P =.96; 95% CI –0.9 to 0.94), technology use ( P =.44; 95% CI –0.28 to 0.65), level of education ( P =.14; 95% CI –0.09 to 0.64), level of anxiety ( P =.26; 95% CI –1.13 to 0.3), and hours of sleep ( P =.08; 95% CI –0.06 to 0.93) were not significant.

Impact of Practice Effects

The SCREEN was administered approximately 3 months apart, and separate paired-sample t tests were performed to compare SCREEN outcomes between visits 1 and 2 (76/147, 51.7%; Table 4 ), visits 2 and 3 (22/147, 15%), and visits 3 and 4 (13/147, 8.8%). The declining number of participants across visits was partially attributable to the early shutdown of data collection due to the COVID-19 pandemic; therefore, comparisons between visits 2 and 3 and visits 3 and 4 were not reported. Compared to participants’ SCREEN performance on visit 1, their overall mean activity score and overall processing time improved on their second administration of the SCREEN (score: t 75 =–2.86 and P =.005; processing time: t 75 =–2.98 and P =.004). Although the 7 task-specific activity scores on the SCREEN also increased between visits 1 and 2, these improvements were not significant, indicating that the difference in overall activity scores was cumulative and not attributable to a specific task ( Table 4 ).
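The paired-samples t statistic used to compare visit-1 and visit-2 scores reduces to the mean of the within-participant differences divided by its standard error; the sketch below uses illustrative values, not study data:

```python
# Sketch of the paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
# where d holds within-participant differences. Data are illustrative.
from math import sqrt
from statistics import mean, stdev

def paired_t(visit1, visit2):
    d = [b - a for a, b in zip(visit1, visit2)]  # visit-2 minus visit-1
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n))

t_stat = paired_t([1, 2, 3, 4], [2, 3, 4, 6])
```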

Principal Findings

Our study aimed to evaluate the effectiveness and reliability of the BrainFx SCREEN in detecting MCI in primary care settings. The research took place during the COVID-19 pandemic, which influenced the study’s execution and timeline. Despite these challenges, the findings offer valuable insights into cognitive impairment screening.

Brief MCI screening tools help time-strapped primary care physicians determine whether referral for a definitive battery of more time-consuming and expensive tests is warranted. These tools must optimize and balance the need for time efficiency while also being psychometrically valid and easily administered [ 82 ]. The importance of brevity is determined by a number of factors, including the clinical setting. Screens that can be completed in approximately ≤5 minutes [ 13 ] are recommended for faster-paced clinical settings (eg, emergency rooms and preoperative screens), whereas those that can be completed in 5 to 10 minutes are better suited to primary care settings [ 82 - 84 ]. Identifying affordable, psychometrically tested screening tests for MCI that integrate into clinical workflows and are easy to consistently administer and complete may help with the following:

  • Initiating appropriate diagnostic tests for signs and symptoms at an earlier stage
  • Normalizing and destigmatizing cognitive testing for older adults
  • Expediting referrals
  • Allowing for timely access to programs and services that can support aging in place or delay institutionalization
  • Reducing risk
  • Improving the psychosocial well-being of patients and their care partners by increasing access to information and resources that aid with future planning and decision-making [ 85 , 86 ]

Various cognitive tests are commonly used for detecting MCI. These include the Addenbrooke’s Cognitive Examination–Revised, Consortium to Establish a Registry for Alzheimer’s Disease, Sunderland Clock Drawing Test, Informant Questionnaire on Cognitive Decline in the Elderly, Memory Alteration Test, MMSE, MoCA 8.1, and Qmci [ 61 , 87 ]. The Addenbrooke’s Cognitive Examination–Revised, Consortium to Establish a Registry for Alzheimer’s Disease, MoCA 8.1, Qmci, and Memory Alteration Test are reported to have similar diagnostic accuracy [ 61 , 88 ]. The HCPs participating in this study reported using the MoCA 8.1 as their primary screening tool for MCI along with other assessments such as the MMSE and Trail Making Test parts A and B.

Recent research highlights the growing use of digital tools [ 51 , 89 , 90 ], mobile technology [ 91 , 92 ], virtual reality [ 93 , 94 ], and artificial intelligence [ 95 ] to improve early identification of MCI. Demeyere et al [ 51 ] developed the tablet-based, 10-item Oxford Cognitive Screen–Plus to detect slight changes in cognitive impairment across 5 domains of cognition (memory, attention, number, praxis, and language), which has been validated among neurologically healthy older adults. Statsenko et al [ 96 ] have explored improving the predictive capabilities of tests using artificial intelligence. Similarly, there is an emerging focus on the use of machine learning techniques to detect dementia by leveraging routinely collected clinical data [ 97 , 98 ]. This progression signifies a shift toward more technologically advanced, efficient, and potentially more accurate diagnostic approaches in the detection of MCI.

Whatever the modality, screening tools should be quick to administer, demonstrate consistent results over time and between different evaluators, cover all major cognitive areas, and be straightforward to both administer and interpret [ 99 ]. However, highly sensitive tests such as those suggested for screening carry a significant risk of false-positive diagnoses [ 15 ]. Given the high potential for harm of false positives, it is important to validate the psychometric properties of screening tests across different populations and understand how factors such as age and education can influence the results [ 99 ].

Our study did not assess the face validity of the SCREEN, but participating occupational therapists were comfortable with the test regimen. Nonetheless, the research team noted the absence of verbal fluency and memory tests in the SCREEN, both of which McDonnell et al [ 100 ] identified as being more sensitive to the more commonly seen amnestic MCI. Two of the most sensitive tools for MCI screening, the MoCA 8.1 [ 76 ] and Qmci [ 25 ], assess memory, language, and verbal skills, and tests of verbal fluency and logical memory have been shown to be particularly sensitive to early cognitive changes [ 77 , 78 ].

The constructs included in the SCREEN ( Table 1 ) were selected based on a single non–peer-reviewed study [ 58 ] using the 360 and traumatic brain injury data (N=188) that identified the constructs as predictive of brain injury. The absence of tasks that measure verbal fluency or logical memory in the SCREEN appears to weaken claims of construct validity. The principal component analysis of the SCREEN assessment identified 2 components accounting for 52.12% of the total variance. The first component was strongly associated with abstract reasoning, constructive ability, and divided attention, whereas the second was primarily influenced by visual-spatial abilities. This indicates that constructs related to perception, attention, and memory are central to the SCREEN scores.

The SCREEN’s binary outcome (healthy or unhealthy) created by the research team was based on comparisons with the Qmci. However, the method of identifying areas of challenge in the SCREEN by comparing the individual’s mean score on each of the 7 tasks with the mean scores of a global or filtered cohort in the LBB introduces potential biases or errors. These could arise from a surge in additions to the LBB from patients with specific characteristics, self-selection of participants, poorly trained SCREEN administrators, inclusion of nonstandard test results, underuse of appropriate filters, and underreporting of clinical conditions or factors such as socioeconomic status that impact performance in standardized cognitive tests.
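The norm-referenced comparison described above can be pictured as a z-score check of an individual's task score against a comparator cohort. The sketch below is a generic illustration under that assumption only: the SCREEN's actual algorithm is proprietary and may differ, and the scores and cutoff shown are hypothetical.

```python
from statistics import mean, stdev

def flag_area_of_challenge(score, cohort_scores, z_cut=-1.0):
    """Generic norm-referenced flag: does this score fall more than
    |z_cut| SDs below the comparator cohort mean? (Illustrative only;
    the SCREEN's proprietary method may differ.)"""
    m, s = mean(cohort_scores), stdev(cohort_scores)
    return (score - m) / s < z_cut

# Hypothetical cohort of task scores drawn from a comparator database.
cohort = [70, 75, 80, 72, 78]
```

Framed this way, the biases listed above are visible in the code: any shift in the composition of `cohort` moves the mean and SD against which every new individual is judged.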

The proprietary method of analyzing and reporting SCREEN results complicates traditional measurement of sensitivity and specificity. Our testing indicated a sensitivity of 63.25% and a specificity of 74.07% for identifying healthy (those without MCI) and unhealthy (those with MCI) individuals. The SCREEN’s Cronbach α of .63, slightly below the accepted threshold for clinical instruments, together with reliability scores that fell short of ideal standards, suggests a higher-than-acceptable level of random measurement error in its constructs. The lower reliability may also stem from an inadequate sample size or a limited number of scale items.
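Sensitivity and specificity follow directly from a 2×2 confusion matrix: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). The counts below are hypothetical values chosen only to reproduce the reported percentages; they are not the study's raw cell counts.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts that yield the reported
# 63.25% sensitivity and 74.07% specificity (illustration only).
sens, spec = sens_spec(tp=74, fn=43, tn=60, fp=21)
```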

The SCREEN’s results are less favorable compared to those of other digital MCI screening tools that similarly enable evaluation of specific cognitive domains but also provide validated, norm-referenced cutoff scores and methods for cumulative scoring in clinical settings (Oxford Cognitive Screen–Plus) [ 51 ] or of validated MCI screening tools used in primary care (eg, MoCA 8.1, Qmci, and MMSE) [ 51 , 87 ]. The SCREEN’s unique scoring algorithm and the dynamic denominator in data analysis necessitate caution in comparing these results to those of other tools with fixed scoring algorithms and known sensitivities [ 101 , 102 ]. We found the SCREEN to have lower-than-expected internal reliability, suggesting significant random measurement error. Test-retest reliability was weak for the healthy or unhealthy outcome but stronger for overall activity scores between tests. The variability in identifying areas of challenge could relate to technological difficulties or variability from comparisons with a growing database of test results.

Potential reasons for older adults’ poorer scores on timed tests include the impact of sensorimotor decline on touch screen sensation and reaction time [ 38 , 103 ], anxiety related to taking a computer-enabled test [ 104 - 106 ], or the anticipated consequences of a negative outcome [ 107 ]. However, these effects were unlikely to have influenced the results of this study. Practice effects were observed [ 29 , 108 ], but the SCREEN’s novelty suggests that familiarity was not gained through prior preparation or word of mouth, as this sample was self-selected and not randomized. Future research might also explore the impact of digital literacy and cultural differences in the interpretation of software constructs or icons on MCI screening in a randomized sample of older adults.

Limitations

This study had methodological limitations that warrant attention. The small sample size and the demographic distribution of the 147 participants aged ≥55 years, most of whom (96/147, 65.3%) were female and well educated, limit the generalizability of the findings to different populations. The study’s design, aiming to explore the sensitivity of the SCREEN for early detection of MCI, necessitated the exclusion of individuals with a previous diagnosis of MCI or dementia. This exclusion criterion might have impacted the study’s ability to thoroughly assess the SCREEN’s effectiveness in a more varied clinical context. The requirement for participants to read and comprehend English introduced another limitation to our study. This criterion potentially limited the SCREEN tool’s applicability across diverse linguistic backgrounds as individuals with language-based impairments or those not proficient in English may face challenges in completing the assessment [ 51 ]. Such limitations could impact the generalizability of our findings to non–English-speaking populations or to those with language impairments, underscoring the need for further research to evaluate the SCREEN tool’s effectiveness in broader clinical and linguistic contexts.

Financial constraints played a role in limiting the study’s scope. Due to funding limitations, it was not possible to include specialist assessments and a battery of neuropsychiatric tests generally considered the gold standard to confirm or rule out an MCI diagnosis. Therefore, the study relied on differential verification through 2 imperfect reference standards: a comparison with the Qmci (the tool with the highest published sensitivity to MCI in 2019, when the study was designed) and the clinical judgment of the administering HCP, particularly in decisions regarding referrals for further clinical assessment. Furthermore, while an economic feasibility assessment was considered, the research team determined that it should follow, not precede, an evaluation of the SCREEN’s validity and reliability.

The proprietary nature of the algorithm used for scoring the SCREEN posed another challenge. Without access to this algorithm, the research team had to use a novel comparative statistical approach, coding patient results into a binary variable: healthy (SCREEN=no areas of challenge OR Qmci≥67 out of 100) or unhealthy (SCREEN=one or more areas of challenge OR Qmci<67 out of 100). This may have introduced a higher level of error into our statistical analysis. Furthermore, the process for determining areas of challenge on the SCREEN involves comparing a participant’s result to the existing SCREEN results in the LBB at the time of testing. By the end of this study, the LBB contained 632 SCREEN results for adults aged ≥55 years, with this study contributing 258 of those results. The remaining 366 original SCREEN results, 64% of which were completed by individuals who self-identified as having a preexisting diagnosis or conditions associated with cognitive impairment (eg, traumatic brain injury, concussion, or stroke), could have led to an overestimation of the means and SDs of the study participants’ results at the outset of the study.
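The dichotomization described above can be operationalized as below, assuming each instrument's result is coded separately into the binary variable (the exact combination rule used in the analysis is as stated in the text; the function names here are illustrative).

```python
def screen_binary(areas_of_challenge: int) -> str:
    """Healthy when the SCREEN reports no areas of challenge."""
    return "healthy" if areas_of_challenge == 0 else "unhealthy"

def qmci_binary(score: float) -> str:
    """Qmci cutoff of 67 out of 100, as stated in the text."""
    return "healthy" if score >= 67 else "unhealthy"
```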

Unlike other cognitive screening tools, the SCREEN allows for filtering of results to compare different patient cohorts in the LBB using criteria such as age and education. However, at this stage of the LBB’s development, using such filters can significantly reduce the reliability of the results due to a smaller comparator population (ie, the denominator used to calculate the mean and SD). This, in turn, affects the significance of the results. Moreover, the constantly changing LBB data set makes it challenging to meaningfully compare an individual’s results over time as the evolving denominator affects the accuracy and relevance of these comparisons. Finally, the significant improvement in SCREEN scores between the first and second visits suggests the presence of practice effects, which could have influenced the reliability and validity of the findings.

Conclusions

In a primary care setting, where MCI screening tools are essential and recommended for those with concerns [ 85 ], certain criteria are paramount: time efficiency, ease of administration, and robust psychometric properties [ 82 ]. Our analysis of the BrainFx SCREEN suggests that, despite its innovative approach and digital delivery, it currently falls short in meeting these criteria. The SCREEN’s comparatively longer administration time and lower-than-expected reliability scores suggest that it may not be the most effective tool for MCI screening of older adults in a primary care setting at this time.

It is important to note that, in the wake of the COVID-19 pandemic, and with an aging population living and aging by design or necessity in a community setting, there is growing interest in digital solutions, including web-based applications and platforms to both collect digital biomarkers and deliver cognitive training and other interventions [ 109 , 110 ]. However, new normative standards are required when adapting cognitive tests to digital formats [ 92 ] as the change in medium can significantly impact test performance and results interpretation. Therefore, we recommend caution when interpreting our study results and encourage continued research and refinement of tools such as the SCREEN. This ongoing process will ensure that current and future MCI screening tools are effective, reliable, and relevant in meeting the needs of our aging population, particularly in primary care settings where early detection and intervention are key.

Acknowledgments

The researchers gratefully acknowledge the Ontario Centres of Excellence Health Technologies Fund for their financial support of this study; the executive directors and clinical leads in each of the Family Health Team study locations; the participants and their friends and families who took part in the study; and research assistants Sharmin Sharker, Kelly Zhu, and Muhammad Umair for their contributions to data management and statistical analysis.

Data Availability

The data sets generated and analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

JM contributed to the conceptualization, methodology, validation, formal analysis, data curation, writing—original draft, writing—review and editing, visualization, supervision, and funding acquisition. AML contributed to the conceptualization, methodology, validation, investigation, formal analysis, data curation, writing—original draft, writing—review and editing, visualization, and project administration. WP contributed to the validation, formal analysis, data curation, writing—original draft, writing—review and editing, and visualization. Finally, PH contributed to conceptualization, methodology, writing—review and editing, supervision, and funding acquisition.

Conflicts of Interest

None declared.

  • Casagrande M, Marselli G, Agostini F, Forte G, Favieri F, Guarino A. The complex burden of determining prevalence rates of mild cognitive impairment: a systematic review. Front Psychiatry. 2022;13:960648. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Petersen RC, Caracciolo B, Brayne C, Gauthier S, Jelic V, Fratiglioni L. Mild cognitive impairment: a concept in evolution. J Intern Med. Mar 2014;275(3):214-228. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Knopman DS, Petersen RC. Mild cognitive impairment and mild dementia: a clinical perspective. Mayo Clin Proc. Oct 2014;89(10):1452-1459. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Anderson ND. State of the science on mild cognitive impairment (MCI). CNS Spectr. Feb 2019;24(1):78-87. [ CrossRef ] [ Medline ]
  • Tangalos EG, Petersen RC. Mild cognitive impairment in geriatrics. Clin Geriatr Med. Nov 2018;34(4):563-589. [ CrossRef ] [ Medline ]
  • Ng R, Maxwell C, Yates E, Nylen K, Antflick J, Jette N, et al. Brain disorders in Ontario: prevalence, incidence and costs from health administrative data. Institute for Clinical Evaluative Sciences. 2015. URL: https:/​/www.​ices.on.ca/​publications/​research-reports/​brain-disorders-in-ontario-prevalence-incidence-and-costs-from-health-administrative-data/​ [accessed 2024-04-01]
  • Centers for Disease Control and Prevention (CDC). Self-reported increased confusion or memory loss and associated functional difficulties among adults aged ≥ 60 years - 21 states, 2011. MMWR Morb Mortal Wkly Rep. May 10, 2013;62(18):347-350. [ FREE Full text ] [ Medline ]
  • Petersen RC, Lopez O, Armstrong MJ, Getchius TS, Ganguli M, Gloss D, et al. Practice guideline update summary: mild cognitive impairment: report of the guideline development, dissemination, and implementation subcommittee of the American Academy of Neurology. Neurology. Jan 16, 2018;90(3):126-135. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Song WX, Wu WW, Zhao YY, Xu HL, Chen GC, Jin SY, et al. Evidence from a meta-analysis and systematic review reveals the global prevalence of mild cognitive impairment. Front Aging Neurosci. 2023;15:1227112. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chen Y, Denny KG, Harvey D, Farias ST, Mungas D, DeCarli C, et al. Progression from normal cognition to mild cognitive impairment in a diverse clinic-based and community-based elderly cohort. Alzheimers Dement. Apr 2017;13(4):399-405. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Langa KM, Levine DA. The diagnosis and management of mild cognitive impairment: a clinical review. JAMA. Dec 17, 2014;312(23):2551-2561. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zhang Y, Natale G, Clouston S. Incidence of mild cognitive impairment, conversion to probable dementia, and mortality. Am J Alzheimers Dis Other Demen. 2021;36:15333175211012235. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Prince M, Bryce R, Ferri CP. World Alzheimer report 2011: the benefits of early diagnosis and intervention. Alzheimer’s Disease International. 2011. URL: https://www.alzint.org/u/WorldAlzheimerReport2011.pdf [accessed 2024-04-01]
  • Patnode CD, Perdue LA, Rossom RC, Rushkin MC, Redmond N, Thomas RG, et al. Screening for cognitive impairment in older adults: updated evidence report and systematic review for the US preventive services task force. JAMA. Feb 25, 2020;323(8):764-785. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Canadian Task Force on Preventive Health Care, Pottie K, Rahal R, Jaramillo A, Birtwhistle R, Thombs BD, et al. Recommendations on screening for cognitive impairment in older adults. CMAJ. Jan 05, 2016;188(1):37-46. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tahami Monfared AA, Phan NT, Pearson I, Mauskopf J, Cho M, Zhang Q, et al. A systematic review of clinical practice guidelines for Alzheimer's disease and strategies for future advancements. Neurol Ther. Aug 2023;12(4):1257-1284. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mattke S, Jun H, Chen E, Liu Y, Becker A, Wallick C. Expected and diagnosed rates of mild cognitive impairment and dementia in the U.S. medicare population: observational analysis. Alzheimers Res Ther. Jul 22, 2023;15(1):128. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Manly JJ, Tang MX, Schupf N, Stern Y, Vonsattel JP, Mayeux R. Frequency and course of mild cognitive impairment in a multiethnic community. Ann Neurol. Apr 2008;63(4):494-506. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Black CM, Ambegaonkar BM, Pike J, Jones E, Husbands J, Khandker RK. The diagnostic pathway from cognitive impairment to dementia in Japan: quantification using real-world data. Alzheimer Dis Assoc Disord. 2019;33(4):346-353. [ CrossRef ] [ Medline ]
  • Ritchie CW, Black CM, Khandker RK, Wood R, Jones E, Hu X, et al. Quantifying the diagnostic pathway for patients with cognitive impairment: real-world data from seven European and north American countries. J Alzheimers Dis. 2018;62(1):457-466. [ CrossRef ] [ Medline ]
  • Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. Nov 1975;12(3):189-198. [ CrossRef ] [ Medline ]
  • Tsoi KK, Chan JY, Hirai HW, Wong SY, Kwok TC. Cognitive tests to detect dementia: a systematic review and meta-analysis. JAMA Intern Med. Sep 2015;175(9):1450-1458. [ CrossRef ] [ Medline ]
  • Lopez MN, Charter RA, Mostafavi B, Nibut LP, Smith WE. Psychometric properties of the Folstein mini-mental state examination. Assessment. Jun 2005;12(2):137-144. [ CrossRef ] [ Medline ]
  • Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal cognitive assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. Apr 2005;53(4):695-699. [ CrossRef ] [ Medline ]
  • O'Caoimh R, Timmons S, Molloy DW. Screening for mild cognitive impairment: comparison of "MCI specific" screening instruments. J Alzheimers Dis. 2016;51(2):619-629. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Trzepacz PT, Hochstetler H, Wang S, Walker B, Saykin AJ, Alzheimer’s Disease Neuroimaging Initiative. Relationship between the Montreal cognitive assessment and mini-mental state examination for assessment of mild cognitive impairment in older adults. BMC Geriatr. Sep 07, 2015;15:107. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nasreddine ZS, Phillips N, Chertkow H. Normative data for the Montreal Cognitive Assessment (MoCA) in a population-based sample. Neurology. Mar 06, 2012;78(10):765-766. [ CrossRef ] [ Medline ]
  • Monroe T, Carter M. Using the Folstein Mini Mental State Exam (MMSE) to explore methodological issues in cognitive aging research. Eur J Ageing. Sep 2012;9(3):265-274. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Damian AM, Jacobson SA, Hentz JG, Belden CM, Shill HA, Sabbagh MN, et al. The Montreal cognitive assessment and the mini-mental state examination as screening instruments for cognitive impairment: item analyses and threshold scores. Dement Geriatr Cogn Disord. 2011;31(2):126-131. [ CrossRef ] [ Medline ]
  • Kaufer DI, Williams CS, Braaten AJ, Gill K, Zimmerman S, Sloane PD. Cognitive screening for dementia and mild cognitive impairment in assisted living: comparison of 3 tests. J Am Med Dir Assoc. Oct 2008;9(8):586-593. [ CrossRef ] [ Medline ]
  • Gagnon C, Saillant K, Olmand M, Gayda M, Nigam A, Bouabdallaoui N, et al. Performances on the Montreal cognitive assessment along the cardiovascular disease continuum. Arch Clin Neuropsychol. Jan 17, 2022;37(1):117-124. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cooley SA, Heaps JM, Bolzenius JD, Salminen LE, Baker LM, Scott SE, et al. Longitudinal change in performance on the Montreal cognitive assessment in older adults. Clin Neuropsychol. 2015;29(6):824-835. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • O'Caoimh R, Gao Y, McGlade C, Healy L, Gallagher P, Timmons S, et al. Comparison of the quick mild cognitive impairment (Qmci) screen and the SMMSE in screening for mild cognitive impairment. Age Ageing. Sep 2012;41(5):624-629. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • O'Caoimh R, Molloy DW. Comparing the diagnostic accuracy of two cognitive screening instruments in different dementia subtypes and clinical depression. Diagnostics (Basel). Aug 08, 2019;9(3):93. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Clarnette R, O'Caoimh R, Antony DN, Svendrovski A, Molloy DW. Comparison of the Quick Mild Cognitive Impairment (Qmci) screen to the Montreal Cognitive Assessment (MoCA) in an Australian geriatrics clinic. Int J Geriatr Psychiatry. Jun 2017;32(6):643-649. [ CrossRef ] [ Medline ]
  • Glynn K, Coen R, Lawlor BA. Is the Quick Mild Cognitive Impairment screen (QMCI) more accurate at detecting mild cognitive impairment than existing short cognitive screening tests? A systematic review of the current literature. Int J Geriatr Psychiatry. Dec 2019;34(12):1739-1746. [ CrossRef ] [ Medline ]
  • Lee MT, Chang WY, Jang Y. Psychometric and diagnostic properties of the Taiwan version of the quick mild cognitive impairment screen. PLoS One. 2018;13(12):e0207851. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wallace SE, Donoso Brown EV, Simpson RC, D'Acunto K, Kranjec A, Rodgers M, et al. A comparison of electronic and paper versions of the Montreal cognitive assessment. Alzheimer Dis Assoc Disord. 2019;33(3):272-278. [ CrossRef ] [ Medline ]
  • Gagnon C, Olmand M, Dupuy EG, Besnier F, Vincent T, Grégoire CA, et al. Videoconference version of the Montreal cognitive assessment: normative data for Quebec-French people aged 50 years and older. Aging Clin Exp Res. Jul 2022;34(7):1627-1633. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Friemel TN. The digital divide has grown old: determinants of a digital divide among seniors. New Media & Society. Jun 12, 2014;18(2):313-331. [ CrossRef ]
  • Ventola CL. Mobile devices and apps for health care professionals: uses and benefits. P T. May 2014;39(5):356-364. [ FREE Full text ] [ Medline ]
  • Searles C, Farnsworth JL, Jubenville C, Kang M, Ragan B. Test–retest reliability of the BrainFx 360® performance assessment. Athl Train Sports Health Care. Jul 2019;11(4):183-191. [ CrossRef ]
  • Jones C, Miguel-Cruz A, Brémault-Phillips S. Technology acceptance and usability of the BrainFx SCREEN in Canadian military members and veterans with posttraumatic stress disorder and mild traumatic brain injury: mixed methods UTAUT study. JMIR Rehabil Assist Technol. May 13, 2021;8(2):e26078. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McMurray J, Levy A, Holyoke P. Psychometric evaluation and workflow integration study of a tablet-based tool to detect mild cognitive impairment in older adults: protocol for a mixed methods study. JMIR Res Protoc. May 21, 2021;10(5):e25520. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wilansky P, Eklund JM, Milner T, Kreindler D, Cheung A, Kovacs T, et al. Cognitive behavior therapy for anxious and depressed youth: improving homework adherence through mobile technology. JMIR Res Protoc. Nov 10, 2016;5(4):e209. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ammenwerth E, Iller C, Mahler C. IT-adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Med Inform Decis Mak. Jan 09, 2006;6:3. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Goodhue DL, Thompson RL. Task-technology fit and individual performance. MIS Q. Jun 1995;19(2):213-236. [ CrossRef ]
  • Beuscher L, Grando VT. Challenges in conducting qualitative research with individuals with dementia. Res Gerontol Nurs. Jan 2009;2(1):6-11. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Howe E. Informed consent, participation in research, and the Alzheimer's patient. Innov Clin Neurosci. May 2012;9(5-6):47-51. [ FREE Full text ] [ Medline ]
  • Thorogood A, Mäki-Petäjä-Leinonen A, Brodaty H, Dalpé G, Gastmans C, Gauthier S, et al. Global Alliance for Genomics and Health, Ageing and Dementia Task Team. Consent recommendations for research and international data sharing involving persons with dementia. Alzheimers Dement. Oct 2018;14(10):1334-1343. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Demeyere N, Haupt M, Webb SS, Strobel L, Milosevich ET, Moore MJ, et al. Introducing the tablet-based Oxford Cognitive Screen-Plus (OCS-Plus) as an assessment tool for subtle cognitive impairments. Sci Rep. Apr 12, 2021;11(1):8000. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nasreddine ZS, Patel BB. Validation of Montreal cognitive assessment, MoCA, alternate French versions. Can J Neurol Sci. Sep 2016;43(5):665-671. [ CrossRef ] [ Medline ]
  • Mueller AE, Segal DL, Gavett B, Marty MA, Yochim B, June A, et al. Geriatric anxiety scale: item response theory analysis, differential item functioning, and creation of a ten-item short form (GAS-10). Int Psychogeriatr. Jul 2015;27(7):1099-1111. [ CrossRef ] [ Medline ]
  • Segal DL, June A, Payne M, Coolidge FL, Yochim B. Development and initial validation of a self-report assessment tool for anxiety among older adults: the Geriatric Anxiety Scale. J Anxiety Disord. Oct 2010;24(7):709-714. [ CrossRef ] [ Medline ]
  • Balsamo M, Cataldi F, Carlucci L, Fairfield B. Assessment of anxiety in older adults: a review of self-report measures. Clin Interv Aging. 2018;13:573-593. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gatti A, Gottschling J, Brugnera A, Adorni R, Zarbo C, Compare A, et al. An investigation of the psychometric properties of the Geriatric Anxiety Scale (GAS) in an Italian sample of community-dwelling older adults. Aging Ment Health. Sep 2018;22(9):1170-1178. [ CrossRef ] [ Medline ]
  • Yochim BP, Mueller AE, June A, Segal DL. Psychometric properties of the Geriatric Anxiety Scale: comparison to the beck anxiety inventory and geriatric anxiety inventory. Clin Gerontol. Dec 06, 2010;34(1):21-33. [ CrossRef ]
  • Recent concussion (< 6 months ago) analysis result. Daisy Intelligence. 2016. URL: https://www.daisyintelligence.com/retail-solutions/ [accessed 2024-04-01]
  • Molloy DW, O'Caoimh R. The Quick Guide: Scoring and Administration Instructions for The Quick Mild Cognitive Impairment (Qmci) Screen. Waterford, Ireland. Newgrange Press; 2017.
  • O'Caoimh R, Gao Y, Svendovski A, Gallagher P, Eustace J, Molloy DW. Comparing approaches to optimize cut-off scores for short cognitive screening instruments in mild cognitive impairment and dementia. J Alzheimers Dis. 2017;57(1):123-133. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Breton A, Casey D, Arnaoutoglou NA. Cognitive tests for the detection of mild cognitive impairment (MCI), the prodromal stage of dementia: meta-analysis of diagnostic accuracy studies. Int J Geriatr Psychiatry. Feb 2019;34(2):233-242. [ CrossRef ] [ Medline ]
  • Umemneku Chikere CM, Wilson K, Graziadio S, Vale L, Allen AJ. Diagnostic test evaluation methodology: a systematic review of methods employed to evaluate diagnostic tests in the absence of gold standard - An update. PLoS One. 2019;14(10):e0223832. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Espinosa A, Alegret M, Boada M, Vinyes G, Valero S, Martínez-Lage P, et al. Ecological assessment of executive functions in mild cognitive impairment and mild Alzheimer's disease. J Int Neuropsychol Soc. Sep 2009;15(5):751-757. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hawkins DM, Garrett JA, Stephenson B. Some issues in resolution of diagnostic tests using an imperfect gold standard. Stat Med. Jul 15, 2001;20(13):1987-2001. [ CrossRef ] [ Medline ]



Kendall EK, Olaker VR, Kaelber DC, Xu R, Davis PB. Association of SARS-CoV-2 Infection With New-Onset Type 1 Diabetes Among Pediatric Patients From 2020 to 2021. JAMA Netw Open. 2022;5(9):e2233014. doi:10.1001/jamanetworkopen.2022.33014


Association of SARS-CoV-2 Infection With New-Onset Type 1 Diabetes Among Pediatric Patients From 2020 to 2021

  • 1 Center for Artificial Intelligence in Drug Discovery, Case Western Reserve University School of Medicine, Cleveland, Ohio
  • 2 The Center for Clinical Informatics Research and Education, The MetroHealth System, Cleveland, Ohio
  • 3 Center for Community Health Integration, Case Western Reserve University School of Medicine, Cleveland, Ohio

Incidence of new-onset type 1 diabetes (T1D) increased during the COVID-19 pandemic [1], and this increase has been associated with SARS-CoV-2 infection [2]. The US Centers for Disease Control and Prevention reported that pediatric patients with COVID-19 were more likely to be diagnosed with diabetes after infection, although types 1 and 2 were not separated [3]. Therefore, whether COVID-19 was associated with new-onset T1D among youths remains unclear. This cohort study assessed whether there was an increase in new diagnoses of T1D among pediatric patients after COVID-19.

Data were obtained using TriNetX Analytics Platform, a web-based database of deidentified electronic health records of more than 90 million patients, from the Global Collaborative Network, which includes 74 large health care organizations across 50 US states and 14 countries with diverse representation of geographic regions, self-reported race, age, income, and insurance types [4]. The MetroHealth System institutional review board deemed the study exempt because it was determined to be non–human participant research. The study followed the STROBE reporting guideline.

The study population comprised pediatric patients in 2 cohorts: (1) patients aged 18 years or younger with SARS-CoV-2 infection between March 2020 and December 2021 and (2) patients aged 18 years or younger without SARS-CoV-2 infection but with non–SARS-CoV-2 respiratory infection during the same period. SARS-CoV-2 infection was defined as described in prior studies [5]. These cohorts were subdivided into groups aged 0 to 9 years and 10 to 18 years.

Cohorts were propensity score matched (1:1 using nearest-neighbor greedy matching) for demographics and family history of diabetes (Table). Risk of a new diagnosis of T1D within 1, 3, and 6 months after infection was compared between matched cohorts using hazard ratios (HRs) and 95% CIs. Statistical analyses were conducted in the TriNetX Analytics Platform. Further details and analyses from the TriNetX database are given in the eMethods in the Supplement.
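The matching step the authors name can be illustrated with a minimal sketch: 1:1 nearest-neighbor greedy matching pairs each treated unit with the closest unused control by propensity score. The data, identifiers, and scores below are hypothetical (real scores would come from a model such as logistic regression on the covariates); this is not the TriNetX implementation.

```python
def greedy_match(treated, controls):
    """1:1 nearest-neighbor greedy matching on propensity scores.

    treated, controls: lists of (id, propensity_score) pairs.
    Each treated unit is paired with the nearest unused control,
    in the order treated units are given. Returns (treated_id, control_id) pairs.
    """
    available = dict(controls)  # control_id -> propensity score
    pairs = []
    for t_id, t_score in treated:
        if not available:
            break  # controls exhausted; remaining treated units stay unmatched
        # nearest unused control by absolute score distance
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        pairs.append((t_id, c_id))
        del available[c_id]  # greedy: each control is used at most once

    return pairs

# Hypothetical example
treated = [("t1", 0.30), ("t2", 0.55)]
controls = [("c1", 0.28), ("c2", 0.60), ("c3", 0.90)]
print(greedy_match(treated, controls))  # [('t1', 'c1'), ('t2', 'c2')]
```

Greedy matching is order dependent and, unlike optimal matching, does not minimize total distance across all pairs, which is one reason studies report covariate balance before and after matching, as the Table here does.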

The Table shows population characteristics before and after matching. The study population included 1 091 494 pediatric patients: 314 917 with COVID-19 and 776 577 with non–COVID-19 respiratory infections. The matched cohort included 571 256 pediatric patients: 285 628 with COVID-19 and 285 628 with non–COVID-19 respiratory infections. By 6 months after COVID-19, 123 patients (0.043%) had received a new diagnosis of T1D, but only 72 (0.025%) were diagnosed with T1D within 6 months after non–COVID-19 respiratory infection. At 1, 3, and 6 months after infection, risk of diagnosis of T1D was greater among those infected with SARS-CoV-2 compared with those with non–COVID-19 respiratory infection (1 month: HR, 1.96 [95% CI, 1.26-3.06]; 3 months: HR, 2.10 [95% CI, 1.48-3.00]; 6 months: HR, 1.83 [95% CI, 1.36-2.44]) and in subgroups of patients aged 0 to 9 years, a group unlikely to develop type 2 diabetes, and 10 to 18 years (Figure). Similar increased risks were observed among children infected with SARS-CoV-2 compared with other control cohorts at 6 months (fractures: HR, 2.09 [95% CI, 1.41-3.10]; well child visits: HR, 2.10 [95% CI, 1.61-2.73]).
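The reported 6-month percentages follow directly from the matched counts given in the paragraph above; a quick arithmetic check:

```python
n_matched = 285_628  # patients per arm after 1:1 matching
t1d_covid = 123      # new T1D diagnoses within 6 months, COVID-19 arm
t1d_other = 72       # new T1D diagnoses, other-respiratory-infection arm

pct_covid = round(100 * t1d_covid / n_matched, 3)
pct_other = round(100 * t1d_other / n_matched, 3)
print(pct_covid, pct_other)  # 0.043 0.025
```

Note that these crude percentages give a risk ratio of about 1.7, close to but not identical to the reported 6-month HR of 1.83, since hazard ratios also account for time to event and censoring.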

In this study, new T1D diagnoses were more likely to occur among pediatric patients with prior COVID-19 than among those with other respiratory infections (or with other encounters with health systems). Respiratory infections have previously been associated with onset of T1D [6], but this risk was even higher among those with COVID-19 in our study, raising concern for long-term, post–COVID-19 autoimmune complications among youths. Study limitations include potential biases owing to the observational and retrospective design of the electronic health record analysis, including the possibility of misclassification of diabetes as type 1 vs type 2, and the possibility that additional unidentified factors accounted for the association. Results should be confirmed in other populations. The increased risk of new-onset T1D after COVID-19 adds an important consideration for risk-benefit discussions for prevention and treatment of SARS-CoV-2 infection in pediatric populations.

Accepted for Publication: August 6, 2022.

Published: September 23, 2022. doi:10.1001/jamanetworkopen.2022.33014

Open Access: This is an open access article distributed under the terms of the CC-BY License . © 2022 Kendall EK et al. JAMA Network Open .

Corresponding Author: Rong Xu, PhD, Sears Tower T303, Center for Artificial Intelligence in Drug Discovery ( [email protected] ); Pamela B. Davis, MD, PhD, Sears Tower T402, Center for Community Health Integration ( [email protected] ), Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH 44106.

Author Contributions: Ms Kendall and Ms Olaker had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Kendall, Xu, Davis.

Acquisition, analysis, or interpretation of data: Kendall, Olaker, Kaelber, Xu.

Drafting of the manuscript: Kendall, Olaker.

Critical revision of the manuscript for important intellectual content: Kendall, Kaelber, Xu, Davis.

Statistical analysis: Kendall, Olaker, Xu.

Obtained funding: Xu.

Administrative, technical, or material support: All authors.

Supervision: Kaelber, Xu, Davis.

Conflict of Interest Disclosures: Dr Kaelber reported receiving grants from the National Institutes of Health during the conduct of the study. No other disclosures were reported.

Funding/Support: This study was supported by grants AG057557 (Dr Xu), AG061388 (Dr Xu), AG062272 (Dr Xu), and AG076649 (Drs Xu and Davis) from the National Institute on Aging; grant R01AA029831 (Drs Xu and Davis) from the National Institute on Alcohol Abuse and Alcoholism; grant UG1DA049435 from the National Institute on Drug Abuse; and grant 1UL1TR002548-01 from the Clinical and Translational Science Collaborative of Cleveland.

Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.

Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.



Research – Types, Methods and Examples


What is Research

Definition:

Research refers to the process of investigating a particular topic or question in order to discover new information, develop new insights, or confirm or refute existing knowledge. It involves a systematic and rigorous approach to collecting, analyzing, and interpreting data, and requires careful planning and attention to detail.

History of Research

The history of research can be traced back to ancient times when early humans observed and experimented with the natural world around them. Over time, research evolved and became more systematic as people sought to better understand the world and solve problems.

In ancient civilizations such as those in Greece, Egypt, and China, scholars pursued knowledge through observation, experimentation, and the development of theories. They explored various fields, including medicine, astronomy, and mathematics.

During the Middle Ages, research was often conducted by religious scholars who sought to reconcile scientific discoveries with their faith. The Renaissance brought about a renewed interest in science and the scientific method, and the Enlightenment period marked a major shift towards empirical observation and experimentation as the primary means of acquiring knowledge.

The 19th and 20th centuries saw significant advancements in research, with the development of new scientific disciplines and fields such as psychology, sociology, and computer science. Advances in technology and communication also greatly facilitated research efforts.

Today, research is conducted in a wide range of fields and is a critical component of many industries, including healthcare, technology, and academia. The process of research continues to evolve as new methods and technologies emerge, but the fundamental principles of observation, experimentation, and hypothesis testing remain at its core.

Types of Research

Types of Research are as follows:

  • Applied Research: This type of research aims to solve practical problems or answer specific questions, often in a real-world context.
  • Basic Research: This type of research aims to increase our understanding of a phenomenon or process, often without immediate practical applications.
  • Experimental Research: This type of research involves manipulating one or more variables to determine their effects on another variable, while controlling all other variables.
  • Descriptive Research: This type of research aims to describe and measure phenomena or characteristics, without attempting to manipulate or control any variables.
  • Correlational Research: This type of research examines the relationships between two or more variables, without manipulating any variables.
  • Qualitative Research: This type of research focuses on exploring and understanding the meaning and experience of individuals or groups, often through methods such as interviews, focus groups, and observation.
  • Quantitative Research: This type of research uses numerical data and statistical analysis to draw conclusions about phenomena or populations.
  • Action Research: This type of research is often used in education, healthcare, and other fields, and involves collaborating with practitioners or participants to identify and solve problems in real-world settings.
  • Mixed Methods Research: This type of research combines both quantitative and qualitative research methods to gain a more comprehensive understanding of a phenomenon or problem.
  • Case Study Research: This type of research involves in-depth examination of a specific individual, group, or situation, often using multiple data sources.
  • Longitudinal Research: This type of research follows a group of individuals over an extended period of time, often to study changes in behavior, attitudes, or health outcomes.
  • Cross-Sectional Research: This type of research examines a population at a single point in time, often to study differences or similarities among individuals or groups.
  • Survey Research: This type of research uses questionnaires or interviews to gather information from a sample of individuals about their attitudes, beliefs, behaviors, or experiences.
  • Ethnographic Research: This type of research involves immersion in a cultural group or community to understand their way of life, beliefs, values, and practices.
  • Historical Research: This type of research investigates events or phenomena from the past using primary sources, such as archival records, newspapers, and diaries.
  • Content Analysis Research: This type of research involves analyzing written, spoken, or visual material to identify patterns, themes, or messages.
  • Participatory Research: This type of research involves collaboration between researchers and participants throughout the research process, often to promote empowerment, social justice, or community development.
  • Comparative Research: This type of research compares two or more groups or phenomena to identify similarities and differences, often across different countries or cultures.
  • Exploratory Research: This type of research is used to gain a preliminary understanding of a topic or phenomenon, often in the absence of prior research or theories.
  • Explanatory Research: This type of research aims to identify the causes or reasons behind a particular phenomenon, often through the testing of theories or hypotheses.
  • Evaluative Research: This type of research assesses the effectiveness or impact of an intervention, program, or policy, often through the use of outcome measures.
  • Simulation Research: This type of research involves creating a model or simulation of a phenomenon or process, often to predict outcomes or test theories.
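The last entry in the list can be made concrete: a Monte Carlo simulation estimates a quantity by running many random trials of a simple model. The sketch below uses the classic toy example of estimating π by sampling points in the unit square; the function name and trial count are illustrative only.

```python
import random

def estimate_pi(trials, seed=0):
    """Estimate pi by drawing random points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)  # fixed seed makes the run reproducible
    inside = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # area of quarter circle / area of square = pi/4
    return 4 * inside / trials

print(estimate_pi(100_000))  # close to 3.14159
```

The same repeated-random-trials pattern underlies more realistic simulation studies, such as modeling patient flow or forecasting outcomes under different policy assumptions.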

Data Collection Methods

  • Surveys: Surveys are used to collect data from a sample of individuals using questionnaires or interviews. Surveys can be conducted face-to-face, by phone, mail, email, or online.
  • Experiments: Experiments involve manipulating one or more variables to measure their effects on another variable, while controlling for other factors. Experiments can be conducted in a laboratory or in a natural setting.
  • Case studies: Case studies involve in-depth analysis of a single case, such as an individual, group, organization, or event. Case studies can use a variety of data collection methods, including interviews, observation, and document analysis.
  • Observational research: Observational research involves observing and recording the behavior of individuals or groups in a natural setting. Observational research can be conducted covertly or overtly.
  • Content analysis: Content analysis involves analyzing written, spoken, or visual material to identify patterns, themes, or messages. Content analysis can be used to study media, social media, or other forms of communication.
  • Ethnography: Ethnography involves immersion in a cultural group or community to understand their way of life, beliefs, values, and practices. Ethnographic research can use a range of data collection methods, including observation, interviews, and document analysis.
  • Secondary data analysis: Secondary data analysis involves using existing data from sources such as government agencies, research institutions, or commercial organizations. Secondary data can be used to answer research questions without collecting new data.
  • Focus groups: Focus groups involve gathering a small group of people together to discuss a topic or issue. The discussions are usually guided by a moderator who asks questions and encourages discussion.
  • Interviews: Interviews involve one-on-one conversations between a researcher and a participant. Interviews can be structured, semi-structured, or unstructured, and can be conducted in person, by phone, or online.
  • Document analysis: Document analysis involves collecting and analyzing written documents, such as reports, memos, and emails. Document analysis can be used to study organizational communication, policy documents, and other forms of written material.

Data Analysis Methods

Data Analysis Methods in Research are as follows:

  • Descriptive statistics: Descriptive statistics involve summarizing and describing the characteristics of a dataset, such as mean, median, mode, standard deviation, and frequency distributions.
  • Inferential statistics: Inferential statistics involve making inferences or predictions about a population based on a sample of data, using methods such as hypothesis testing, confidence intervals, and regression analysis.
  • Qualitative analysis: Qualitative analysis involves analyzing non-numerical data, such as text, images, or audio, to identify patterns, themes, or meanings. Qualitative analysis can be used to study subjective experiences, social norms, and cultural practices.
  • Content analysis: Content analysis involves analyzing written, spoken, or visual material to identify patterns, themes, or messages. Content analysis can be used to study media, social media, or other forms of communication.
  • Grounded theory: Grounded theory involves developing a theory or model based on empirical data, using methods such as constant comparison, memo writing, and theoretical sampling.
  • Discourse analysis: Discourse analysis involves analyzing language use, including the structure, function, and meaning of words and phrases, to understand how language reflects and shapes social relationships and power dynamics.
  • Network analysis: Network analysis involves analyzing the structure and dynamics of social networks, including the relationships between individuals and groups, to understand social processes and outcomes.
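The first two entries above can be illustrated in a few lines of Python using only the standard library. The sample data below is made up, and the confidence interval uses a normal approximation for brevity (with a sample this small, a t-interval would be more accurate).

```python
import math
import statistics

data = [4, 8, 6, 5, 3, 8, 9, 5, 7, 5]  # made-up sample of 10 observations

# Descriptive statistics: summarize the sample itself
mean = statistics.mean(data)    # 6.0
median = statistics.median(data)  # 5.5
mode = statistics.mode(data)    # 5 (appears three times)
sd = statistics.stdev(data)     # sample standard deviation

# Inferential statistics: a rough 95% confidence interval for the
# population mean, via the normal approximation
se = sd / math.sqrt(len(data))
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(mean, median, mode, round(sd, 3), ci)
```

The descriptive values characterize only the observed dataset; the confidence interval is the inferential step, generalizing from the sample to the population it was drawn from.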

Research Methodology

Research methodology refers to the overall approach and strategy used to conduct a research study. It involves the systematic planning, design, and execution of research to answer specific research questions or test hypotheses. The main components of research methodology include:

  • Research design: Research design refers to the overall plan and structure of the study, including the type of study (e.g., observational, experimental), the sampling strategy, and the data collection and analysis methods.
  • Sampling strategy: Sampling strategy refers to the method used to select a representative sample of participants or units from the population of interest. The choice of sampling strategy will depend on the research question and the nature of the population being studied.
  • Data collection methods: Data collection methods refer to the techniques used to collect data from study participants or sources, such as surveys, interviews, observations, or secondary data sources.
  • Data analysis methods: Data analysis methods refer to the techniques used to analyze and interpret the data collected in the study, such as descriptive statistics, inferential statistics, qualitative analysis, or content analysis.
  • Ethical considerations: Ethical considerations refer to the principles and guidelines that govern the treatment of human participants or the use of sensitive data in the research study.
  • Validity and reliability: Validity and reliability refer to the extent to which the study measures what it is intended to measure and the degree to which the study produces consistent and accurate results.
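As a small illustration of the sampling component, the simplest strategy — a simple random sample, in which every unit has an equal chance of selection — can be drawn with Python's standard library. The population size, sample size, and seed here are made up for the sketch.

```python
import random

population = list(range(1, 1001))  # a made-up sampling frame of 1000 units

rng = random.Random(42)            # fixed seed so the draw is reproducible
sample = rng.sample(population, 50)  # simple random sample, without replacement

print(len(sample), sample[:5])
```

Other strategies (stratified, cluster, systematic sampling) modify how units are grouped or ordered before selection, but the goal is the same: a sample whose statistics generalize to the population of interest.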

Applications of Research

Research has a wide range of applications across various fields and industries. Some of the key applications of research include:

  • Advancing scientific knowledge: Research plays a critical role in advancing our understanding of the world around us. Through research, scientists are able to discover new knowledge, uncover patterns and relationships, and develop new theories and models.
  • Improving healthcare: Research is instrumental in advancing medical knowledge and developing new treatments and therapies. Clinical trials and studies help to identify the effectiveness and safety of new drugs and medical devices, while basic research helps to uncover the underlying causes of diseases and conditions.
  • Enhancing education: Research helps to improve the quality of education by identifying effective teaching methods, developing new educational tools and technologies, and assessing the impact of various educational interventions.
  • Driving innovation: Research is a key driver of innovation, helping to develop new products, services, and technologies. By conducting research, businesses and organizations can identify new market opportunities, gain a competitive advantage, and improve their operations.
  • Informing public policy: Research plays an important role in informing public policy decisions. Policy makers rely on research to develop evidence-based policies that address societal challenges, such as healthcare, education, and environmental issues.
  • Understanding human behavior: Research helps us to better understand human behavior, including social, cognitive, and emotional processes. This understanding can be applied in a variety of settings, such as marketing, organizational management, and public policy.

Importance of Research

Research plays a crucial role in advancing human knowledge and understanding in various fields of study. It is the foundation upon which new discoveries, innovations, and technologies are built. Here are some of the key reasons why research is essential:

  • Advancing knowledge: Research helps to expand our understanding of the world around us, including the natural world, social structures, and human behavior.
  • Problem-solving: Research can help to identify problems, develop solutions, and assess the effectiveness of interventions in various fields, including medicine, engineering, and social sciences.
  • Innovation: Research is the driving force behind the development of new technologies, products, and processes. It helps to identify new possibilities and opportunities for improvement.
  • Evidence-based decision making: Research provides the evidence needed to make informed decisions in various fields, including policy making, business, and healthcare.
  • Education and training: Research provides the foundation for education and training in various fields, helping to prepare individuals for careers and advancing their knowledge.
  • Economic growth: Research can drive economic growth by facilitating the development of new technologies and innovations, creating new markets and job opportunities.

When to Use Research

Research is typically used when seeking to answer questions or solve problems that require a systematic approach to gathering and analyzing information. Here are some examples of when research may be appropriate:

  • To explore a new area of knowledge: Research can be used to investigate a new area of knowledge and gain a better understanding of a topic.
  • To identify problems and find solutions: Research can be used to identify problems and develop solutions to address them.
  • To evaluate the effectiveness of programs or interventions: Research can be used to evaluate the effectiveness of programs or interventions in various fields, such as healthcare, education, and social services.
  • To inform policy decisions: Research can be used to provide evidence to inform policy decisions in areas such as economics, politics, and environmental issues.
  • To develop new products or technologies: Research can be used to develop new products or technologies and improve existing ones.
  • To understand human behavior: Research can be used to better understand human behavior and social structures, such as in psychology, sociology, and anthropology.

Characteristics of Research

The following are some of the characteristics of research:

  • Purpose: Research is conducted to address a specific problem or question and to generate new knowledge or insights.
  • Systematic: Research is conducted in a systematic and organized manner, following a set of procedures and guidelines.
  • Empirical: Research is based on evidence and data, rather than personal opinion or intuition.
  • Objective: Research is conducted with an objective and impartial perspective, avoiding biases and personal beliefs.
  • Rigorous: Research involves a rigorous and critical examination of the evidence and data, using reliable and valid methods of data collection and analysis.
  • Logical: Research is based on logical and rational thinking, following a well-defined and logical structure.
  • Generalizable: Research findings are often generalized to broader populations or contexts, based on a representative sample of the population.
  • Replicable: Research is conducted in a way that allows others to replicate the study and obtain similar results.
  • Ethical: Research is conducted in an ethical manner, following established ethical guidelines and principles, to ensure the protection of participants’ rights and well-being.
  • Cumulative: Research builds on previous studies and contributes to the overall body of knowledge in a particular field.

Advantages of Research

Research has several advantages, including:

  • Generates new knowledge: Research is conducted to generate new knowledge and understanding of a particular topic or phenomenon, which can be used to inform policy, practice, and decision-making.
  • Provides evidence-based solutions: Research provides evidence-based solutions to problems and issues, which can be used to develop effective interventions and strategies.
  • Improves quality: Research can improve the quality of products, services, and programs by identifying areas for improvement and developing solutions to address them.
  • Enhances credibility: Research enhances the credibility of an organization or individual by providing evidence to support claims and assertions.
  • Enables innovation: Research can lead to innovation by identifying new ideas, approaches, and technologies.
  • Informs decision-making: Research provides information that can inform decision-making, helping individuals and organizations make more informed and effective choices.
  • Facilitates progress: Research can facilitate progress by identifying challenges and opportunities and developing solutions to address them.
  • Enhances understanding: Research can enhance understanding of complex issues and phenomena, helping individuals and organizations navigate challenges and opportunities more effectively.
  • Promotes accountability: Research promotes accountability by providing a basis for evaluating the effectiveness of policies, programs, and interventions.
  • Fosters collaboration: Research can foster collaboration by bringing together individuals and organizations with diverse perspectives and expertise to address complex issues and problems.

Limitations of Research

Some limitations of research are as follows:

  • Cost: Research can be expensive, particularly when large-scale studies are required. This can limit the number of studies that can be conducted and the amount of data that can be collected.
  • Time: Research can be time-consuming, particularly when longitudinal studies are required. This can limit the speed at which research findings can be generated and disseminated.
  • Sample size: The size of the sample used in research can limit the generalizability of the findings to larger populations.
  • Bias: Research can be affected by bias, both in the design and implementation of the study and in the analysis and interpretation of the data.
  • Ethics: Research can present ethical challenges, particularly when human or animal subjects are involved. This can limit the types of research that can be conducted and the methods that can be used.
  • Data quality: The quality of the data collected in research can be affected by a range of factors, including the reliability and validity of the measures used, as well as the accuracy of the data entry and analysis.
  • Subjectivity: Research can be subjective, particularly when qualitative methods are used. This can limit the objectivity and reliability of the findings.
  • Accessibility: Research findings may not be accessible to all stakeholders, particularly those who are not part of the academic or research community.
  • Interpretation: Research findings can be open to interpretation, particularly when the data is complex or contradictory. This can limit the ability of researchers to draw firm conclusions.
  • Unforeseen events: Unexpected events, such as changes in the environment or the emergence of new technologies, can limit the relevance and applicability of research findings.
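
The sample-size limitation above has a simple quantitative face: the margin of error of an estimate shrinks only with the square root of the sample size, so quadrupling the sample roughly halves the uncertainty. A minimal sketch (the proportion and sample sizes are invented, and the normal-approximation formula assumes a simple random sample):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey in which 40% of respondents agree (p = 0.4)
for n in (100, 400, 1600):
    print(f"n = {n:4d}: margin of error = ±{margin_of_error(0.4, n):.3f}")
# prints ±0.096, ±0.048, ±0.024: quadrupling n halves the margin
```

This is why small samples limit generalizability: halving the margin of error costs four times the data collection effort.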

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer

Psychotherapy

Psychotherapy, also known as “talk therapy,” is when a person speaks with a trained therapist in a safe and confidential environment to explore and understand feelings and behaviors and gain coping skills.

During individual talk therapy sessions, the conversation is often led by the therapist and can touch on topics such as past or current problems, experiences, thoughts, feelings or relationships experienced by the person while the therapist helps make connections and provide insight.

Studies have found individual psychotherapy to be effective at improving symptoms in a wide array of mental illnesses, making it both a popular and versatile treatment. It can also be used for families, couples or groups. Best practice for treating many mental health conditions includes a combination of medication and therapy.

Popular Types Of Psychotherapy

Therapists offer many different types of psychotherapy. Some people respond better to one type of therapy than another, so a psychotherapist will take things like the nature of the problem being treated and the person’s personality into account when determining which treatment will be most effective.

Cognitive Behavioral Therapy

Cognitive behavioral therapy (CBT) focuses on exploring relationships among a person’s thoughts, feelings and behaviors. During CBT a therapist will actively work with a person to uncover unhealthy patterns of thought and how they may be causing self-destructive behaviors and beliefs.

By addressing these patterns, the person and therapist can work together to develop constructive ways of thinking that will produce healthier behaviors and beliefs. For instance, CBT can help someone replace thoughts that lead to low self-esteem (“I can’t do anything right”) with positive expectations (“I can do this most of the time, based on my prior experiences”).

The core principles of CBT are identifying negative or false beliefs and testing or restructuring them. Oftentimes someone being treated with CBT will have homework in between sessions where they practice replacing negative thoughts with more realistic thoughts based on prior experiences or record their negative thoughts in a journal.

Studies of CBT have shown it to be an effective treatment for a wide variety of mental illnesses, including depression, anxiety disorders, bipolar disorder, eating disorders and schizophrenia. Individuals who undergo CBT show changes in brain activity, suggesting that this therapy actually improves brain functioning as well.

Cognitive behavioral therapy has a considerable amount of scientific data supporting its use, and many mental health care professionals have training in CBT, making it both effective and accessible. More trained practitioners are needed to meet the public health demand, however.

Dialectical Behavior Therapy (DBT)

Dialectical behavior therapy (DBT) was originally developed to treat chronically suicidal individuals with borderline personality disorder (BPD). Over time, DBT has been adapted to treat people with multiple different mental illnesses, but most people who are treated with DBT have BPD as a primary diagnosis.

DBT is heavily based on CBT with one big exception: it emphasizes validation, or accepting uncomfortable thoughts, feelings and behaviors instead of struggling with them. By having an individual come to terms with the troubling thoughts, emotions or behaviors that they struggle with, change no longer appears impossible and they can work with their therapist to create a gradual plan for recovery.

The therapist’s role in DBT is to help the person find a balance between acceptance and change. They also help the person develop new skills, like coping methods and mindfulness practices, so that the person has the power to improve unhealthy thoughts and behaviors. Similar to CBT, individuals undergoing DBT are usually instructed to practice these new methods of thinking and behaving as homework between sessions. Improving coping strategies is an essential aspect of successful DBT treatment.

Studies have shown DBT to be effective at producing significant and long-lasting improvement for people experiencing a mental illness. It helps decrease the frequency and severity of dangerous behaviors, uses positive reinforcement to motivate change, emphasizes the individual’s strengths and helps translate the things learned in therapy to the person’s everyday life.

Eye Movement Desensitization And Reprocessing Therapy (EMDR)

Eye movement desensitization and reprocessing therapy (EMDR) is used to treat PTSD. A number of studies have shown it can reduce the emotional distress resulting from traumatic memories.

EMDR replaces negative emotional reactions to difficult memories with less-charged or positive reactions or beliefs. Performing a series of back and forth, repetitive eye movements for 20-30 seconds can help individuals change these emotional reactions.

Therapists refer to this protocol as “dual stimulation.” During the therapy, an individual stimulates the brain with back and forth eye movements (or specific sequences of tapping or musical tones). Simultaneously, the individual stimulates memories by recalling a traumatic event. There is controversy about EMDR—and whether the benefit is from the exposure inherent in the treatment or if movement is an essential aspect of the treatment.

Exposure Therapy

Exposure therapy is a type of cognitive behavioral therapy that is most frequently used to treat obsessive-compulsive disorder, posttraumatic stress disorder and phobias. During treatment, a person works with a therapist to identify the triggers of their anxiety and learn techniques to avoid performing rituals or becoming anxious when they are exposed to them. The person then confronts whatever triggers them in a controlled environment where they can safely practice implementing these strategies.

There are two methods of exposure therapy. One presents a large amount of the triggering stimulus all at once (“flooding”) and the other presents small amounts first and escalates over time (“desensitization”). Both help the person learn how to cope with what triggers their anxiety so they can apply it to their everyday life.

Interpersonal Therapy

Interpersonal therapy focuses on the relationships a person has with others, with a goal of improving the person’s interpersonal skills. In this form of psychotherapy, the therapist helps people evaluate their social interactions and recognize negative patterns, like social isolation or aggression, and ultimately helps them learn strategies for understanding and interacting positively with others.

Interpersonal therapy is most often used to treat depression, but may be recommended with other mental health conditions.

Mentalization-Based Therapy

Mentalization-based therapy (MBT) can bring long-term improvement to people with BPD, according to randomized clinical trials. MBT is a kind of psychotherapy that engages and exercises the important skill called mentalizing.

Mentalizing refers to the intuitive process that gives us a sense of self. When people consciously perceive and understand their own inner feelings and thoughts, it’s mentalizing. People also use mentalizing to perceive the behavior of others and to speculate about their feelings and thoughts. Mentalizing thus plays an essential role in helping us connect with other people.

BPD often causes feelings described as “emptiness” or “an unstable self-image.” Relationships with others tend to be unstable as well. MBT addresses this emptiness or instability by teaching skills in mentalizing. The theory behind MBT is that people with BPD have a weak ability to mentalize about their own selves, leading to weak feelings of self, over-attachment to others, and difficulty empathizing with the inner lives of other people.

In MBT, a therapist encourages a person with BPD to practice mentalizing, particularly about the current relationship with the therapist. Since people with BPD may grow attached to therapists quickly, MBT takes this attachment into account. By becoming aware of attachment feelings in a safe therapeutic context, a person with BPD can increase their ability to mentalize and learn increased empathy.

Compared to other forms of psychotherapy such as cognitive-behavioral therapy, MBT is less structured and should typically be long-term. The technique can be carried out by non-specialist mental health practitioners in individual and group settings.

Psychodynamic Psychotherapy

The goal of psychodynamic therapy is to recognize negative patterns of behavior and feeling that are rooted in past experiences and resolve them. This type of therapy often uses open-ended questions and free association so that people have the opportunity to discuss whatever is on their minds. The therapist then works with the person to sift through these thoughts and identify unconscious patterns of negative behavior or feelings and how they have been caused or influenced by past experiences and unresolved feelings. By bringing these associations to the person’s attention, they can learn to overcome the unhelpful behaviors and feelings that arise from them.

Psychodynamic therapy is often useful for treating depression, anxiety disorders, borderline personality disorder, and other mental illnesses.

Therapy Pets

Spending time with domestic animals can reduce symptoms of anxiety, depression, fatigue and pain for many people. Hospitals, nursing homes and other medical facilities sometimes make use of this effect by offering therapy animals. Trained therapy pets accompanied by a handler can offer structured animal-assisted therapy or simply visit people to provide comfort.

Dogs are the most popular animals to work as therapy pets, though other animals can succeed as well if they are docile and respond to training. Hospitals make use of therapy pets particularly for patients with cancer, heart disease and mental health conditions. The pets that are certified to visit medical facilities meet a high standard of training and are healthy and vaccinated.

For people with a mental health condition, research has shown that time with pets reduces anxiety levels more than other recreational activities. Pets also provide a non-judgmental form of interaction that can motivate and encourage people, especially children. Veterans with PTSD have also found therapy pets helpful.

A session with a therapy pet and its handler may focus on specific goals such as learning a skill through human-animal interaction. Alternatively, simply spending time holding a therapy pet can have benefits such as lower anxiety levels.

Though more research is necessary to establish why animal therapy is effective, one theory is that humans evolved to be highly aware of our natural environment, including the animals around us. The sight of a calm animal reassures us that the environment is safe, thus reducing anxiety and increasing our own feelings of calm.

Therapy animals are not the same as service animals, who receive a higher level of training and learn specific tasks for assisting one person on a long-term basis. Service animals are considered working animals, not pets. They have shown some promise in helping people with mental health conditions, particularly PTSD and panic disorders.

IMAGES

  1. Types of Research Methodology: Uses, Types & Benefits

  2. Types of Research by Method

  3. Research Methods

  4. Different Types of Research

  5. Types of Research

  6. Research

VIDEO

  1. The scientific approach and alternative approaches to investigation

  2. Kinds and Classification of Research

  3. Types of research, Approaches of research and research methodology

  4. Metho 2: Types of Research

  5. 2.Types of Research in education

  6. Types of Research methodology

COMMENTS

  1. Research Methods

    Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make. First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:

  2. Types of Research

    Because exploratory research is based on the study of little-studied phenomena, it relies less on theory and more on the collection of data to identify patterns that explain these phenomena. ... Explanatory research is the most common type of research method and is responsible for establishing cause-and-effect relationships that allow ...

  3. Types of Research Designs Compared

    Types of research can be categorized based on the research aims, the type of data, and the subjects, timescale, and location of the research. FAQ ... you have to make more concrete decisions about your research methods and the details of the study. Read more about creating a research design. Other interesting articles. If you want to know ...

  4. Research Methodology

    This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses. Mixed-Methods Research Methodology. ... Flexibility: Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, ...

  5. A tutorial on methodological studies: the what, when, how and why

    As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). ... , or studies on the development of methods using consensus-based approaches.

  6. Research Methods--Quantitative, Qualitative, and More: Overview

    About Research Methods. This guide provides an overview of research methods, how to choose and use them, and supports and resources at UC Berkeley. As Patten and Newhart note in the book Understanding Research Methods, "Research methods are the building blocks of the scientific enterprise. They are the "how" for building systematic knowledge.

  7. Research Methods

    Quantitative research methods are used to collect and analyze numerical data. This type of research is useful when the objective is to test a hypothesis, determine cause-and-effect relationships, and measure the prevalence of certain phenomena. Quantitative research methods include surveys, experiments, and secondary data analysis.

  8. What Is a Research Design

    Step 1: Consider your aims and approach. Step 2: Choose a type of research design. Step 3: Identify your population and sampling method. Step 4: Choose your data collection methods. Step 5: Plan your data collection procedures. Step 6: Decide on your data analysis strategies. Other interesting articles.

  9. Choosing the Right Research Methodology: A Guide

    Understanding different research methods: There are several research methods available depending on the type of study you are conducting, i.e., whether it is laboratory-based, clinical, epidemiological, or survey based. Some common methodologies include qualitative research, quantitative research, experimental research, survey-based research ...

  10. Research Methods

    You can also take a mixed methods approach, where you use both qualitative and quantitative research methods. Primary vs secondary data. Primary data are any original information that you collect for the purposes of answering your research question (e.g. through surveys, observations and experiments). Secondary data are information that has already been collected by other researchers (e.g. in ...

  11. What are Different Research Approaches? Comprehensive Review of

    a comprehensive review of qualitative, quantitative, and mixed-method research methods. Each method is clearly defined and specifically discussed based on applications, types, advantages, and limitations to help researchers identify and select the most relevant type based on each study and navigate accordingly. Keywords: Research methodology

  12. Research Methods

    To analyse data collected in a statistically valid manner (e.g. from experiments, surveys, and observations). Meta-analysis. Quantitative. To statistically analyse the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner. Thematic analysis.

  13. What are research methods?

    Closed-ended questionnaires/survey: These types of questionnaires or surveys are like "multiple choice" tests, where participants must select from a list of premade answers. According to the content of the question, they must select the one that they agree with the most. This approach is the simplest form of quantitative research because the data is easy to combine and quantify.

  14. Research Methods In Psychology

    Olivia Guy-Evans, MSc. Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.

  15. What is Research? Definition, Types, Methods and Process

    Research methods refer to the specific approaches and techniques used to collect and analyze data in a research study. There are various types of research methods, and researchers often choose the most appropriate method based on their research question, the nature of the data they want to collect, and the resources available to them. Some ...

  16. What Is Research? Types and Methods

    Types and Methods was originally published on Forage. Research is the process of examining a hypothesis to make discoveries. Practically every career involves research in one form or another. Accountants research their client's history and financial documents to understand their financial situation, and data scientists perform research to ...

  17. What is Research Methodology? Definition, Types, and Examples

    Definition, Types, and Examples. Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of ...

  18. Types of Research Methods (With Best Practices and Examples)
