Front Res Metr Anal

The Use of Research Methods in Psychological Research: A Systematised Review

Salomé Elizabeth Scholtz

1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa

Werner de Klerk

Leon T. de Beer

2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Research methods play an imperative role in research quality as well as in educating young researchers; however, their application is unclear, which can be detrimental to the field of psychology. This systematised review therefore aimed to determine which research methods are being used, how these methods are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted on 10 topics, predominantly via quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the methodology employed is described. It was also found that articles lacked rigour and transparency in the methodology used, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample, as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research utilise the results of this study to determine the possible impact on the field of psychology as a science and to investigate the use of research methods further. The results should prompt future research into the lack of rigour and its implications for replication, the use of certain methods above others, publication bias, and the choice of sampling method.

Introduction

Psychology is an ever-growing and popular field (Gough and Lyons, 2016; Clay, 2017). Due to this growth and the need for science-based research on which to base health decisions (Perestelo-Pérez, 2013), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011; Aanstoos, 2014). Research methods are viewed as important tools used by researchers to collect data (Nieuwenhuis, 2016) and include the following: quantitative, qualitative, mixed-method and multi-method approaches (Maree, 2016). Additionally, researchers also employ various types of literature reviews to address research questions (Grant and Booth, 2009). According to the literature, which research method is used, and why, is a complex matter, as it depends on various factors that may include paradigm (O'Neil and Koekemoer, 2016), research question (Grix, 2002), or the skill and exposure of the researcher (Nind et al., 2015). How these research methods are employed is also difficult to discern, as research methods are often depicted as having fixed boundaries that are continuously crossed in research (Johnson et al., 2001; Sandelowski, 2011). Examples of this crossing include adding quantitative aspects to qualitative studies (Sandelowski et al., 2009), or stating that a study used a mixed-method design without the study having any characteristics of this design (Truscott et al., 2010).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills (Scott Jones and Goldring, 2015), how theories are developed (Ngulube, 2013), and the credibility of research results (Levitt et al., 2017). This, in turn, can be detrimental to the field (Nind et al., 2015), journal publication (Ketchen et al., 2008; Ezeh et al., 2010), and attempts to address public social issues through psychological research (Dweck, 2017). This is especially important given the now well-known replication crisis the field is facing (Earp and Trafimow, 2015; Hengartner, 2018).

Due to this lack of clarity on method use and the potential impact of the inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as an opportunity to examine the development, growth and progress of a research area and the overall quality of a journal. Reviews of qualitative methods, such as those by Lee et al. (1999) and Bluhm et al. (2011), have attempted to synthesise the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in the field of industrial and organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods, with longitudinal studies underrepresented. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016). Other methods in health psychology, such as mixed-methods research, have also reportedly been growing in popularity (O'Cathain, 2009).

A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Therefore, our research focused on answering which research methods are being used, how these methods are being used, and for what topics in practice (i.e., journal publications), in order to provide a general perspective on method use in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018)], method [data-gathering tools (Nieuwenhuis, 2016)], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009)], data collection [techniques and research strategy (Maree, 2016)], and data analysis [discovering information by examining bodies of data (Ktepi, 2016)]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Grant and Booth (2009) describe systematised reviews as the review of choice for post-graduate studies, employing some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to those of a systematic review were a full search within the chosen database and data produced in tabular form (Grant and Booth, 2009).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014; Pericall and Taylor, 2014; Barr-Walker, 2017). With no clear parameters identified in the literature (see Grant and Booth, 2009), the sample size of this study was determined by the purpose of the sample (Strydom, 2011) and by time and cost constraints (Maree and Pietersen, 2016). Thus, a non-probability purposive sample (Ritchie et al., 2009) of the top five psychology journals from 2013 to 2017 was included in this research study. According to Lee (2015), the American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection, with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b) by means of the Scimago Journal Rank (SJR) indicator, developed by Scimago from the Google PageRank™ algorithm (Scimago Journal & Country Rank, 2017). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a). The Scimago Journal and Country Rank list was developed to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017), which supported the aim of this research study. Additionally, the journals' goals had to focus on topics in psychology in general, with no preference for specific research methods, and full-text access to articles had to be available.

The following top five journals in 2018 fell within the abovementioned inclusion criteria: (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology and, lastly, (5) The Journal of Psychology: Interdisciplinary and Applied.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually (Grant and Booth, 2009), with an independent person acting as co-coder. Codes of interest included the research topic, the method used, the design used, the sampling method, and the methodology (the methods used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.

Figure 1. Systematised review procedure.
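The coding-and-theming step in this procedure (deriving codes from each article's wording, grouping codes into themes, and tabulating the result) can be illustrated with a short script. This is a hypothetical sketch, not the authors' actual tooling: the article records, code names, and code-to-theme mapping below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical records: each article is tagged with codes derived
# from its own wording (topic, method, design, sampling, etc.).
articles = [
    {"title": "A", "codes": ["prejudice", "group identity"]},
    {"title": "B", "codes": ["memory", "attention"]},
    {"title": "C", "codes": ["group identity"]},
]

# Hypothetical mapping from derived codes to broader themes (research
# topics), analogous to grouping the 84 codes into 10 themes.
code_to_theme = {
    "prejudice": "Social psychology",
    "group identity": "Social psychology",
    "memory": "Cognitive psychology",
    "attention": "Cognitive psychology",
}

def theme_counts(articles, mapping):
    """Count how many articles fall under each theme. An article counts
    once per theme, even if several of its codes map to that theme."""
    counts = Counter()
    for article in articles:
        themes = {mapping[c] for c in article["codes"] if c in mapping}
        counts.update(themes)
    return counts

# Social psychology covers articles A and C; Cognitive psychology covers B.
print(theme_counts(articles, code_to_theme))
```

Collapsing an article's codes into a set before counting is what prevents double-counting a theme when multiple codes from the same article map to it.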

According to Johnston et al. (2019), “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected of systematic reviews with regard to a full search and data produced in tabular form (Grant and Booth, 2009). The rigorous application of the systematised review is therefore discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol, outlined according to each review stage, before collecting data (Johnston et al., 2019). This protocol was similar to that of Ferreira et al. (2016) and was approved by three research committees/stakeholders and the researchers (Johnston et al., 2019). The eligibility criteria for article inclusion were based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015; Johnston et al., 2019). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015). Screening for appropriate articles forms an integral part of a systematic review process (Johnston et al., 2019). This step was applied to two aspects of this research study: the choice of eligible journals and the articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by a process of elimination, those irrelevant to the research aim (i.e., interview articles, discussions, etc.) were excluded.

To ensure rigorous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019). Data was categorised according to the codes of interest, along with article identifiers for audit trails, such as the authors, title and aims of articles. The categorised data was based on the aim of the review (Johnston et al., 2019) and synthesised in tabular form under the methods used, how these methods were used, and for what topics in the field of psychology.

The initial search produced a total of 1,145 articles from the five journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles (Figure 2). Articles were co-coded into 84 codes, from which 10 themes were derived (Table 1).

Figure 2. Journal article frequency.

Table 1. Codes used to form themes (research topics).

These 10 themes represent the topic section of our research question (Figure 3). All these topics, except for the final one, psychological practice, were found to concur with the research areas in psychology identified by Weiten (2010). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten (2010). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of these disciplines.

Figure 3. Topic frequency (international sample).

In the case of developmental psychology, researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best ways to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology, on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology, while not the only theme to use experimental research, focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010). The final theme, psychological practice, refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.

Articles under these themes were further subdivided by methodology: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set represents a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the tables provide readers with in-depth insight into the methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying our results. Please note that the numbers indicated in the tables in terms of methodology differ from the total number of articles, as some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.

What follows are the results for which methods are used, how these methods are used, and which topics in psychology they are applied to. Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.
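The tallying behind these figures is straightforward: each occurrence of a method (or design, sampling technique, and so on) is counted, and the counts are expressed as percentages of the total, rounded to two decimals. A minimal sketch of this convention, using made-up counts rather than the study's actual data:

```python
def percentages(counts):
    """Convert raw occurrence counts into percentages of the total,
    rounded to two decimal places (the reporting convention used here)."""
    total = sum(counts.values())
    return {name: round(100 * n / total, 2) for name, n in counts.items()}

# Hypothetical method tallies across a sample of articles (for
# illustration only; these are not the counts behind the reported figures).
method_counts = {
    "quantitative": 921,
    "qualitative": 49,
    "review": 40,
    "mixed methods": 10,
    "multi-method": 1,
}
print(percentages(method_counts))
```

Note that because each entry is rounded independently, the reported percentages need not sum to exactly 100.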

Firstly, with regard to the research methods used, our results show that researchers are more likely to use quantitative research methods (90.22%) than any other research method. Qualitative research was the second most common method but made up only 4.79% of overall method usage. Reviews, the third most popular method, occurred almost as often as qualitative studies (3.91%). Mixed-methods studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study, amounting to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4.

Table 2. Research methods in psychology.

Figure 4. Research method frequency in topics.

Secondly, in the case of how these research methods are employed, our study indicated the following.

Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These included broad categorisations of a sample as, for example, a probability or non-probability sample. General samples of convenience were the most commonly applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), purposive sampling (1.37%) and cluster sampling (1.27%). The remaining sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for the sampling methods employed in each topic.

Table 3. Sampling use in the field of psychology.

Figure 5. Sampling method frequency in topics.

Designs were categorised based on the articles' statements thereof. It is therefore important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often indicated due to the absence of an experiment or any other indication of design, which, according to Laher (2016), is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of the data, as they could include correlational/cross-sectional designs that were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.

Of the 36 identified designs, the most popular were experimental (25.64%) and cross-sectional (23.17%) designs, which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, were used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), and narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review design being the systematic review (0.57%). The few mixed-methods studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. Please see how these designs were employed in each specific topic in Table 4 and Figure 6.

Table 4. Design use in the field of psychology.

Figure 6. Design frequency in topics.

Data collection and analysis: data collection included 30 methods, with the method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method and included established or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis, categorised into ±188 data analysis techniques, shown in Table 6 and Figures 1–7. Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or other forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed-method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.

Table 5. Data collection in the field of psychology.

Figure 7. Data collection frequency in topics.

Table 6. Data analysis in the field of psychology.

Results for the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. Each of the remaining topics occurred in only 4.0–7.0% of the articles considered. A list of the 999 included articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/. This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.

This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. The results indicated what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use and are by no means a comprehensive representation of the aforementioned aim, due to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia (Holloway, 2008).

With regard to the methods used, our data stayed true to the literature, finding only common research methods (Grant and Booth, 2009; Maree, 2016) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by the literature (Breen and Darlaston-Jones, 2010; Counsell and Harlow, 2017) and by previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014). Its long history as the first research method (Leech et al., 2007) in the field of psychology, as well as researchers' current application of mathematical approaches in their studies (Toomela, 2010), might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015; Smith and McGannon, 2018), quantitative research remains the first choice for article publication in these journals, even though the included journals indicated openness to articles applying any research method. This finding may be due to qualitative research still being seen as a new method (Burman and Whelan, 2011) or to reviewers' standards being higher for qualitative studies (Bluhm et al., 2011). Future research into possible bias in the publication of research methods is encouraged; additionally, further investigation with a different sample into the proclaimed growth of qualitative research may provide different results.

Review studies were found to outnumber multi-method and mixed-method studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, and efficiency in procuring research funds all promote the popularity of reviews. The low frequency of mixed-method studies contradicts the view in the literature that it is the third most utilised research method (Tashakkori and Teddlie, 2003). Its low occurrence in this sample could be due to opposing views on mixing methods (Gunasekare, 2015), to authors preferring to publish in mixed-methods journals when using this method, or to its relative novelty (Ivankova et al., 2016). Despite its low occurrence, the application of the mixed-methods design was methodologically clear in all cases, which was not the case for the remainder of the research methods.

Additionally, a substantial number of studies used a combination of methodologies without being mixed-method or multi-method studies. According to the literature, perceived fixed boundaries are often set aside in order to investigate the aim of a study, as confirmed by this result, which could create a new and helpful way of understanding the world (Gunasekare, 2015). According to Toomela (2010), this is not unheard of and could be considered a form of “structural systemic science,” as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design), for example. Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi- and mixed-method designs, is recommended.

Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies, regardless of method, did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to the literature and to debates among academics (Peterson and Merunka, 2014; Laher, 2016). Samples of convenience, and students as participants in particular, raise questions about the generalisability and applicability of results (Peterson and Merunka, 2014). Attention to sampling is important, as inappropriate sampling can undermine the legitimacy of interpretations (Onwuegbuzie and Collins, 2017). Future investigation into the possible implications of this reported popular use of convenience samples for the field of psychology, as well as the reasons for this use, could provide interesting insight and is encouraged by this study.

Additionally, as indicated in Table 6, articles seldom report the research designs used, which highlights the pressing issue of a lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science (American Psychological Association, 2020). Omitting parts of the research process in publication, when they could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and designs (Fonseca, 2013; Laher, 2016), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used (Drotar, 2010, p. 804). The lack of thorough description could be explained by the requirements of certain journals to report only on certain aspects of a research process, especially with regard to the applied design (Laher, 2016). However, naming aspects such as sampling and designs is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) (Appelbaum et al., 2018). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to only the reported samples and designs, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were, for the most part, clearly stated. A key result was the versatile use of questionnaires: researchers applied questionnaires in various ways, for example in questionnaire interviews, online surveys, and written questionnaires, across most research methods. This may highlight a trend for future research.

With regard to the topics these methods were employed for, our research study found a new field, named “psychological practice.” This result may show the growing consciousness of researchers as part of the research process (Denzin and Lincoln, 2003), psychological practice, and knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology enjoys in the field of psychology (Chryssochoou, 2015). The APA's perspective on 2018 trends in psychology also identifies an increased focus on how social determinants influence people's health (Deangelis, 2017).

This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address the aim of the research study; despite the general aims of these journals (as stated on journal websites), this selection implies a bias towards the research methods published in these specific journals only and limits generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error (Bandara et al., 2015). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or of whether the stated methods adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review; however, this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.

Our study presented the use of research methods in published psychology articles, together with recommendations for future research based on these results. It provided insight into the complex questions identified in the literature regarding what methods are used, how these methods are used, and for what topics (and why). The sample preferred quantitative methods, relied on convenience sampling, and showed a lack of rigorous reporting for the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated, giving researchers insight into the general use of methods and not only the most frequently used ones. The lack of rigorous reporting of research methods in articles was represented in depth for each step in the research process and is of vital importance for addressing the current replication crisis within the field of psychology. Recommendations for future research aim to motivate investigation into the practical implications of these results for psychology, for example, publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

  • Aanstoos C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid$=$1&hid$=$4113&bdata$=$JnNpdGU9ZWRzL~WxpdmU%3d#AN$=$93871882&db$=$ers
  • American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/
  • Appelbaum M., Cooper H., Kline R. B., Mayo-Wilson E., Nezu A. M., Rao S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report . Am. Psychol. 73 :3. 10.1037/amp0000191 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bandara W., Furtmueller E., Gorbacheva E., Miskon S., Beekhuyzen J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support . Commun. Ass. Inform. Syst. 37 , 154–204. 10.17705/1CAIS.03708 [ CrossRef ] [ Google Scholar ]
  • Barr-Walker J. (2017). Evidence-based information needs of public health workers: a systematized review . J. Med. Libr. Assoc. 105 , 69–79. 10.5195/JMLA.2017.109 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bittermann A., Fischer A. (2018). How to identify hot topics in psychology using topic modeling . Z. Psychol. 226 , 3–13. 10.1027/2151-2604/a000318 [ CrossRef ] [ Google Scholar ]
  • Bluhm D. J., Harman W., Lee T. W., Mitchell T. R. (2011). Qualitative research in management: a decade of progress . J. Manage. Stud. 48 , 1866–1891. 10.1111/j.1467-6486.2010.00972.x [ CrossRef ] [ Google Scholar ]
  • Breen L. J., Darlaston-Jones D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia . Aust. Psychol. 45 , 67–76. 10.1080/00050060903127481 [ CrossRef ] [ Google Scholar ]
  • Burman E., Whelan P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill. [ Google Scholar ]
  • Chaichanasakul A., He Y., Chen H., Allen G. E. K., Khairallah T. S., Ramos K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007) . J. Career. Dev. 38 , 440–455. 10.1177/0894845310380223 [ CrossRef ] [ Google Scholar ]
  • Chryssochoou X. (2015). Social Psychology . Inter. Encycl. Soc. Behav. Sci. 22 , 532–537. 10.1016/B978-0-08-097086-8.24095-6 [ CrossRef ] [ Google Scholar ]
  • Cichocka A., Jost J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies . Inter. J. Psychol. 49 , 6–29. 10.1002/ijop.12011 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Clay R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular
  • Coetzee M., Van Zyl L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology . SA. J. Psychol . 40 , 1–16. 10.4102/sajip.v40i1.1227 [ CrossRef ] [ Google Scholar ]
  • Counsell A., Harlow L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology . Can. Psychol. 58 , 140–147. 10.1037/cap0000074 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Deangelis T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors
  • Demuth C. (2015). New directions in qualitative research in psychology . Integr. Psychol. Behav. Sci. 49 , 125–133. 10.1007/s12124-015-9303-9 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Denzin N. K., Lincoln Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage. [ Google Scholar ]
  • Drotar D. (2010). A call for replications of research in pediatric psychology and guidance for authors . J. Pediatr. Psychol. 35 , 801–805. 10.1093/jpepsy/jsq049 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dweck C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe . Perspect. Psychol. Sci. 12 , 656–659. 10.1177/1745691616687747 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Earp B. D., Trafimow D. (2015). Replication, falsification, and the crisis of confidence in social psychology . Front. Psychol. 6 :621. 10.3389/fpsyg.2015.00621 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ezeh A. C., Izugbara C. O., Kabiru C. W., Fonn S., Kahn K., Manderson L., et al.. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model . Glob. Health Action 3 :5693. 10.3402/gha.v3i0.5693 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ferreira A. L. L., Bessa M. M. M., Drezett J., De Abreu L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review . Reprod. Clim. 31 , 48–54. 10.1016/j.recli.2015.12.002 [ CrossRef ] [ Google Scholar ]
  • Fonseca M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections
  • Gough B., Lyons A. (2016). The future of qualitative research in psychology: accentuating the positive . Integr. Psychol. Behav. Sci. 50 , 234–243. 10.1007/s12124-015-9320-8 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grant M. J., Booth A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies . Health Info. Libr. J. 26 , 91–108. 10.1111/j.1471-1842.2009.00848.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grix J. (2002). Introducing students to the generic terminology of social research . Politics 22 , 175–186. 10.1111/1467-9256.00173 [ CrossRef ] [ Google Scholar ]
  • Gunasekare U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review . Int. J. Sci. Res. 4 , 361–368. Available online at: https://ssrn.com/abstract=2735996 [ Google Scholar ]
  • Hengartner M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9 :256. 10.3389/fpsyg.2018.00256 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Holloway W. (2008). Doing intellectual disagreement differently . Psychoanal. Cult. Soc. 13 , 385–396. 10.1057/pcs.2008.29 [ CrossRef ] [ Google Scholar ]
  • Ivankova N. V., Creswell J. W., Plano Clark V. L. (2016). Foundations and Approaches to mixed methods research , in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers; ), 306–335. [ Google Scholar ]
  • Johnson M., Long T., White A. (2001). Arguments for British pluralism in qualitative health research . J. Adv. Nurs. 33 , 243–249. 10.1046/j.1365-2648.2001.01659.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Johnston A., Kelly S. E., Hsieh S. C., Skidmore B., Wells G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide . J. Clin. Epidemiol. 108 , 64–72. 10.1016/j.jclinepi.2018.11.030 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ketchen D. J., Jr., Boyd B. K., Bergh D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges . Organ. Res. Methods 11 , 643–658. 10.1177/1094428108319843 [ CrossRef ] [ Google Scholar ]
  • Ktepi B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers
  • Laher S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research . S. Afr. J. Psychol. 46 , 316–327. 10.1177/0081246316649121 [ CrossRef ] [ Google Scholar ]
  • Lee C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/
  • Lee T. W., Mitchell T. R., Sablynski C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999 . J. Vocat. Behav. 55 , 161–187. 10.1006/jvbe.1999.1707 [ CrossRef ] [ Google Scholar ]
  • Leech N. L., Anthony J., Onwuegbuzie A. J. (2007). A typology of mixed methods research designs . Sci. Bus. Media B. V Qual. Quant 43 , 265–275. 10.1007/s11135-007-9105-3 [ CrossRef ] [ Google Scholar ]
  • Levitt H. M., Motulsky S. L., Wertz F. J., Morrow S. L., Ponterotto J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity . Qual. Psychol. 4 , 2–22. 10.1037/qup0000082 [ CrossRef ] [ Google Scholar ]
  • Lowe S. M., Moore S. (2014). Social networks and female reproductive choices in the developing world: a systematized review . Rep. Health 11 :85. 10.1186/1742-4755-11-85 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Maree K. (2016). Planning a research proposal , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 49–70. [ Google Scholar ]
  • Maree K., Pietersen J. (2016). Sampling , in First Steps in Research, 2nd Edn , ed Maree K. (Pretoria: Van Schaik Publishers; ), 191–202. [ Google Scholar ]
  • Ngulube P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa . ESARBICA J. 32 , 10–23. Available online at: http://hdl.handle.net/10500/22397 . [ Google Scholar ]
  • Nieuwenhuis J. (2016). Qualitative research designs and data-gathering techniques , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 71–102. [ Google Scholar ]
  • Nind M., Kilburn D., Wiles R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods . Int. J. Soc. Res. Methodol. 18 , 561–576. 10.1080/13645579.2015.1062628 [ CrossRef ] [ Google Scholar ]
  • O'Cathain A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution . J. Mix. Methods 3 , 1–6. 10.1177/1558689808326272 [ CrossRef ] [ Google Scholar ]
  • O'Neil S., Koekemoer E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review . SA J. Indust. Psychol. 42 , 1–16. 10.4102/sajip.v42i1.1350 [ CrossRef ] [ Google Scholar ]
  • Onwuegbuzie A. J., Collins K. M. (2017). The role of sampling in mixed methods research enhancing inference quality . Köln Z Soziol. 2 , 133–156. 10.1007/s11577-017-0455-0 [ CrossRef ] [ Google Scholar ]
  • Perestelo-Pérez L. (2013). Standards on how to develop and report systematic reviews in psychology and health . Int. J. Clin. Health Psychol. 13 , 49–57. 10.1016/S1697-2600(13)70007-3 [ CrossRef ] [ Google Scholar ]
  • Pericall L. M. T., Taylor E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review . Dev. Med. Child Neurol. 56 , 19–30. 10.1111/dmcn.12237 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Peterson R. A., Merunka D. R. (2014). Convenience samples of college students and research reproducibility . J. Bus. Res. 67 , 1035–1041. 10.1016/j.jbusres.2013.08.010 [ CrossRef ] [ Google Scholar ]
  • Ritchie J., Lewis J., Elam G. (2009). Designing and selecting samples , in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed Ritchie J., Lewis J. (London: Sage; ), 1–23. [ Google Scholar ]
  • Sandelowski M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis . Res. Nurs. Health 34 , 342–352. 10.1002/nur.20437 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sandelowski M., Voils C. I., Knafl G. (2009). On quantitizing . J. Mix. Methods Res. 3 , 208–222. 10.1177/1558689809334210 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Scholtz S. E., De Klerk W., De Beer L. T. (2019). A data generated research framework for conducting research methods in psychological research .
  • Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015
  • Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scott Jones J., Goldring J. E. (2015). ‘I' m not a quants person'; key strategies in building competence and confidence in staff who teach quantitative research methods . Int. J. Soc. Res. Methodol. 18 , 479–494. 10.1080/13645579.2015.1062623 [ CrossRef ] [ Google Scholar ]
  • Smith B., McGannon K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology . Int. Rev. Sport Exerc. Psychol. 11 , 101–121. 10.1080/1750984X.2017.1317357 [ CrossRef ] [ Google Scholar ]
  • Stangor C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/
  • Strydom H. (2011). Sampling in the quantitative paradigm , in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds de Vos A. S., Strydom H., Fouché C. B., Delport C. S. L. (Pretoria: Van Schaik Publishers; ), 221–234. [ Google Scholar ]
  • Tashakkori A., Teddlie C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications. [ Google Scholar ]
  • Toomela A. (2010). Quantitative methods in psychology: inevitable and useless . Front. Psychol. 1 :29. 10.3389/fpsyg.2010.00029 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Truscott D. M., Swars S., Smith S., Thornton-Reid F., Zhao Y., Dooley C., et al.. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005 . Int. J. Soc. Res. Methodol. 13 , 317–328. 10.1080/13645570903097950 [ CrossRef ] [ Google Scholar ]
  • Weiten W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth. [ Google Scholar ]

Research Methods In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements that predict the results of an investigation and can be supported or refuted by the findings.

There are four types of hypotheses :
  • Null Hypotheses (H0 ) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative Hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable, but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference ….’

All research has an alternative hypothesis (either one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the psychologist would accept the alternative hypothesis and reject the null. The opposite applies if no difference is found.
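As a rough illustration of retaining or rejecting a null hypothesis in practice, the sketch below runs a simple two-tailed permutation test. Everything here is invented for illustration: the memory-score data, the group labels, and the `permutation_test` helper are hypothetical and not from any study described on this page.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    """Two-tailed permutation test of the null hypothesis (H0) that
    the group means do not differ, against the alternative (H1) that
    they do. Returns an approximate p-value."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-deal the scores into two arbitrary groups
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical memory scores under two conditions (invented data)
control = [12, 14, 11, 13, 12, 15]
caffeine = [16, 18, 15, 17, 19, 16]
p = permutation_test(control, caffeine)
# A small p-value (e.g. < .05) leads us to reject H0 and retain H1.
```

Because the test is two-tailed, it only asks whether a difference exists in either direction, matching a non-directional hypothesis.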

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which findings can be applied to the larger population of which the sample was a part.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling : when a system is used to select participants. Picking every Nth person from all possible participants. N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
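A few of the sampling techniques above can be sketched in Python. The population of 100 numbered "people", the employed/unemployed strata, and the sample size of 10 are all hypothetical values chosen for illustration.

```python
import random

population = list(range(1, 101))  # hypothetical IDs for 100 people
sample_size = 10
rng = random.Random(42)

# Random sampling: every member has an equal chance of selection.
random_sample = rng.sample(population, sample_size)

# Systematic sampling: pick every Nth person,
# where N = population size / sample size.
n = len(population) // sample_size          # N = 100 / 10 = 10
systematic_sample = population[::n]         # IDs 1, 11, 21, ..., 91

# Stratified sampling: sample each subgroup in proportion to its size.
strata = {"employed": list(range(1, 71)), "unemployed": list(range(71, 101))}
stratified_sample = []
for members in strata.values():
    share = round(len(members) / len(population) * sample_size)
    stratified_sample += rng.sample(members, share)  # 7 employed, 3 unemployed
```

The stratified sample mirrors the 70/30 split of the hypothetical population, which is exactly what "in proportion to their occurrences" means.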

Experiments always have an independent and dependent variable .

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables which are not the independent variable but could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in the way they think is expected.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design ( between-groups design ): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization. 
  • Matched participants design : each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design ( within groups) : each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.
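Counterbalancing can be sketched as a rotation of condition orders across participants. The condition labels and participant IDs below are hypothetical placeholders.

```python
from itertools import permutations

conditions = ("A", "B")  # e.g. A = quiet room, B = noisy room (hypothetical)
orders = list(permutations(conditions))  # every possible condition order

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

# Rotate through the possible orders so each order is used equally often:
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
# Each condition is now experienced first by half of the participants,
# so practice and fatigue effects should cancel out across the groups.
```

With two conditions this is the familiar AB/BA scheme; with more conditions, `permutations` enumerates all orderings, though a Latin square is often used instead to keep the number of orders manageable.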

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as the person concerned and also their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.


  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient . This is a value between -1 and +1, and the closer the absolute value is to 1, the stronger the relationship between the variables. The value can be positive, e.g. 0.63, or negative, e.g. -0.63.
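For illustration, Spearman's rho can be computed by hand from the textbook formula 1 - 6*sum(d^2) / (n*(n^2 - 1)), which holds when there are no tied ranks. The revision-hours data below are hypothetical.

```python
def rank(values):
    """Rank values from 1 (smallest) upward; assumes no tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
    where d is the difference between paired ranks."""
    n = len(x)
    d = [rx - ry for rx, ry in zip(rank(x), rank(y))]
    return 1 - 6 * sum(di ** 2 for di in d) / (n * (n ** 2 - 1))

# Hypothetical data: hours of revision vs. exam score (invented)
hours = [1, 3, 5, 7, 9]
score = [40, 55, 60, 70, 90]
rho = spearman_rho(hours, score)  # perfectly monotonic data -> rho = 1.0
```

Because rho works on ranks rather than raw scores, it measures monotonic association: any data in which more revision always goes with a higher score gives rho = 1.0, regardless of the exact values.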


A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved. 


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

Structured Interview

A fixed, predetermined set of questions is put to every participant in the same order and in the same way. 

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

Unstructured Interview

There are no set questions; the participant can raise whatever topics he/she feels are relevant and discuss them in their own way. Follow-up questions are posed in response to the participant's answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

The questionnaire's other practical advantages are that it is cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn't tell the participants they are being observed until after the study is complete. This method raises ethical problems around deception and informed consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed. Participants’ behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research , a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, it is described as reliable.

  • Test-retest reliability :  assessing the same person on two different occasions which shows the extent to which the test produces the same answers.
  • Inter-observer reliability : the extent to which there is an agreement between two or more observers.
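Inter-observer reliability is often quantified as simple percent agreement between two observers' codings; the behaviour categories and codings below are hypothetical. (More robust indices, such as Cohen's kappa, also correct for agreement expected by chance.)

```python
def percent_agreement(coder_a, coder_b):
    """Proportion of observation intervals on which two observers
    recorded the same behaviour category."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codings of ten behaviour intervals by two observers
observer_a = ["play", "rest", "play", "feed", "rest",
              "play", "play", "rest", "feed", "play"]
observer_b = ["play", "rest", "play", "feed", "play",
              "play", "play", "rest", "feed", "rest"]
agreement = percent_agreement(observer_a, observer_b)  # 8/10 = 0.8
```

An agreement of 0.8 would usually be considered acceptable, though the threshold depends on the field and the coding scheme.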

Meta-Analysis

A meta-analysis involves identifying an aim, systematically searching for research studies that have addressed similar aims/hypotheses, and then statistically combining their results.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the validity of the conclusions, as they are based on a wider range of studies and participants.

Weaknesses: Research designs in studies can vary, so they are not truly comparable.
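The statistical core of a meta-analysis is pooling effect sizes across studies, typically weighting each study by the inverse of its variance so that more precise studies count for more. Below is a minimal fixed-effect sketch; the effect sizes and variances are invented for the example.

```python
def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled effect size."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical Cohen's d values and their variances from four studies.
d_values = [0.30, 0.45, 0.25, 0.50]
variances = [0.04, 0.09, 0.02, 0.16]

print(round(pooled_effect(d_values, variances), 3))  # 0.305
```

Note how the most precise study (variance 0.02) pulls the pooled estimate towards its own effect size.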

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewer determines whether the article is accepted. The article may be: Accepted as it is, accepted with revisions, sent back to the author to revise and re-submit or rejected without the possibility of submission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers' comments/recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal; in practice, there are many problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that far more research and academic comment is now published without official peer review than before, though systems are evolving on the internet where everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g. reaction time or number of mistakes. It represents how much, how long, or how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : does the test measure what it's supposed to measure, 'on the face of it'? This is assessed by 'eyeballing' the measuring instrument or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we retain (fail to reject) our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In Psychology, we use p < 0.05 (as it strikes a balance between making a type I and II error) but p < 0.01 is used in tests that could cause harm like introducing a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
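The meaning of p < 0.05 and the Type I error can be made concrete with a simulation: when the null hypothesis is true (both groups are drawn from the same population), roughly 5% of tests still come out "significant" purely by chance. This sketch uses a simple z-style criterion on two-sample mean differences rather than a full t-test:

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is repeatable

def false_positive_rate(n=30, trials=2000, crit=1.96):
    """Proportion of 'significant' results when the null hypothesis is true."""
    false_positives = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]  # both groups drawn from
        b = [random.gauss(0, 1) for _ in range(n)]  # the SAME population
        se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
        z = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(z) > crit:  # would be reported as p < 0.05
            false_positives += 1
    return false_positives / trials

rate = false_positive_rate()
print(rate)  # roughly 0.05
```

Raising the criterion to the p < 0.01 equivalent (crit ≈ 2.58) lowers the Type I error rate but, as the notes above explain, raises the Type II error rate.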

Ethical Issues

  • Informed consent is when participants are able to make an informed judgment about whether to take part. However, providing full information may cause them to guess the aims of the study and change their behavior.
  • To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that the participants will understand.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Not upholding this right causes bias, as the participants who stay tend to be obedient, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though this may not always succeed, as it is sometimes possible to work out who the participants were.


Review Article

The Use of Research Methods in Psychological Research: A Systematised Review


  • 1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa
  • 2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Research methods play an imperative role in research quality as well as in educating young researchers; however, how they are applied is unclear, which can be detrimental to the field of psychology. Therefore, this systematised review aimed to determine what research methods are being used, how these methods are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted in 10 topics via predominantly quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the conducted methodology is described. It was also found that articles lacked rigour and transparency in the methodology used, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample, as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research utilise the results of this study to determine the possible impact on the field of psychology as a science and to further investigate the use of research methods. The results should prompt future research into: a lack of rigour and its implications for replication, the use of certain methods above others, publication bias, and the choice of sampling method.

Introduction

Psychology is an ever-growing and popular field ( Gough and Lyons, 2016 ; Clay, 2017 ). Due to this growth and the need for science-based research to base health decisions on ( Perestelo-Pérez, 2013 ), the use of research methods in the broad field of psychology is an essential point of investigation ( Stangor, 2011 ; Aanstoos, 2014 ). Research methods are therefore viewed as important tools used by researchers to collect data ( Nieuwenhuis, 2016 ) and include the following: quantitative, qualitative, mixed method and multi method ( Maree, 2016 ). Additionally, researchers also employ various types of literature reviews to address research questions ( Grant and Booth, 2009 ). According to literature, what research method is used and why a certain research method is used is complex as it depends on various factors that may include paradigm ( O'Neil and Koekemoer, 2016 ), research question ( Grix, 2002 ), or the skill and exposure of the researcher ( Nind et al., 2015 ). How these research methods are employed is also difficult to discern as research methods are often depicted as having fixed boundaries that are continuously crossed in research ( Johnson et al., 2001 ; Sandelowski, 2011 ). Examples of this crossing include adding quantitative aspects to qualitative studies ( Sandelowski et al., 2009 ), or stating that a study used a mixed-method design without the study having any characteristics of this design ( Truscott et al., 2010 ).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills ( Scott Jones and Goldring, 2015 ), how theories are developed ( Ngulube, 2013 ), and the credibility of research results ( Levitt et al., 2017 ). This, in turn, can be detrimental to the field ( Nind et al., 2015 ), journal publication ( Ketchen et al., 2008 ; Ezeh et al., 2010 ), and attempts to address public social issues through psychological research ( Dweck, 2017 ). This is especially important given the now well-known replication crisis the field is facing ( Earp and Trafimow, 2015 ; Hengartner, 2018 ).

Due to this lack of clarity on method use and the potential impact of the inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify the reviewing of articles as an opportunity to examine the development, growth and progress of a research area and the overall quality of a journal. Studies such as Lee et al. (1999) and Bluhm et al. (2011), which reviewed qualitative methods, have attempted to synthesise the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in the field of Industrial and Organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods, with longitudinal studies underrepresented. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016). Other methods, such as mixed methods research in health psychology, have also reportedly been growing in popularity (O'Cathain, 2009).

A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used and for what topics in practice (i.e., journal publications), in order to provide a general perspective of method use in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018)], method [data-gathering tools (Nieuwenhuis, 2016)], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009)], data collection [techniques and research strategy (Maree, 2016)], and data analysis [discovering information by examining bodies of data (Ktepi, 2016)]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Grant and Booth (2009) describe systematised reviews as the review of choice for post-graduate studies, which is employed using some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to that of a systematic review were a full search within the chosen database and data produced in tabular form ( Grant and Booth, 2009 ).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014; Pericall and Taylor, 2014; Barr-Walker, 2017). With no clear parameters identified in the literature (see Grant and Booth, 2009), the sample size of this study was determined by the purpose of the sample (Strydom, 2011) and by time and cost constraints (Maree and Pietersen, 2016). Thus, a non-probability purposive sample (Ritchie et al., 2009) of the top five psychology journals from 2013 to 2017 was included in this research study. According to Lee (2015), the American Psychological Association (APA) recommends using the most up-to-date sources for data collection, with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b) by means of the Scimago Journal Rank (SJR) indicator, developed by Scimago from the Google PageRank™ algorithm (Scimago Journal & Country Rank, 2017). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a). The Scimago Journal and Country Rank list was developed to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general, with no preference for specific research methods, and have full-text access to articles.

The following list of top five journals in 2018 fell within the abovementioned inclusion criteria (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology and lastly the (5) Journal of Psychology Applied and Interdisciplinary.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1 ) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually ( Grant and Booth, 2009 ) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.


Figure 1 . Systematised review procedure.

According to Johnston et al. (2019), "literature screening, selection, and data extraction/analyses" (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to a full search and data produced in tabular form (Grant and Booth, 2009). The rigorous application of the systematic review is therefore discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol outlined according to each review stage before collecting data (Johnston et al., 2019). This protocol was similar to that of Ferreira et al. (2016) and was approved by three research committees/stakeholders and the researchers (Johnston et al., 2019). The eligibility criteria for article inclusion were based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015; Johnston et al., 2019). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015). Screening for appropriate articles for inclusion forms an integral part of a systematic review process (Johnston et al., 2019). This step was applied to two aspects of this research study: the choice of eligible journals and the articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim, i.e., interview articles or discussions etc., were excluded.

To ensure rigorous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019). Data was categorised according to the codes of interest, along with article identifiers for audit trails such as the authors, title and aims of articles. The categorised data was based on the aim of the review (Johnston et al., 2019) and synthesised in tabular form under the methods used, how these methods were used, and for what topics in the field of psychology.

The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles ( Figure 2 ). Articles were co-coded into 84 codes, from which 10 themes were derived ( Table 1 ).


Figure 2 . Journal article frequency.


Table 1 . Codes used to form themes (research topics).

These 10 themes represent the topic section of our research question (Figure 3). All these topics, except for the final one (psychological practice), were found to concur with the research areas in psychology identified by Weiten (2010). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten (2010). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of these disciplines.


Figure 3 . Topic frequency (international sample).

In the case of developmental psychology, researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best way to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology, on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology is not the only theme that uses experimental research, but it is the theme that focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010). The final theme of psychological practice refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.

Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set represents a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the table format provides readers with in-depth insight into the methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying them. Please note that the numbers indicated in the tables in terms of methodology differ from the total number of articles, as some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.

What follows are the results for what methods are used, how these methods are used, and to which topics in psychology they are applied. Percentages are reported to the second decimal place in order to highlight small differences in the occurrence of methodology.

Firstly, with regard to the research methods used, our results show that researchers are far more likely to use quantitative research methods (90.22%) than any other research method. Qualitative research was the second most common research method but made up only 4.79% of general method usage. Reviews occurred almost as often as qualitative studies (3.91%), as the third most popular method. Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4.


Table 2 . Research methods in psychology.


Figure 4 . Research method frequency in topics.
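The tallying behind such percentages can be sketched in a few lines. The coded article records below are invented, but the logic mirrors the one described: each method occurrence is counted, so percentages are computed over occurrences rather than over articles.

```python
from collections import Counter

# Invented records: each coded article lists the method(s) it employed.
coded_articles = [
    {"topic": "social", "methods": ["quantitative"]},
    {"topic": "cognitive", "methods": ["quantitative"]},
    {"topic": "health", "methods": ["qualitative"]},
    {"topic": "social", "methods": ["quantitative", "qualitative"]},  # two methods
    {"topic": "psychometrics", "methods": ["review"]},
]

tally = Counter(m for article in coded_articles for m in article["methods"])
total = sum(tally.values())  # 6 method occurrences from 5 articles

for method, count in tally.most_common():
    print(f"{method}: {count / total * 100:.2f}%")
```

This also illustrates why the methodology totals in the tables can exceed the article count: the fourth record contributes to two method categories.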

Secondly, in the case of how these research methods are employed , our study indicated the following.

Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These sampling methods included the broad categorisation of a sample as, for example, a probability or non-probability sample. General samples of convenience were the methods most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remaining sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for the sampling methods employed in each topic.


Table 3 . Sampling use in the field of psychology.


Figure 5 . Sampling method frequency in topics.
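The gap between a sample of convenience and a simple random sample can be illustrated in a few lines; the sampling frame of 100 people and the sample size of 10 are invented for the example.

```python
import random

population = list(range(1, 101))  # a hypothetical sampling frame of 100 people

random.seed(7)
random_sample = random.sample(population, 10)  # every member has an equal chance
convenience_sample = population[:10]           # simply the first ten available

print(sorted(random_sample))
print(convenience_sample)  # always the first ten people in the frame
```

A convenience sample draws only from the most accessible part of the frame (here, the first ten entries), which is why such samples raise questions about generalisability.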

Designs were categorised based on the articles' statement thereof. Therefore, it is important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often indicated due to a lack of experiments and any other indication of design, which, according to Laher (2016) , is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of data, as it could include the use of correlational/cross-sectional designs, which were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.

From the 36 identified designs, the most popular were experimental (25.64%) and cross-sectional (23.17%) designs, which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, were used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), as well as narrative designs (0.28%). Studies that employed the review method were mostly categorised as "not stated," with the most often stated review design being systematic reviews (0.57%). The few mixed method studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. Please see Table 4 and Figure 6 for how these designs were employed in each specific topic.


Table 4 . Design use in the field of psychology.


Figure 6 . Design frequency in topics.

Data collection and analysis: data collection included 30 methods, with questionnaires the method most often employed (57.84%). The experimental task (16.56%) was the second most preferred collection method, which included established tasks or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis, categorised into ±188 data analysis techniques, shown in Table 6. Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or other forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.


Table 5 . Data collection in the field of psychology.


Figure 7 . Data collection frequency in topics.


Table 6 . Data analysis in the field of psychology.

The results for the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. Each of the remaining topics occurred in only 4.0–7.0% of the articles considered. A list of the 999 included articles is available under the section "View Articles" on the following website: https://methodgarden.xtrapolate.io/. This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.

This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. The results indicated what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use and are by no means a comprehensive representation of the aforementioned aim, due to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia (Holloway, 2008).

With regard to the methods used, our data stayed true to the literature, finding only common research methods (Grant and Booth, 2009; Maree, 2016) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by the literature (Breen and Darlaston-Jones, 2010; Counsell and Harlow, 2017) and previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014). Its long history as the first research method (Leech et al., 2007) in the field of psychology, as well as researchers' current application of mathematical approaches in their studies (Toomela, 2010), might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015; Smith and McGannon, 2018), quantitative research remains the first choice for article publication in these journals, even though the included journals indicated openness to articles applying any research method. This finding may be due to qualitative research still being seen as a new method (Burman and Whelan, 2011) or reviewers' standards being higher for qualitative studies (Bluhm et al., 2011). Future research is encouraged into possible bias in the publication of research methods; additionally, further investigation with a different sample into the proclaimed growth of qualitative research may provide different results.

Review studies were found to surpass multi-method and mixed method studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, as well as efficiency in procuring research funds all promote the popularity of reviews. The low frequency of mixed method studies contradicts the view in the literature that it is the third most utilised research method ( Tashakkori and Teddlie, 2003 ). Its low occurrence in this sample could be due to opposing views on mixing methods ( Gunasekare, 2015 ), authors preferring to publish in mixed method journals when using this method, or its relative novelty ( Ivankova et al., 2016 ). Despite its low occurrence, the application of the mixed methods design was methodologically clear in all cases, which was not the case for the remainder of the research methods.

Additionally, a substantial number of studies used a combination of methodologies that are not mixed or multi-method studies. According to the literature, perceived fixed boundaries are often set aside, as confirmed by this result, in order to investigate the aim of a study, which could create a new and helpful way of understanding the world ( Gunasekare, 2015 ). According to Toomela (2010) , this is not unheard of and could be considered a form of “structural systemic science,” as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design), for example. Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi and mixed methods, is recommended.

Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies—regardless of method—did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to literature and debates among academics ( Peterson and Merunka, 2014 ; Laher, 2016 ). Samples of convenience and students as participants especially raise questions about the generalisability and applicability of results ( Peterson and Merunka, 2014 ). Attention to sampling is important, as inappropriate sampling can undermine the legitimacy of interpretations ( Onwuegbuzie and Collins, 2017 ). Future investigation into the possible implications of this reported popular use of convenience samples for the field of psychology, as well as the reasons for this use, could provide interesting insight and is encouraged by this study.

Additionally, as indicated in Table 6 , articles seldom reported the research designs used, which highlights a pressing lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science ( American Psychological Association, 2020 ). Omitting parts of the research process in publication, when these could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied methods and designs ( Fonseca, 2013 ; Laher, 2016 ), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used ( Drotar, 2010 , p. 804). The lack of thorough description could be explained by the requirements of certain journals to report only on certain aspects of the research process, especially with regard to the applied design ( Laher, 2016 ). However, naming aspects such as sampling and designs is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) ( Appelbaum et al., 2018 ). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to only reported samples and designs, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires: researchers applied questionnaires in various ways across most research methods, for example in questionnaire interviews, online surveys, and written questionnaires. This may highlight a trend for future research.

With regard to the topics these methods were employed for, our research study found a new field named “psychological practice.” This result may show the growing consciousness of researchers as part of the research process ( Denzin and Lincoln, 2003 ), psychological practice, and knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology enjoys in the field of psychology ( Chryssochoou, 2015 ). The APA's perspective on 2018 trends in psychology also identifies an increased focus in psychology on how social determinants influence people's health ( Deangelis, 2017 ).

This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address the aim of the research study; despite the general aims of these journals (as stated on journal websites), this inclusion signified a bias towards the research methods published in these specific journals only and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error ( Bandara et al., 2015 ). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or whether the stated methods adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review. However, this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.

Our research study presented the use of research methods in published articles in the field of psychology, as well as recommendations for future research based on these results. Insight was gained into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). This sample preferred quantitative methods, used convenience sampling, and presented a lack of rigorous accounts of the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to allow researchers insight into the general use of methods, and not only the most frequently used methods. The lack of rigorous accounts of research methods in articles was represented in depth for each step in the research process and can be of vital importance in addressing the current replication crisis within the field of psychology. Recommendations for future research aimed to motivate research into the practical implications of these results for psychology, for example, publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Aanstoos, C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid$=$1&hid$=$4113&bdata$=$JnNpdGU9ZWRzL~WxpdmU%3d#AN$=$93871882&db$=$ers


American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., and Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73:3. doi: 10.1037/amp0000191


Bandara, W., Furtmueller, E., Gorbacheva, E., Miskon, S., and Beekhuyzen, J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support. Commun. Ass. Inform. Syst. 37, 154–204. doi: 10.17705/1CAIS.03708


Barr-Walker, J. (2017). Evidence-based information needs of public health workers: a systematized review. J. Med. Libr. Assoc. 105, 69–79. doi: 10.5195/JMLA.2017.109

Bittermann, A., and Fischer, A. (2018). How to identify hot topics in psychology using topic modeling. Z. Psychol. 226, 3–13. doi: 10.1027/2151-2604/a000318

Bluhm, D. J., Harman, W., Lee, T. W., and Mitchell, T. R. (2011). Qualitative research in management: a decade of progress. J. Manage. Stud. 48, 1866–1891. doi: 10.1111/j.1467-6486.2010.00972.x

Breen, L. J., and Darlaston-Jones, D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia. Aust. Psychol. 45, 67–76. doi: 10.1080/00050060903127481

Burman, E., and Whelan, P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill.

Chaichanasakul, A., He, Y., Chen, H., Allen, G. E. K., Khairallah, T. S., and Ramos, K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007). J. Career. Dev. 38, 440–455. doi: 10.1177/0894845310380223

Chryssochoou, X. (2015). Social Psychology. Inter. Encycl. Soc. Behav. Sci. 22, 532–537. doi: 10.1016/B978-0-08-097086-8.24095-6

Cichocka, A., and Jost, J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies. Inter. J. Psychol. 49, 6–29. doi: 10.1002/ijop.12011

Clay, R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular

Coetzee, M., and Van Zyl, L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA. J. Psychol . 40, 1–16. doi: 10.4102/sajip.v40i1.1227

Counsell, A., and Harlow, L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology. Can. Psychol. 58, 140–147. doi: 10.1037/cap0000074

Deangelis, T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors

Demuth, C. (2015). New directions in qualitative research in psychology. Integr. Psychol. Behav. Sci. 49, 125–133. doi: 10.1007/s12124-015-9303-9

Denzin, N. K., and Lincoln, Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage.

Drotar, D. (2010). A call for replications of research in pediatric psychology and guidance for authors. J. Pediatr. Psychol. 35, 801–805. doi: 10.1093/jpepsy/jsq049

Dweck, C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe. Perspect. Psychol. Sci. 12, 656–659. doi: 10.1177/1745691616687747

Earp, B. D., and Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6:621. doi: 10.3389/fpsyg.2015.00621

Ezeh, A. C., Izugbara, C. O., Kabiru, C. W., Fonn, S., Kahn, K., Manderson, L., et al. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model. Glob. Health Action 3:5693. doi: 10.3402/gha.v3i0.5693

Ferreira, A. L. L., Bessa, M. M. M., Drezett, J., and De Abreu, L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review. Reprod. Clim. 31, 48–54. doi: 10.1016/j.recli.2015.12.002

Fonseca, M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections

Gough, B., and Lyons, A. (2016). The future of qualitative research in psychology: accentuating the positive. Integr. Psychol. Behav. Sci. 50, 234–243. doi: 10.1007/s12124-015-9320-8

Grant, M. J., and Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. doi: 10.1111/j.1471-1842.2009.00848.x

Grix, J. (2002). Introducing students to the generic terminology of social research. Politics 22, 175–186. doi: 10.1111/1467-9256.00173

Gunasekare, U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review. Int. J. Sci. Res. 4, 361–368. Available online at: https://ssrn.com/abstract=2735996

Hengartner, M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9:256. doi: 10.3389/fpsyg.2018.00256

Holloway, W. (2008). Doing intellectual disagreement differently. Psychoanal. Cult. Soc. 13, 385–396. doi: 10.1057/pcs.2008.29

Ivankova, N. V., Creswell, J. W., and Plano Clark, V. L. (2016). “Foundations and Approaches to mixed methods research,” in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers), 306–335.

Johnson, M., Long, T., and White, A. (2001). Arguments for British pluralism in qualitative health research. J. Adv. Nurs. 33, 243–249. doi: 10.1046/j.1365-2648.2001.01659.x

Johnston, A., Kelly, S. E., Hsieh, S. C., Skidmore, B., and Wells, G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide. J. Clin. Epidemiol. 108, 64–72. doi: 10.1016/j.jclinepi.2018.11.030

Ketchen, D. J. Jr., Boyd, B. K., and Bergh, D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges. Organ. Res. Methods 11, 643–658. doi: 10.1177/1094428108319843

Ktepi, B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers

Laher, S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research. S. Afr. J. Psychol. 46, 316–327. doi: 10.1177/0081246316649121

Lee, C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/

Lee, T. W., Mitchell, T. R., and Sablynski, C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999. J. Vocat. Behav. 55, 161–187. doi: 10.1006/jvbe.1999.1707

Leech, N. L., Anthony, J., and Onwuegbuzie, A. J. (2007). A typology of mixed methods research designs. Sci. Bus. Media B. V Qual. Quant 43, 265–275. doi: 10.1007/s11135-007-9105-3

Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., and Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual. Psychol. 4, 2–22. doi: 10.1037/qup0000082

Lowe, S. M., and Moore, S. (2014). Social networks and female reproductive choices in the developing world: a systematized review. Rep. Health 11:85. doi: 10.1186/1742-4755-11-85

Maree, K. (2016). “Planning a research proposal,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 49–70.

Maree, K., and Pietersen, J. (2016). “Sampling,” in First Steps in Research, 2nd Edn , ed K. Maree (Pretoria: Van Schaik Publishers), 191–202.

Ngulube, P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa. ESARBICA J. 32, 10–23. Available online at: http://hdl.handle.net/10500/22397 .

Nieuwenhuis, J. (2016). “Qualitative research designs and data-gathering techniques,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 71–102.

Nind, M., Kilburn, D., and Wiles, R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods. Int. J. Soc. Res. Methodol. 18, 561–576. doi: 10.1080/13645579.2015.1062628

O'Cathain, A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution. J. Mix. Methods 3, 1–6. doi: 10.1177/1558689808326272

O'Neil, S., and Koekemoer, E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review. SA J. Indust. Psychol. 42, 1–16. doi: 10.4102/sajip.v42i1.1350

Onwuegbuzie, A. J., and Collins, K. M. (2017). The role of sampling in mixed methods research enhancing inference quality. Köln Z Soziol. 2, 133–156. doi: 10.1007/s11577-017-0455-0

Perestelo-Pérez, L. (2013). Standards on how to develop and report systematic reviews in psychology and health. Int. J. Clin. Health Psychol. 13, 49–57. doi: 10.1016/S1697-2600(13)70007-3

Pericall, L. M. T., and Taylor, E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review. Dev. Med. Child Neurol. 56, 19–30. doi: 10.1111/dmcn.12237

Peterson, R. A., and Merunka, D. R. (2014). Convenience samples of college students and research reproducibility. J. Bus. Res. 67, 1035–1041. doi: 10.1016/j.jbusres.2013.08.010

Ritchie, J., Lewis, J., and Elam, G. (2009). “Designing and selecting samples,” in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed J. Ritchie and J. Lewis (London: Sage), 1–23.

Sandelowski, M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis. Res. Nurs. Health 34, 342–352. doi: 10.1002/nur.20437

Sandelowski, M., Voils, C. I., and Knafl, G. (2009). On quantitizing. J. Mix. Methods Res. 3, 208–222. doi: 10.1177/1558689809334210

Scholtz, S. E., De Klerk, W., and De Beer, L. T. (2019). A data generated research framework for conducting research methods in psychological research.

Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015

Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).

Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).

Scott Jones, J., and Goldring, J. E. (2015). ‘I' m not a quants person'; key strategies in building competence and confidence in staff who teach quantitative research methods. Int. J. Soc. Res. Methodol. 18, 479–494. doi: 10.1080/13645579.2015.1062623

Smith, B., and McGannon, K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology. Int. Rev. Sport Exerc. Psychol. 11, 101–121. doi: 10.1080/1750984X.2017.1317357

Stangor, C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/

Strydom, H. (2011). “Sampling in the quantitative paradigm,” in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds A. S. de Vos, H. Strydom, C. B. Fouché, and C. S. L. Delport (Pretoria: Van Schaik Publishers), 221–234.

Tashakkori, A., and Teddlie, C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications.

Toomela, A. (2010). Quantitative methods in psychology: inevitable and useless. Front. Psychol. 1:29. doi: 10.3389/fpsyg.2010.00029

Truscott, D. M., Swars, S., Smith, S., Thornton-Reid, F., Zhao, Y., Dooley, C., et al. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005. Int. J. Soc. Res. Methodol. 13, 317–328. doi: 10.1080/13645570903097950

Weiten, W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth.

Keywords: research methods, research approach, research trends, psychological research, systematised review, research designs, research topic

Citation: Scholtz SE, de Klerk W and de Beer LT (2020) The Use of Research Methods in Psychological Research: A Systematised Review. Front. Res. Metr. Anal. 5:1. doi: 10.3389/frma.2020.00001

Received: 30 December 2019; Accepted: 28 February 2020; Published: 20 March 2020.

Copyright © 2020 Scholtz, de Klerk and de Beer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Salomé Elizabeth Scholtz, 22308563@nwu.ac.za

Library Home

Research Methods in Psychology - 4th American Edition

(40 reviews)

psychology research methods review

Carrie Cuttler, Washington State University

Rajiv S. Jhangiani, Kwantlen Polytechnic University

Dana C. Leighton, Texas A&M University, Texarkana

Copyright Year: 2019

ISBN 13: 9781999198107

Publisher: Kwantlen Polytechnic University

Language: English

Formats Available

Conditions of use.

Attribution-NonCommercial-ShareAlike

Learn more about reviews.

Reviewed by Beth Mechlin, Associate Professor of Psychology & Neuroscience, Earlham College on 3/19/24

This is an extremely comprehensive text for an undergraduate psychology course about research methods. It does an excellent job covering the basics of a variety of types of research design. It also includes important topics related to research... read more

Comprehensiveness rating: 5 see less

This is an extremely comprehensive text for an undergraduate psychology course about research methods. It does an excellent job covering the basics of a variety of types of research design. It also includes important topics related to research such as ethics, finding journal articles, and writing reports in APA format.

Content Accuracy rating: 5

I did not notice any errors in this text.

Relevance/Longevity rating: 5

The content is very relevant. It will likely need to be updated over time in order to keep research examples relevant. Additionally, APA formatting guidelines may need to be updated when a new publication manual is released. However, these should be easy updates for the authors to make when the time comes.

Clarity rating: 5

This text is very clear and easy to follow. The explanations are easy for college students to understand. The authors use a lot of examples to help illustrate specific concepts. They also incorporate a variety of relevant outside sources (such as videos) to provide additional examples.

Consistency rating: 5

The text is consistent and flows well from one section to the next. At the end of each large section (similar to a chapter) the authors provide key takeaways and exercises.

Modularity rating: 5

This text is very modular. It is easy to pick and choose which sections you want to use in your course when. Each section can stand alone fairly easily.

Organization/Structure/Flow rating: 5

The text is very well organized. Information flows smoothly from one topic to the next.

Interface rating: 5

The interface is great. The text is easy to navigate and the images display well (I only noticed 1 image in which the formatting was a tad off).

Grammatical Errors rating: 5

I did not notice any grammatical errors.

Cultural Relevance rating: 5

The text is culturally relevant.

This is an excellent text for an undergraduate research methods course in the field of Psychology. I have been using the text for my Research Methods and Statistics course for a few years now. This text focuses on research methods, so I do use another text to cover statistical information. I do highly recommend this text for research methods. It is comprehensive, clear, and easy for students to use.

Reviewed by William Johnson, Lecturer, Old Dominion University on 1/12/24

This textbook covers every topic that I teach in my Research Methods course aside from psychology careers (which I would not really expect it to cover). read more

This textbook covers every topic that I teach in my Research Methods course aside from psychology careers (which I would not really expect it to cover).

I have not noticed any inaccurate information (other than directed students to read Malcolm Gladwell). I appreciate that the textbook includes information on research errors that have not been supported by replication efforts, such as embodied cognition.

Many of the basic concepts of research methods are rather timeless, but I appreciate that the text includes newer research as examples while also including "classic" studies that exemplify different methods.

The writing is clear and simple. The keywords are bolded and reveal a definition when clicked, which students often find very helpful. Many of the figures are very helpful in helping students understand various methods (I really like the ones in the single-subject design subchapter).

The book is very consistent in its terminology and writing style, which I see as a positive compared to other open psychology textbooks where each chapter is written by subject matter experts (such as the NOBA intro textbook).

Modularity rating: 4

I teach this textbook almost entirely in order (except for moving chapters 12 & 13 earlier in the semester to aid students in writing Results sections in their final papers). I think that the organization and consistency of the book reduces its modularity, in that earlier chapters are genuinely helpful for later chapters.

Organization/Structure/Flow rating: 4

I preferred the organization of previous editions, which had "Theory in Research" as its own chapter. If I were organizing the textbook, I am not sure that I would have out descriptive or inferential statistics as the final two chapters (I would have likely put Chapter 11: Presenting Your Research as the final chapter). I also would not have put information about replicability and open science in the inferential statistics section.

The text is easy to read and the formatting is attractive. My only minor complaint is that some of the longer subchapters can be a pretty long scroll, but I understand the desire for their only to be one page per subchapter/topic.

I have not noticed any grammatical errors.

Cultural Relevance rating: 3

I do not think the textbook is insensitive, but there is not much thought given to adapting research instruments across cultures. For instance, talking about how different constructs might have different underlying distributions in different cultures would be useful for students. In the survey methods section, a discussion of back translation or emic personality trait measurement/development for example might be a nice addition.

I choose to use this textbook in my methods classes, but I do miss the organization of the previous American editions. Overall, I recommend this textbook to my colleagues.

Reviewed by Brianna Ewert, Psychology Instructor, Salish Kootenai College on 12/30/22

This text includes the majority of content included in our undergraduate Research Methods in Psychology course. The glossary provides concise definitions of key terms. This text includes most of the background knowledge we expect our students to... read more

Comprehensiveness rating: 4 see less

This text includes the majority of content included in our undergraduate Research Methods in Psychology course. The glossary provides concise definitions of key terms. This text includes most of the background knowledge we expect our students to have as well as skill-based sections that will support them in developing their own research projects.

The content I have read is accurate and error-free.

The content is relevant and up-to-date.

The text is clear and concise. I find it pleasantly readable and anticipate undergraduate students will find it readable and understandable as well.

The terminology appears to be consistent throughout the text.

The modular sections stand alone and lend themselves to alignment with the syllabus of a particular course. I anticipate readily selecting relevant modules to assign in my course.

The book is logically organized with clear and section headings and subheadings. Content on a particular topic is easy to locate.

The text is easy to navigate and the format/design are clean and clear. There are not interface issues, distortions or distracting format in the pdf or online versions.

The text is grammatically correct.

Cultural Relevance rating: 4

I have not found culturally insensitive and offensive language or content in the text. For my courses, I would add examples and supplemental materials that are relevant for students at a Tribal College.

This textbook includes supplemental instructor materials, included slides and worksheets. I plan to adopt this text this year in our Research Methods in Psychology course. I expect it to be a benefit to the course and students.

Reviewed by Sara Peters, Associate Professor of Psychology, Newberry College on 11/3/22

This text serves as an excellent resource for introducing survey research methods topics to undergraduate students. It begins with a background of the science of psychology, the scientific method, and research ethics, before moving into the main... read more

This text serves as an excellent resource for introducing survey research methods topics to undergraduate students. It begins with a background of the science of psychology, the scientific method, and research ethics, before moving into the main types of research. This text covers experimental, non-experimental, survey, and quasi-experimental approaches, among others. It extends to factorial and single subject research, and within each topic is a subset (such as observational research, field studies, etc.) depending on the section.

I could find no accuracy issues with the text, and appreciated the discussions of research and cited studies.

There are revised editions of this textbook (this being the 4th), and the examples are up to date and clear. The inclusion of exercises at the end of each chapter offer potential for students to continue working with material in meaningful ways as they move through the book and (and course).

The prose for this text is well aimed at the undergraduate population. This book can easily be utilized for freshman/sophomore level students. It introduces the scientific terminology surrounding research methods and experimental design in a clear way, and the authors provide extensive examples of different studies and applications.

Terminology is consistent throughout the text. Aligns well with other research methods and statistics sources, so the vocabulary is transferrable beyond the text itself.

Navigating this book is a breeze. There are 13 chapters, and each has subsections that can be assigned. Within each chapter subsection, there is a set of learning objectives, and paragraphs are mixed in with tables and figures for students to have different visuals. Different application assignments within each chapter are highlighted with boxes, so students can think more deeply given a set of constructs as they consider different information. The last subsection in each chapter has key summaries and exercises.

The sections and topics in this text are very straightforward. The authors begin with an introduction of psychology as a science, and move into the scientific method, research ethics, and psychological measurement. They then present multiple different research methodologies that are well known and heavily utilized within the social sciences, before concluding with information on how to present your research, and also analyze your data. The text even provides links throughout to other free resources for a reader.

This book can be navigated either online (using a drop-down menu), or as a pdf download, so students can have an electronic copy if needed. All pictures and text display properly on screen, with no distortions. Very easy to use.

There were no grammatical errors, and nothing distracting within the text.

This book includes inclusive material in the discussion of research ethics, as well as when giving examples of the different types of research approaches. While there is always room for improvement in terms of examples, I was satisfied with the breadth of research the authors presented.

This text provides an overview of both research methods, and a nice introduction to statistics for a social science student. It would be a good choice for a survey research methods class, and if looking to change a statistics class into an open resource class, could also serve as a great resource.

Reviewed by Sharlene Fedorowicz, Adjunct Professor, Bridgewater State University on 6/23/21

The comprehensiveness of this book was appropriate for an introductory undergraduate psychology course. Critical topics are covered that are necessary for psychology students to obtain foundational learning concepts for research. Sections within the text and each chapter provide areas for class discussion with students to dive deeper into key concepts for better learning comprehension. The text covered APA format along with examples of research studies to supplement the learning. The text segues appropriately by introducing the science of psychology, followed by the scientific method and ethics, before getting into the core of scientific research in the field of psychology. Details are provided on quantitative and qualitative research, correlations, surveys, and research design. Overall, the text is fully comprehensive and covers the necessary introductory research concepts.

The text appears to be accurate with no issues related to content.

Relevance/Longevity rating: 4

The text provided relevant research information to support the learning. The content was up to date, with a variety of examples related to the different fields of psychology. However, some topics, such as those in the pseudoscience section, were not very relevant and bordered on matters of belief. More current, concrete examples, such as the disconnect between information found on social media and real science, would strengthen this part of the text for students.

The language and flow of the chapters accompanied by the terms, concepts, and examples of applied research allows for clarity of learning content. Terms were introduced at the appropriate time with the support of concepts and current or classic research. The writing style flows nicely and segues easily from concept to concept. The text is easy for students to understand and grasp the details related to psychological research and science.

The text provides consistency in the outline of each chapter. Each chapter begins with objectives as an overview to help students unpack the learning content. Key terms are consistently bolded, followed by the concept or definition and relevant examples. Research examples are pertinent and give students an opportunity to understand application of the contents. Practice exercises are provided within the chapter and at the end so that students can integrate learning concepts from the text.

Sections and subsections are clearly organized and divided appropriately for ease-of-use. The topics are easily discernible and follow the flow of ideal learning routines for students. The sections and subsections are consistently outlined for each concept module. The modularity provides consistency allowing for students to focus on content rather than trying to discern how to pull out the information differently from each chapter or section. In addition, each section and subsection allow for flexibility in learning or expanding concepts within the content area.

The organization of the textbook was easy to follow, and each major topic was outlined clearly. However, the chapter on presenting research may be more appropriately placed toward the end of the book rather than in the middle of the chapters related to research and research design. In addition, more information could have been provided upfront about APA format so that students could identify the format of citations within the text as practice throughout the book.

The interface of the book lends itself to a nice layout, with appropriate examples and links to break up the different sections in the chapters. Examples were appropriate and provided engagement opportunities for students in each learning module. Images and QR codes are easily viewed and used. Key terms are highlighted, and relevant figures, graphs, and tables were appropriately placed. Overall, the interface of the text assisted with the organization and flow of learning material.

No grammatical errors were detected in this book.

The text appears to be culturally sensitive and not offensive. A variety of current and classic research examples are relevant. However, more examples of research from women, minorities, and ethnicities would strengthen the culture of this textbook. Instructors may need to supplement some research in this area to provide additional inclusivity.

Overall, I was impressed by the layout of the textbook and the ease of use. The layout provides a set of expectations for students related to the routine of how the book is laid out and how students will be able to unpack the information. Research examples were relevant, although I see areas where I will supplement information. The book provides opportunities for students to dive deeper into the learning and have rich conversations in the classroom. I plan to start using the psychology textbook for my students starting next year.

Reviewed by Anna Behler, Assistant Professor, North Carolina State University on 6/1/21

The text is very thorough and covers all of the necessary topics for an undergraduate psychology research methods course. There is even coverage of qualitative research, case studies, and the replication crisis which I have not seen in some other texts.

There were no issues with the accuracy of the text.

The content is very up to date and relevant for a research methods course. The only updates that will likely be necessary in the coming years are updates to examples and modifications to the section on APA formatting.

The clarity of the writing was good, and the chapters were written in a way that was accessible and easy to follow.

I did not note any issues with consistency.

Each chapter is divided into multiple subsections. This makes the chapters even easier to read, as they are broken down into short and easy to navigate sections. These sections make it easy to assign readings as needed depending on which topics are being covered in class.

Organization/Structure/Flow rating: 3

The organization was one of the few areas of weakness, and I felt that the chapters were ordered somewhat oddly. However, this is something that is easily fixed, as chapters (and even subsections) can be assigned in whatever order is needed.

There were no issues of note with the interface, and the PDF of the text was easy to navigate.

The text was well written and there were no grammatical/writing errors of note.

Overall, the book did not contain any notable instances of bias. However, it would probably be appropriate to offer a more thorough discussion of the WEIRD problem in psychology research.

Reviewed by Seth Surgan, Professor, Worcester State University on 5/24/21

Pitched very well for a 200-level Research Methods course. This text provided students with solid basis for class discussion and the further development of their understanding of fundamental concepts.

No issues with accuracy.

Coverage was on target, relevant, and applicable, with good examples from a variety of subfields within Psychology.

Clearly written -- students often struggle with the dry, technical nature of concepts in Research Methods. Part of the reason I chose this text in the first place was how favorably it compared to other options in terms of clarity.

No problems with inconsistent or shifting language. This is extremely important in Research Methods, where there are many closely related terms. Language was consistent and compatible with other textbook options that were available to my students.

Chapters are broken down into sections that are reasonably sized and conceptually appropriate.

The organization of this textbook fit perfectly with the syllabus I've been using (in one form or another) for 15+ years.

This textbook was easy to navigate and available in a variety of formats.

No problems at all.

Examples show an eye toward inclusivity. I did not detect any insensitive or offensive examples or undertones.

I have used this textbook for a 200-level Research Methods course run over a single summer session. This was my first experience using an OER textbook and I don't plan on going back.

Reviewed by Laura Getz, Assistant Professor, University of San Diego on 4/29/21

The topics covered seemed to be at an appropriate level for beginner undergraduate psychology students; the learning objectives for each subsection and the key takeaways and exercises for each chapter are also very helpful in guiding students’ attention to what is most relevant. The glossary is also thorough and a good resource for clear definitions. I would like to see a final chapter on a “big picture” or integrating key ideas of replication, meta-analysis, and open science.

Content Accuracy rating: 4

For the most part, I like the way information is presented. I had a few specific issues with definitions for ordinal variables being quantitative (1st, 2nd, 3rd aren’t really numbers as much as ranked categories), the lack of specificity about different forms of validity (face, content, criterion, and discriminant all just labeled “validity” whereas internal and external validity appear in different sections), and the lack of clear distinction between correlational and quasi-experimental variables (e.g., in some places, country of origin is listed as making a design quasi-experimental, but in other chapters it is defined as correlational).

Some of the specific studies/experiments mentioned do not seem like the best or most relevant for students to learn about the topics, but for the most part, content is up-to-date and can definitely be updated with new studies to illustrate concepts with relative ease.

Besides the few concepts I listed above in “accuracy”, I feel the text was very accessible, provides clear definitions, and many examples to illustrate any potential technical/jargon terms.

I did not notice any issues with inconsistent terms, except for terms that have more than one way of describing the same concept (e.g., 2-sample vs. independent samples t-test).

I assigned the chapters out of order with relative ease, and students did not comment about it being burdensome to navigate.

The order of chapters sometimes did not make sense to me (e.g., Experimental before Non-experimental designs, Quasi-experimental designs separate from other non-experimental designs, waiting until Chapter 11 to talk about writing), but for the most part, the chapter subsections were logical and clear.

Interface rating: 4

I had no issues navigating the online version of the textbook other than taking a while to figure out how to move forward and back within the text itself rather than going back to the table of contents (this might just be a browser issue, but is still worth considering).

No grammatical errors of note.

There was nothing explicitly insensitive or offensive about the text, but there were many places where I felt like more focus on diversity and individual differences could be helpful. For example, ethics and history of psychological testing would definitely be a place to bring in issues of systemic racism and/or sexism and a focus on WEIRD samples (which is mentioned briefly at another point).

I was very satisfied with this free resource overall, and I recommend it for beginning level undergraduate psychology research methods courses.

Reviewed by Laura Stull, Associate Professor, Anderson University on 4/23/21

This book covers essential topics and areas related to conducting introductory psychological research. It covers all critical topics, including the scientific method, research ethics, research designs, and basic descriptive and inferential statistics. It even goes beyond other texts in terms of offering specific guidance in areas like how to conduct research literature searches and psychological measurement development. The only area that appears slightly lacking is detailed guidance in the mechanics of writing in APA style (though excellent basic information is provided in chapter 11).

All content appears accurate. For example, the experimental designs discussed, the descriptive and inferential statistical guidance, and the critical ethical issues are all accurately addressed. See the comment on relevance below regarding some outdated information.

Relevance/Longevity rating: 3

Chapter 11 on APA style does not appear to cover the most current version of the APA style guide (7th edition). While much of the information in Chapter 11 is still current, there are specifics that did change from 6th to 7th edition of the APA manual and so, in order to be current, this information would have to be supplemented with external sources.

The book is extremely well organized, written in language and terms that should be easily understood by undergraduate freshmen, and explains all necessary technical jargon.

The text is consistent throughout in terms of terminology and the organizational framework (which aids in the readability of the text).

The text is divided into intuitive and common units based on basic psychological research methodology. It is clear and easy to follow and is divided in a way that would allow omission of some information if necessary (such as "single subject research") or reorganization of information (such as presenting survey research before experimental research) without disruption to the course as a whole.

As stated previously, the book is organized in a clear and logical fashion. The chapters are presented in a logical order, starting with basic and critical information (like overviews of the scientific method and research ethics) and progressing to more complex topics like statistical analyses.

No issues with interface were noted. Helpful images/charts/web resources (e.g., Youtube videos) are embedded throughout and are even easy to follow in a print version of the text.

No grammatical issues were noted.

No issues with cultural bias are noted. Examples are included that address topics that are culturally sensitive in nature.

I ordered a print version of the text so that I could also view it as students would who prefer a print version. I am extremely impressed with what is offered. It covers all of the key content that I am currently covering with a (non-open source) textbook in an introduction to research methods course. The only concern I have is that APA style is not completely current and would need to be supplemented with a style guide. However, I consider this a minimal issue given all of the many strengths of the book.

Reviewed by Anika Gearhart, Instructor (TT), Leeward Community College on 4/22/21

Includes the majority of elements you expect from a textbook covering research methods. Some topics that could have been covered in a bit more depth were factorial research designs (no coverage of 3 or more independent variables) and external validity (or the validities in general).

Nothing found that was inaccurate.

Looks like a few updates could be made to chapter 11 to bring it up to date with APA 7. Otherwise, most examples are current.

Very clear, a great fit for those very new to the topic.

The framework is clear and logical, and the learning objectives are very helpful for orienting the reader immediately to the main goals of each section.

Subsections are well-organized and clear. Titles for sections and subsections are clear.

Though I think the flow of this textbook for the most part is excellent, I would make two changes: move chapter 5 down with the other chapters on experimental research and move chapter 11 to the very end. I feel that this would allow for a more logical presentation of content.

The webpage navigation is easy to use and intuitive, the ebook download works as designed, and the page can be embedded directly into a variety of LMS sites or used with a variety of devices.

I found no grammatical errors in this book.

While there were some examples of studies that included participants from several cultures, the book does not touch on ecological validity, an important external validity issue tied to cultural psychology, and there is no mention of the WEIRD culture issue in psychology, which seems somewhat necessary when orienting new psychology students to research methods today.

I currently use and enjoy this textbook in my research methods class. Overall, it has been a great addition to the course, and I am easily able to supplement any areas that I feel aren't covered with enough breadth.

Reviewed by Amy Foley, Instructor/Field & Clinical Placement Coordinator, University of Indianapolis on 3/11/21

This text provides a comprehensive overview of the research process from ideation to proposal. It covers research designs common to psychology and related fields.

Accurate information!

This book is current and lines up well with the music therapy research course I teach as a supplemental text for students to understand research designs.

Clear language for psychology and related fields.

The format of the text is consistent. I appreciate the examples, different colored boxes, questions, and links to external sources such as video clips.

It is easy to navigate this text by chapters and smaller units within each chapter. The only confusion that has come from using this text is that the larger units have Roman numerals while the individual chapters have numbers. For example, I have told students to "read unit six" and they only read the small chapter 6, not the entire unit.

Flows well!

I have not experienced any interface issues.

I have not found any grammar errors.

Book appears culturally relevant.

This is a great resource for research methods courses in psychology or related fields. I am glad to have used several chapters of this text within the music therapy research course I teach where students learn about research design and then create their own research proposal.

Reviewed by Veronica Howard, Associate Professor, University of Alaska Anchorage on 1/11/21, updated 1/11/21

VERY impressed by the coverage of single subject designs. I would recommend this content to colleagues.

Content appears accurate.

By expanding to include more contemporary research perspectives, the authors have created a wonderful dynamic that permits the text to be the foundation for many courses as well as revision and remixing for other authors.

Book easy to read, follow.

Consistency rating: 4

Content overall consistent. Only mild inconsistency in writing style between chapters.

Exceptionally modular. All content neatly divided into units with smaller portions. This would be a great book to use in a course that meets bi-weekly, or adapted into other formats.

Content organized in a clear and logical fashion, and would guide students through a semester-long course on research methods, starting with review content, broad overview of procedures (including limitations), then highlighting less common (though relevant) procedures.

Rich variety of formats for use.

No errors found.

I would appreciate more cultural examples.

Reviewed by Greg Mullin, Associate Professor, Bunker Hill Community College on 12/30/20, updated 1/6/21

I was VERY pleased with the comprehensiveness of the text. I believe it actually has an edge over the publisher-based text that I've been using for years. Each major topic was thoroughly covered with more than enough detail on individual concepts.

I did not find any errors within the text. The authors provided an unbiased representation of research methods in psychology.

The content connects to classic, timeless examples in the field, but also mixes in a fair amount of more current, relatable examples. I feel like I'll be able to use this version of the text for many years without its age showing.

The authors present a clear and efficient writing style throughout that is rich with relatable examples. The only area that may be a bit much for undergraduate-level student understanding is the topic of statistics. I personally scale back my discussion of statistics in my Intro to Research Methods course, but for those that prefer a deeper dive, the higher-level elements are there.

I did not notice any shifts with the use of terminology or with the structural framework of the text. The text is very consistent and organized in an easily digestible way.

The authors do a fantastic job breaking complex topics down into manageable chunks both as a whole and within chapters. As I was reading, I could easily see how I could align my current approach of teaching Intro to Research Methods with their modulated presentation of the material.

I effortlessly moved through the text given the structural organization. All topics are presented in a logical fashion that allowed each message to be delivered to the reader with ease.

I read the text through the PDF version and found no issue with the interface. All image and text-based material was presented clearly.

I cannot recall coming across any grammatical errors. The text is very well written.

I did not find the text to be culturally insensitive in any way. The authors use inclusive language and even encourage that style of writing in the chapter on Presenting Your Research. I would have liked to see more cross-cultural research examples and more of an extended effort to include the theme of diversity throughout, but at no point did I find the text to be offensive.

This is a fantastic text and I look forward to adopting it for my Intro to Research Methods course in the Spring. :)

Reviewed by Maureen O'Connell, Adjunct Professor, Bunker Hill Community College on 12/15/20, updated 12/18/20

This text edition has covered all ideas and areas of research methods in psychology. It has provided a glossary of terms, sample APA format, and sample research papers. 

The content is unbiased, accurate, and I did not find any errors in the text. 

The content is current and up to date. The text can be added to should material change; the arrangement of the text/content makes it easy to insert new material if necessary.

The text is clear, easy to understand, simplistic writing at times, but I find this text easy for students to comprehend. All text is relevant to the content of behavioral research. 

The text and terminology is consistent. 

The text is organized well and sectioned appropriately. The information is presented in an easy-to-read format, with sections that can be assigned at various points during the semester and the reader can easily locate this. 

The topics in the text are organized in a logical and clear manner. It flows really well. 

The text is presented well, including charts, diagrams, and images. There did not appear to be any confusion with this text. 

The text contains no grammatical errors.

The text was culturally appropriate and not offensive. Clear examples of potential biases were outlined in this text which I found quite helpful for the reader. 

Overall, I found this to be a great edition. Much of the time I spend researching outside material for students has been included in this text. I enjoyed the format, easier to navigate, helpful to students by providing an updated version of discussions and practice assignments, and visually more appealing. 

Reviewed by Brittany Jeye, Assistant Professor of Psychology, Worcester State University on 6/26/20

All of the main topics in a Research Methods course are covered in this textbook (e.g., scientific method, ethics, measurement, experimental design, hypothesis testing, APA style, etc.). Some of these topics are not covered as in-depth as in other Research Method textbooks I have used previously, but this actually may be a positive depending on the students and course level (that is, students may only need a solid overview of certain topics without getting overwhelmed with too many details). It also gives the instructor the ability to add content as needed, which helps with flexibility in course design.

I did not note any errors or inaccurate/biasing statements in the text.

For the most part, everything was up to date. There was a good mix of classic research and newer studies presented and/or used as examples, which kept the chapters interesting, topical and relevant. I only noted the section on APA Style in the chapter “Presenting Your Research” which may need some updating to be in line with the new APA 7th edition. However, only minor edits should be needed (the chapter itself was a great overview and introduction to the main points of APA style), and it looks like they should be relatively easy to implement.

The text was very well-written and was presented at an accessible level for undergraduates new to Research Methods. Terms were well-defined with a helpful glossary at the end of the textbook.

The consistent structure of the textbook is huge positive. Each chapter begins with learning objectives and ends with bulleted key takeaways. There are also good exercises and learning activities for students at the end of each chapter. Instructors may need to add their own activities for chapters that do not go into a lot of depth (there are also instructor resources online, which may have more options available).

This is one of the biggest strengths of this textbook, in my opinion. I appreciate how each chapter is broken down into clearly defined subsections. The chapters and the subsections, in particular, are not lengthy, which is great for students’ learning. These subsections could be reorganized and used in a variety of ways to suit the needs of a particular course (or even as standalone subsections).

The topics were presented in a logical manner. As mentioned above, since the textbook is very modular, I feel that you could easily rearrange the chapters to fit your needs (for example, presenting survey design before experimental research or making the presenting your research section a standalone unit).

I downloaded the textbook as an ebook, which was very easy to use/navigate. There were no problems reading any of the text or figures/tables. I also appreciated that you could open the ebook using a variety of apps (Kindle, iBook, etc.) depending on your preference (and this is good for students who have a variety of technical needs).

There were no grammatical errors noted.

The examples were inclusive of races, ethnicity and background and there were not any examples that were culturally insensitive or offensive in any way. In future iterations of the replicability section, it may be beneficial to touch upon the “weird” phenomena in psychology research (that many studies use participants who are western, educated and from industrialized, rich and democratic countries) as a point to engage students in improving psychological practices.

I will definitely consider switching to this textbook in the future for Research Methods.

Reviewed by Alice Frye, Associate Teaching Professor, University of Massachusetts Lowell on 6/22/20

Hits all the necessary marks from ways of knowing to measurement, research designs, and presentation. Comparable in detail and content to other Research Methods texts I have used for teaching.

Correct and to the point. Complex ideas such as internal consistency reliability and discriminant validity are well handled--correct descriptions that are also succinct and articulated simply and with clear examples that are easy for a student reader to grasp.

Seems likely to have good staying power. One area that has changed quickly in the past is the usefulness of various research databases. So it is possible that portion could become more quickly outdated, but there is no predicting that. The current descriptions are useful.

Very clearly written without being condescending, overly casual or clunky.

Excellent consistency throughout in terms of organization, language usage, level of detail and tone.

In my opinion, this is one of the particular strengths of the text. Chapters are well divided into discrete parts, which seems likely to be a benefit in cohorts of students who are increasingly accustomed to digesting small amounts of information.

Well organized, straightforward structure that is maintained throughout.

No problems with the interface.

The grammar level is another notable strength. Ideas are articulated clearly, and with sophistication, but in a syntactically very straightforward manner.

The text isn't biased or offensive. To illustrate various points and research designs, I wish it had drawn more frequently on research studies that incorporate a specific focus on race and ethnicity.

This is a very good text. As good as any for profit text I have used to teach a research methods course, if not better.

Reviewed by Lauren Mathieu-Frasier, Adjunct Instructor, University of Indianapolis on 1/13/20

As other reviews have mentioned, this textbook provides a comprehensive look at multiple concepts for an introductory course in research methods in psychology. Some of the concepts (i.e., variables, external validity) are briefly described and glossed over that it will take additional information, examples, and reinforcement from instructors in the classroom. Other sections and concepts, like ethics or reporting of research were well-described and thorough.

It appeared that the information was accurate, error-free, and unbiased.

The information is up-to-date. In the section on APA presentation, it looks like the minor adjustments to the APA Publication Manual 7th Edition would need to be included. However, this section gives a good foundation and the instructor can easily implement the changes.

Clarity rating: 4

The text is clearly written and provides an appropriate context when terminology is used.

There aren't any issues with consistency in the textbook.

The division of smaller sections can be beneficial when reading it and assigning it to classes. The sections are clearly organized based on learning objectives.

The textbook is organized in a logical, clear manner. There may be topics that instructors choose to present in a different manner (non-experimental and survey research prior to experimental). However, this doesn't generally impact the organization and flow of the book.

While reading and utilizing the book, there weren't any navigation issues that could impact the readability of the book. Students could find this textbook easy to use.

Grammatical errors were not noted.

There weren't any issues with cultural sensitivity in the examples of studies used in the textbook.

Reviewed by Tiffany Kindratt, Assistant Professor, University of Texas at Arlington on 1/1/20

The text is comprehensive with an effective glossary of terms at the end. It would be beneficial to include additional examples and exercises for students to better understand concepts covered in Chapter II, Overview of the Scientific Method, Chapter IV, Psychological Measurement, and Chapter XII Descriptive Statistics.

The text is accurate and there are minimal type/grammatical errors throughout. The verbiage is written in an unbiased manner consistently throughout the textbook.

The content is up-to-date, and examples can be easily updated for future versions. As a public health instructor, I would be interested in seeing examples of community-based examples in future versions. The current examples provided are relevant for undergraduate public health students as well as psychology students.

The text is written in a clear manner. The studies used can be easily understood by undergraduate students in other social science fields, such as public health. More examples and exercises using inferential statistics would be helpful for students to better grasp the concepts.

The framework for each chapter and terminology used are consistent. It is helpful that each section within each chapter begins with learning objectives and the chapter ends with key takeaways and exercises.

The text is clearly divided into sections within each chapter. When I first started reviewing this textbook, I thought each section was actually a very short chapter. I would recommend including a listing of all of the objectives covered in each chapter at the beginning to improve the modularity of the text.

Some of the topics do not follow a logical order. For example, it would be more appropriate to discuss ethics before providing the overview of the scientific method. It would be better to discuss statistics used to determine results before describing how to write manuscripts. However, the text is written in a way that the chapters could be assigned to students in a different order without impacting the students' comprehension of the concepts.

I did not encounter any interface issues when reviewing this text. All links worked and there were no distortions of the images or charts that may confuse the reader. There are several data tables throughout the text which are left-aligned with a large amount of empty white space next to them. I would rearrange the text in future versions to make better use of this space.

The text contains minimal grammatical errors.

The examples are culturally relevant. I did not see any examples that may be considered culturally insensitive or offensive in any way.

As an instructor for an undergraduate public health sciences and methods course, I will consider using some of the content in this text to supplement the current textbook in the future.

Reviewed by Mickey White, Assistant Professor, East Tennessee State University on 10/23/19

The table of contents is well-formatted and comprehensive. Easy to navigate and find exactly what is needed, students would be able to quickly find needed subjects.

Content appears to be accurate and up-to-date.

This text is useful and relevant, particularly with regard to expressing and reporting descriptive statistics and results. As APA updates, the text will be easy to edit, as the sections are separated.

Easy to read and engaging.

Chapters were laid out in a consistent manner, which allows readers to know what is coming. The subsections contained a brief overview and terminology was consistent throughout. The glossary added additional information.

Sections and subsections are delineated in a usable format.

The key takeaways were useful, including the exercises at the end of each chapter.

Reading the book online is a little difficult to navigate page-by-page, but e-pub and PDF formats are easy to navigate.

No errors noted.

Would be helpful to have a clearer exploration of cultural factors impacting research, including historical bias in assessment and research outside of research ethics.

Reviewed by Robert Michael, Assistant Professor, University of Louisiana at Lafayette on 10/14/19

Successfully spans the gamut of topics expected in a Research Methods textbook. Some topics are covered in-depth, while others are addressed only at a surface level. Instructors may therefore need to carefully arrange class material for topics in which depth of knowledge is an important learning outcome.

The factual content was error-free, according to my reading. I did spot a few grammatical and typographical errors, but they were infrequent and minor.

Great to see nuanced—although limited—discussion of issues with Null Hypothesis Significance Testing, Reproducibility in Psychological Science, and so forth. I expect that these areas are likely to grow in future editions, perhaps supplementing or even replacing more traditional material.

Extremely easy to read with multiple examples throughout to illustrate the principles being covered. Many of these examples are "classics" that students can easily relate to. Plus, who doesn't like XKCD comics?

The textbook is structured sensibly. At times, certain authors' "voices" seemed apparent in the writing, but I suspect this variability is unlikely to be noticed by or even bothersome to the vast majority of readers.

The topics are easily divisible and seem to follow routine expectations. Instructors might find it beneficial and/or necessary to incorporate some of the statistical thinking and learning into various earlier chapters to facilitate student understanding in-the-moment, rather than trying to leave all the statistics to the end.

Sensible and easy-to-follow structure. As per "Modularity", the Statistical sections may benefit from instructors folding in such learning throughout, rather than only at the end.

Beautifully presented, crisp, easy-to-read and navigate. Caveat: I read this online, in a web-browser, on only one device. I haven't tested across multiple platforms.

High quality writing throughout. Only a few minor slip-ups that could be easily fixed.

Includes limited culturally relevant material where appropriate.

Reviewed by Matthew DeCarlo, Assistant Professor, Radford University on 6/26/19

The authors do a great job of simplifying the concepts of research methods and presenting them in a way that is understandable. There is a tradeoff between brevity and depth here. Faculty who adopt this textbook may need to spend more time in class going in depth into concepts, rather than relying on the textbook for all of the information related to key concepts. The text does not cover qualitative methods in detail.

The textbook provides an accurate picture of research methods. The tone is objective and without bias.

The textbook is highly relevant and up to date. Examples are drawn from modern theories and articles.

The writing is a fantastic mix of objective and authoritative while also being approachable.

The book coheres well together. Each chapter and section are uniform.

This book fits very well within a traditional 16 week semester, covering roughly a chapter per week. One could take out specific chapters and assign them individually if research methods is taught in a different way from a standard research textbook.

Content is very well organized. The table of contents is easy to navigate and each chapter is presented in a clear and consistent manner. The use of a two-tier table of contents is particularly helpful.

Standard pressbooks interface, which is great. Uses all of the standard components of Pressbooks well, though the lack of H5P and interactive content is a drawback.

I did not notice any grammar errors.

Cultural Relevance rating: 2

The book does not deal with cultural competence and humility in the research process. Integration of action research and decolonization perspectives would be helpful.

Reviewed by Christopher Garris, Associate Professor, Metropolitan State University of Denver on 5/24/19

Most content areas in this textbook were covered appropriately extensively. Notably, this textbook included some content that is commonly missing in other textbooks (e.g. presenting your research). There were some areas where more elaboration and more examples were needed. For example, the section covering measurement validities included all the important concepts, but needed more guidance for student comprehension. Also, the beginning chapters on 'common sense' reasoning and pseudoscience seemed a little too brief.

Overall, this textbook appeared to be free from glaring errors. There were a couple of instances of concern, but they were not errors, per se. For example, the cut-off for Cronbach's alpha was stated definitively at .80, while this value likely would be debated among researchers.
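For readers unfamiliar with the statistic at issue, Cronbach's alpha can be computed directly from item scores. The sketch below is a minimal pure-Python illustration (the data are hypothetical, not drawn from the textbook):

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency
# statistic whose .80 cut-off the review mentions.

def cronbach_alpha(items):
    """items: one list of scores per item, all of equal length
    (one score per respondent, at least two respondents)."""
    k = len(items)              # number of items
    n = len(items[0])           # number of respondents

    def variance(xs):           # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Three identical (perfectly correlated) items give the maximum value:
print(round(cronbach_alpha([[1, 2, 3, 4]] * 3), 3))  # → 1.0
```

In practice the cut-off is a judgment call, which is the reviewer's point: .70, .80, and higher thresholds all appear in the literature depending on the stakes of the measurement.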

This textbook was presented in such a way that seemed to protect it from becoming obsolete within the next few years. This is important for continued, consistent use of the book. The authors have revised this book, and those revisions are clearly summarized in the text. Importantly, the APA section of the textbook appears to be up-to-date. Also, the use of QR codes throughout the text is a nice touch that students may appreciate.

Connected to comprehensiveness, there are some important content areas that I felt were lacking in elaboration and examples (e.g. testing the validity of measurement; introduction of experimental design), which inhibits clarity. Overall, however, the topics seemed to be presented in a straightforward, accessible manner. The textbook includes links to informative videos and walk-throughs where appropriate, which seem to be potentially beneficial for student comprehension. The textbook includes tools designed to aid learning, namely "Key Takeaways" and "Exercises" sections at the end of most modules, but not all. "Key Takeaways" seemed valuable, as they were a nice bookend to the learning objectives stated at the beginning of each module. "Exercises" did not appear to be as valuable, especially for the less-motivated student. On their face, these seemed to be more designed for instructors to use as class activities/active learning. Lastly, many modules of the textbook were text-heavy and visually unappealing. While this is superficial, the inclusion of additional graphics, example boxes, or figures in these text-heavy modules might be beneficial.

The textbook appeared to be internally consistent with its approach and use of terminology.

The textbook had a tendency to 'throw out' big concepts very briefly in earlier modules (e.g. sampling, experimental/non-experimental design), and then cover them in more detail in later modules. This would have been less problematic if the text would explicitly inform the student that these concepts would be elaborated upon later. Beyond this issue, the textbook seems to lend itself to being divided up and used on module-by-module basis.

The organization of the chapters did not make intuitive sense to me. The fact that correlation followed experimental research, and that descriptive research was the second-to-last module in the sequence, was confusing. That said, the textbook is written in such a way that an instructor can easily assign the modules in the order that works best for their class.

Overall, the interface worked smoothly and there were few technical issues. Where there were issues (e.g. inconsistent spacing between lines and words), they were negligible.

The text seemed to be free from glaring grammatical problems.

Because this is a methodology textbook, it does not lend itself to too much cultural criticism. That said, the book did not rely on overly controversial examples, but also didn't shy away from important cultural topics (e.g. gender stereotypes, vaccines).

Reviewed by Michel Heijnen, Assistant Professor, University of North Carolina Wilmington on 3/27/18

The book covers all areas related to research methods, not only for the field of psychology, but also to other related fields like exercise science. Topics include ethics, developing a research questions, experimental designs, non-experimental designs, and basic statistics, making this book a great resource for undergraduate research methods classes.

Reviewed content is accurate and seems free of any personal bias.

The topic of research methods in general is not expected to change quickly. It is not expected that this text will become obsolete in the near future. Furthermore, for both the field of psychology as well as other related fields, the examples will continue to have an application to explain certain concepts and will not be outdated soon, even with new research emerging every day.

The text is written so an undergraduate student should be able to understand the concepts. The examples provided in the text greatly contribute to the understanding of the topics and the proposed exercises at the end of each chapter will further apply the knowledge.

The layout and writing style are consistent throughout the text.

Layout of the text is clear, with multiple subsections within each chapter. Each chapter can easily be split into multiple subsections to assign to students. No evidence of self-reference was observed, and individual chapters could be assigned to students without needing to read all preceding chapters. For example, Chapter 4 may not be particularly useful to students outside of psychology, but an instructor can easily reorganize the text and skip this chapter while students can still understand the following chapters.

Topics are addressed in a logical manner. Overall, an introduction to research is provided first (including ethics to research), which is followed by different types of research, and concludes with types of analysis.

No images or tables are distorted, making the text easy to read.

No grammatical errors observed in text.

Text is not offensive and does not appear to be culturally insensitive.

I believe that this book is a great resource and, as mentioned previously, can be used for a wider audience than just psychology as the basics of research methods can be applied to various fields, including exercise science.

Reviewed by Chris Koch, Professor of Psychology, George Fox University on 3/27/18

All appropriate areas and topics are covered in the text. In that sense, this book is equivalent to other top texts dealing with research methods in psychology. The appeal of this book is the brevity and clarity. Therefore, some may find that, although the topics are covered, topics may not be covered as thoroughly as they might like. Overall, the coverage is solid for an introductory course in research methods.

In terms of presentation, this book could be more comprehensive. Each chapter does start with a set of learning objectives and ends with "takeaways" and a short set of exercises. However, it lacks detailed chapter outlines, summaries, and glossaries. Furthermore, an index does not accompany the text.

I found the book to be accurate with content being fairly presented. There was no underlying bias throughout the book.

This is an introductory text for research methods. The basics of research methods have been consistent for some time. The examples used in the text fit the concepts well. Therefore, it should not be quickly dated. It is organized in such a way that sections could be easily modified with more current examples as needed.

The text is easy to read. It is succinct yet engaging. Examples are clear and terminology is adequately defined.

New terms and concepts are dealt with chapter by chapter. However, those things which go across chapters are consistently presented.

The material for each chapter is presented in subsections with each subsection being tied to a particular learning objective. It is possible to use the book by subsection instead of by chapter. In fact, I did that during class by discussing the majority of one chapter, discussing another chapter, and then covering what I previously skipped.

In general, the book follows a "traditional" organization, matching the organization of many competing books. As mentioned in regard to modularity, I did not follow the organization of the book exactly as it was laid out. This may not necessarily reflect poorly on the book, however, since I have never followed the order of any research methods book. My three exams covered chapter 1 through 4, chapters 5, 6, part of 8, and chapters 7, the remainder of 8, 9, and 10. Once we collected data I covered chapters 11 through 13.

Interface rating: 3

The text and images are clear and distortion free. The text is available in several formats including epub, pdf, mobi, odt, and wxr. Unfortunately, the electronic format is not taken full advantage of. The text could be more interactive. As it is, it is just text and images. Therefore, the interface could be improved.

The book appeared to be well written and edited.

I did not find anything in the book that was culturally insensitive or offensive. However, more examples of cross-cultural research could be included.

I was, honestly, surprised by how much I liked the text. The material was presented in an easy-to-follow format that is consistent with how I think about research methods. That made the text extremely easy to use. Students also thought the book was highly accessible. Each chapter was relatively short but informative and easy to read.

Reviewed by Kevin White, Assistant Professor, East Carolina University on 2/1/18

This book covers all relevant topics for an introduction to research methods course in the social sciences, including measurement, sampling, basic research design, and ethics. The chapters were long enough to be somewhat comprehensive, but short enough to be digestible for students in an introductory-level class. Student reviews of the book have so far been very positive. The only section of the text for which more detail may be helpful is 2.3 (Reviewing the Research Literature), in which more specific instructions related to literature searches may be helpful to students.

I did not notice any issues related to accuracy. Content appeared to be accurate, error-free, and unbiased.

One advantage of this book is that it is relevant to other applied fields outside of psychology (e.g., social work, counseling, etc.). Also, the exercises at the end of chapter sections are helpful.

The clarity of the text provides students with succinct definitions for research-related concepts, without unnecessary discipline-specific jargon. One suggestion for future editions would be to make the distinctions between different types of non-experimental research a bit clearer for students in introductory classes (e.g., "Correlational Research" in Section 7.2).

Formatting and terminology was consistent throughout this text.

A nice feature of this book is that instructors can select individual sections within chapters, or even jump between sections within chapters. For example, Section 1.4 may not fit for a class that is less clinically-oriented in nature.

The flow of the text was appropriate, with ethics close to the beginning of the book (and an entire chapter devoted to it), and descriptive/inferential statistics at the end.

I did not notice any problems related to interface. I had no trouble accessing or reading the text, and the images were clear.

The text contained no discernible grammatical errors.

The book does not appear to be culturally insensitive in any discernible way, and explicitly addresses prejudice in research (e.g., Section 5.2). However, I think that continuing to add more examples that relate to specific marginalized groups would help improve the text (and especially exercises).

Overall, this book is very useful for an introductory research methods course in psychology or social work, and I highly recommend it.

Reviewed by Elizabeth Do, Instructor, Virginia Commonwealth University on 2/1/18

This textbook provides good information regarding the introductory concepts necessary for understanding correlational designs, and it is presented in a logical order. It does not, however, cover qualitative methodologies, or research ethics as it relates to countries outside of the US.

There do not seem to be any errors within the text.

Since this textbook covers a topic that is unlikely to change over the years and its content is up-to-date, it remains relevant to the field.

The textbook is written at an appropriate level for undergraduate students and is useful in that it does explain important terminology.

There does not seem to be any major inconsistencies within the text.

Overall, the text is very well organized - it is separated into chapters that are divided up into modules and within each module, there are clear learning objectives. It is also helpful that the textbook includes useful exercises for students to practice what they've read about from the text.

The topics covered by this textbook are presented in an order that is logical. The writing is clear and the examples are very useful. However, more information could be provided in some of the chapters, and it would be useful to include a table of contents that links to the different chapters within the PDF copy, for readers' ease of navigation when looking for specific terms and/or topics.

Overall, the PDF copy of the textbook made it easy to read; however, there did seem to be a few links that were missing. Additionally, it would be helpful to have some of the graphs printed in color to help with ease of following explanations provided by the text. The inclusion of a table of contents would also be useful for greater ease with navigation.

There do not seem to be any grammatical errors in the textbook. Also, the textbook is written in a clear way, and the information flows nicely.

This textbook focuses primarily on examples from the United States. It does not seem to be culturally insensitive or offensive in any way, and I liked that it included content regarding the avoidance of biased language (chapter 11).

This textbook makes the material very accessible, and it is easy to read/follow examples.

Reviewed by Eric Lindsey, Professor, Penn State University Berks Campus on 2/1/18

The content of the Research Methods in Psychology textbook was very thorough and covered what I would consider to be the important concepts and issues pertaining to research methods. I would judge that the textbook has a comparable coverage of information to other textbooks I have reviewed, including the current textbook I am using. The range of scholarly sources included in the textbook was good, with an appropriate balance between older and classic research examples and newer more cutting edge research information. Overall, the textbook provides substantive coverage of the science of conducting research in the field of psychology, supported by good examples, and thoughtful questions.

The textbook adopts a coherent and student-friendly format, and offers a precise introduction to psychological research methodology that includes consideration of a broad range of qualitative and quantitative methods to help students identify and evaluate the best approach for their research needs. The textbook offers a detailed review of the way that psychological researchers approach their craft. The author guides the reader through all aspects of the research process including formulating objectives, choosing research methods, securing research participants, as well as advice on how to effectively collect, analyze and interpret data and disseminate those findings to others through a variety of presentation and publication venues. The textbook offers supplemental information in textboxes that is highly relevant to the material in the accompanying text and should prove helpful to learners. Likewise, the graphics and figures that are included are highly relevant and clearly linked to the material presented in the text. The information covered by the textbook reflects an accurate summary of current techniques and methods used in research in the field of psychology. The presentation of information addresses the pros and cons of different research strategies in an objective and evenhanded way.

The range of scholarly sources included in the textbook was good, with an appropriate balance between older, classic research evidence and newer, cutting edge research. Overall, the textbook provides substantive coverage of the science on most topics in research methods of psychology, supported by good case studies, and thoughtful questions. The book is generally up to date, with adequate coverage of basic data collection methods and statistical techniques. Likewise, the review of APA style guidelines reflects the current manual, and I like the way the author summarizes changes from the older version of the APA manual. The organization of the textbook does appear to lend itself to editing and adding new information with updates in the future.

I found the textbook chapters to be well written, in a straightforward yet conversational manner. It gives the reader an impression of being taught by a knowledgeable yet approachable expert. The writing style gives the learner a feeling of being guided through the lessons and supported in a very conversational approach. The experience of reading the textbook is less like being taught and more like a colleague sharing information. Furthermore, the style keeps the reader engaged but doesn't detract from its educational purpose. I also appreciate that the writing is appropriately concise. No explanations are so wordy as to overwhelm or lull the reader to sleep, but at the same time the information is not so vague that the reader can't understand the point at all.

The book’s main aim is to enable students to develop their own skills as researchers, so they can generate and advance common knowledge on a variety of psychological topics. The book achieves this objective by introducing its readers, step-by-step, to psychological research design, while maintaining an excellent balance between substance and attention grabbing examples that is uncommon in other research methods textbooks. Its accessible language and easy-to-follow structure and examples lend themselves to encouraging readers to move away from the mere memorization of facts, formulas and techniques towards a more critical evaluation of their own ideas and work – both inside and outside the classroom. The content of the chapters have a very good flow that help the reader to connect information in a progressive manner as they proceed through the textbook.

Each chapter goes into adequate depth in reviewing both past and current research related to the topic that it covers for an undergraduate textbook on research methods in psychology. The information within each chapter flows well from point-to-point, so that the reader comes away feeling like there is a progression in the information presented. The only limitation that I see is that I felt the author could do a little more to let the reader know how information is connected from chapter to chapter. Rather than just drawing the reader’s attention to things that were mentioned in previous chapters, it would be nice to have brief comments about how issues in one chapter relate to topics covered in previous chapters.

In my opinion the chapters are arranged in easily digestible units that are manageable in 30-40 minute reading sessions. In fact, the author designed the chapters of the textbook in a way to make it easy to chunk information, and start and stop to easily pick up where one leaves off from one reading session to another. I also found the flow of information to be appropriate, with chapters containing just the right amount of detail for use in my introductory course in research methods in psychology.

The book is organized into thirteen chapters. The order of the chapters offers a logical progression from a broad overview of information about the principles and theory behind research in psychology, to more specific issues concerning the techniques and mechanics of conducting research. Each chapter ends with a summary of key takeaways from the chapter and exercises that do more than ask for content regurgitation. I find the organization of the textbook to be effective, and matches my approach to the course very well. I would not make any changes to the overall format with the exception of moving chapter 11 on presenting research to the end of the textbook, after the chapters on statistical analysis and interpretation.

I found the quality of the appearance of the textbook to be very good. The textbook features appropriate text and section/header font sizes that allow for an adequate zooming level to read large or small sections of text, which gives readers flexibility to match their personal preference. There are learning objectives at the start of each chapter to help students know what to expect. Key terms are highlighted in a separate color that is easily distinguishable in the body of the page. There are very useful visuals in every chapter, including tables, figures, and graphs. Relevant supplemental information is also highlighted in well-formatted text boxes that are color coded to indicate what type of information is included. My only criticism is that the photographs included in the text are of low quality, and there are so few in the textbook that I feel it would have been better to just leave them out.

I found no grammatical errors in my review of the textbook. The textbook is generally well written, and the style of writing is at a level that is appropriate for an undergraduate class.

Although the textbook contains no instances of culturally insensitive or offensive information, it does not offer a culturally inclusive review of information pertaining to research methods in psychology. I found no examples of research conducted with non-European American samples included in the summary of studies. Likewise, the author does not place much attention on the issue of cultural sensitivity when conducting research. If there is one major weakness of the textbook I would say it is in this area, but based on my experience it is not an uncommon characteristic of textbooks on research methods in psychology.

Reviewed by Zehra Peynircioglu, Professor, American University on 2/1/18

Comprehensiveness rating: 3

Short and sweet in most areas. Covers the basic concepts, not very comprehensively but definitely adequately so for a general beginning-level research methods course. For instance, I would have liked to see a "separate" chapter on correlational research (there is one on single-subject research and one on survey research), a discussion of the importance of providing a theoretical rationale for "getting an idea" (most students are fine with finding interesting and feasible project ideas but cannot give a theoretical rationale) before or after Chapter 4 on Theory, or a chapter on neuroscientific methods, which are becoming more and more popular. Nevertheless, it touches on most traditional areas that are in other books.

I did not find any errors or biases.

This is one area where there is not much danger of going obsolete any time soon. The examples might need to be updated periodically (my students tend to not like dated materials, however relevant), but that should be easy.

Very clear and accessible prose. Despite the brevity, the concepts are put forth quite clearly. I like the "not much fluff" mentality. There are also adequate explanations of jargon and technical terminology.

I could not find any inconsistencies. The style and exposition frameworks are also quite consistent.

Yes, the modularity is fine. The chapters follow a logical pattern, so there should not be too much of a need for jumping around. And even if jumping around is needed depending on teaching style, the sections are solid in terms of being able to stand alone (or as an accompaniment to lectures).

Yes, the content is ordered logically and the high modularity helps with any reorganization that an instructor may favor. In my case, for instance, Ch. 1 is fine, but I would skip it because it's mostly a repetition of what most introductory psychology books also say. I would also discuss non-experimental methods before going into experimental design. But such changes are easy to do, and if someone followed the book's own organization, there would also be a logical flow.

As far as I could see, the text is free of significant interface issues, at least in the pdf version.

I could not find any errors.

As far as I could see, the book was culturally relevant.

I loved the short and sweet learning objectives, key takeaway sections, and the exercises. They are not overwhelming and can be used in class discussions, too.

Reviewed by George Woodbury, Graduate Student, Miami University, Ohio on 6/20/17

This text covers the typical areas for an undergraduate psychology course in research design. There is no table of contents included with the downloadable version, although there is a table of contents on the website (which excludes sub-sections of chapters). The sections on statistics are not extensive enough to be useful in and of themselves, but they are useful for transitions to a follow-up statistics course. There does not seem to be a glossary of terms, which made it difficult at times for my read through and I assume later for students who decide to print the text. The text is comprehensive without being wordy or tedious.

Relatively minor errors; there does not seem to be explicit cultural or methodological bias in the text.

The content is up-to-date, and examples from the psychology literature are generally within the last 25 years. Barring extensive restructuring in the fundamentals of methodology and design in psychology, any updates will be very easy to implement.

The text will be very clear and easy to read for students fluent in English. There is little jargon/technical terminology used, and the vocabulary that is provided in the text is contemporary.

There do not seem to be obvious shifts in the terminology or the framework. The text is internally consistent in that regard.

The text is well divided into chapter and subsections. Each chapter is relatively self-contained, so there are little issues with referring to past material that may have been skipped. The learning objectives at the beginning of the chapter are very useful. Blocks of text are well divided with headings.

As mentioned above, the topics of the text follow the well-established trajectory of undergraduate psychology courses. This makes it very logical and clear.

The lack of a good table of contents made it difficult to navigate the text for my read through. There were links to an outside photo-hosting website (flickr) for some of the stock photos, which contained the photos of the original creators of the photos. This may be distracting or confusing to readers. However, the hyperlinks in general helped with navigation with the PDF.

No more grammatical errors than a standard, edited textbook.

Very few examples explicitly include other races, ethnicities, or backgrounds; however, the examples seem to intentionally avoid cultural bias. Overall, the writing seems to be appropriately focused on avoiding culturally insensitive or offensive content.

After having examined several textbooks on research design and methodology related to psychology, this book stands out as superior.

Reviewed by Angela Curl, Assistant Professor, Miami University (Ohio) on 6/20/17

"Research Methods in Psychology" covers most research method topics comprehensively. The author does an excellent job explaining main concepts. The chapter on causation is very detailed and well-written as well as the chapter on research ethics.... read more

"Research Methods in Psychology" covers most research method topics comprehensively. The author does an excellent job explaining main concepts. The chapter on causation is very detailed and well-written as well as the chapter on research ethics. However, the explanations of data analysis seem to address upper level students rather than beginners. For example, in the “Describing Statistical Relationships” chapter, the author does not give detailed enough explanations for key terms. A reader who is not versed in research terminology, in my opinion, would struggle to understand the process. While most topics are covered, there are some large gaps. For example, this textbook has very little content related to qualitative research methods (five pages).

The content appears to be accurate and unbiased.

The majority of the content will not become obsolete within a short time period; much of the information can be used for the coming years, as the information provided is, overall, general in nature. The notable exceptions are the content on the APA Code of Ethics and the APA Publication Manual, which both rely heavily on outdated versions, limiting the usefulness of these sections. In addition, it would be helpful to incorporate research studies that have been published after 2011.

The majority of the text is clear, with content that is easy for undergraduate students to read and understand. The key points included in the chapters are helpful, but some chapters seem to be missing key points (i.e., the key points do not accurately represent the overall chapter).

The text seems to be internally consistent in its terminology and organization.

Each chapter is broken into subsections that can be used alone. For example, section 5.2 covers reliability and validity of measurement. This could be extremely helpful for educators to select specific content for assigned readings.

The topics are presented in a logical manner for the most part. However, the PDF version of the book does not include a table of contents, and none of the formats has a glossary or index. This can make it difficult to quickly navigate to specific topics or terms, especially when explanations do not appear where expected. For example, the definitions of independent and dependent variables are provided under the heading “Correlation Does Not Imply Causation” (p. 22).

The text is consistent but needs visual representations distributed throughout the book, rather than concentrated heavily in some chapters and absent from others. Similarly, the text within the chapters is not easily readable due to the large sections of text with little to no graphics or breaks.

The interface of the text is adequate. However, the formatting of the PDF is sometimes weak. For example, the textbook has a number of pages with large blank spaces and other pages are taken up with large photos or graphics. The number of pages (and cost of printing) could have been reduced, or more graphics added to maximize utility.

I found no grammatical errors.

Text appears to be culturally sensitive. I appreciated the inclusion of the content about avoiding biased language (chapter 11).

Instructors who adopt this book would likely benefit from either selecting certain chapters/modules and/or integrating multiple texts together to address the shortcomings of this text. Further, the sole focus on psychology limits the use of this textbook for introductory research methods for other disciplines (e.g., social work, sociology).

Reviewed by Pramit Nadpara, Assistant Professor, Virginia Commonwealth University on 4/11/17

The textbook provides good information in certain areas, but not comprehensive information in others. The text provides practical information; the section on survey development, in particular, was good. Additional information on sampling strategies would have been beneficial for the readers.

There are no errors.

Research methods is a common topic, and its fundamentals will not change over the years. Therefore, the book is relevant and will not become obsolete.

Clarity rating: 3

The text in the book is clear. Certain aspects of the text could have been presented more clearly. For example, the section on main effects and interactions covers concepts that students may have difficulty understanding. Those areas could be explained more clearly with an example.

Consistency rating: 3

Graphs in the book lack titles and variable names. Also, the format of the chapter title page needs to be consistent.

At times there were related topics spread across several chapters. This could be corrected for a better read by the audience.

The book text is very clear, and the flow from one topic to the next was adequate. However, having an outline would help the reader.

The PDF copy of the book was an easy read. There were a few links missing, though.

There were no grammatical errors.

The text is not offensive, and the examples in it are mostly based on historical US-based experiments.

I would start off by saying that I am a supporter of the Open Textbook concept. In this day and age, there is a variety of research methods books/texts available on the market. While this book covers research methods basics, it cannot be recommended in its current form as an acceptable alternative to the standard text. Modifications to the text as recommended by myself and other reviewers might improve the quality of this book in the future.

Reviewed by Meghan Babcock, Instructor, University of Texas at Arlington on 4/11/17

This text includes all the important areas featured in other Research Methods textbooks, presented in a logical order. The text includes great examples and provides the references, which can be assigned as supplemental readings. In addition, the chapters end with exercises that can be completed in class or as part of a laboratory assignment. This text would be a great addition to a Research Methods course or an Introductory Statistics course for Psychology majors.

The content is accurate. I did not find any errors and the material is unbiased.

Yes - the content is up to date and would be easy to update if/when necessary.

The text is written at an appropriate level for undergraduate students and explains important terminology. The research studies that the author references are ones that undergraduate psychology majors should be familiar with. The only section that was questionable to me was that on multiple regression in section 8.3 (Complex Correlational Designs). I am unaware of other introductory Research Methods textbooks that cover this analysis, especially without describing simple regression first.

The text is consistent in terms of terminology. The framework is also consistent: the chapters begin with Learning Objectives and end with Key Takeaways and Exercises.

The text is divisible into smaller reading sections - possibly too many. The sections are brief, and in some instances too brief (e.g., the section on qualitative research). I think that the section headers are helpful for instructors who plan on using this text in conjunction with another text in their course.

The topics were presented in a logical fashion and are similar to other published Research Methods texts. The writing is very clear and great examples are provided. I think that some of the sections are rather brief and more information and examples could be provided.

I did not see any interface issues. All of the links worked properly and the tables and figures were accurate and free of errors. I particularly liked the figures in section 5.2 on reliability of measurement.

There are three comments that I have about the interface, however. First, I was expecting the keywords in blue font to be linked to a glossary, but they were not. I would have appreciated this feature. Second, I read this text as a PDF on an iPad, and this version was lacking the Table of Contents (TOC) feature. Although I was able to view the TOC in different versions, I would have appreciated it in the PDF version. Also, it would be nice if the TOC were clickable (i.e., you could click on a section and it automatically directed you to that section). Third, I think the reader of this text would benefit from a glossary at the end of each chapter and/or an index at the end of the text. The "Key Takeaways" sections at the end of each chapter were helpful, but I think that a glossary would be a nice addition as well.

I did not notice any grammatical errors of any kind. The text was easy to read and I think that undergraduate students would agree.

The text was not insensitive or offensive to any races, ethnicities, or backgrounds. I appreciated the section on avoiding biased language when writing manuscripts (e.g., using 'children with learning disabilities' instead of 'special children' or using 'African American' instead of 'minority').

I think that this text would be a nice addition to a Research Methods & Statistics course in psychology. There are some sections that I found particularly helpful: (1) 2.2 and 2.3 - the author gives detailed information about generating research questions and reviewing the literature; (2) 9.2 - this section focuses on constructing survey questionnaires; (3) 11.2 and 11.3 - the author talks about writing a research report and about presenting at conferences. These sections will be great additions to an undergraduate Research Methods course. The brief introduction to APA style was also helpful, but should be supplemented with the most recent APA style manual.

Reviewed by Shannon Layman, Lecturer, University of Texas at Arlington on 4/11/17

The sections in this textbook are overall more brief than in previous Methods texts that I have used. Sometimes this brevity is helpful in terms of getting to the point of the text and moving on. In other cases, some topics could use a bit more detail to establish a better foundation of the content before moving on to examples and/or the next topic.

I did not find any incorrect information or gross language issues.

Basic statistical and/or methodological texts tend to stay current and up-to-date because the topics in this field have not changed over the decades. Any updated methodologies would be found in a more advanced methods text.

The text is very clear, and the ideas are easy to follow and presented in a logical manner. The most helpful thing about this textbook is that the author arrives at the point of the topic very quickly. Another helpful point about this textbook is the relevancy of the examples used. The examples appear to be accessible to a wide audience and do not require specialization or previous knowledge of other fields of psychology.

I feel this text is very consistent throughout. The ideas build on each other and no terms are discussed in later chapters without being established in previous chapters.

Each chapter had multiple subsections which would allow for smaller reading sections throughout the course. The amount of content in each section and chapter appeared to be less than what I have encountered in other Methods texts.

The organization of the topics in this textbook follows the same or similar organization that I see in other textbooks. As I mentioned previously, the ideas build very well throughout the text.

I did not find any issues with navigation or distortion of the figures in the text.

There were not any obvious and/or egregious grammatical errors that I encountered in this text.

This topic is not really an issue with a Methods textbook as the topics are more so conceptual as opposed to topical. That being said, I did not see an issue with any examples used.

I have no other comments than what I addressed previously.

Reviewed by Sarah Allred, Associate Professor, Rutgers University, Camden on 2/8/17

Mixed. For some topics, there is more (and more practical) information than in most textbooks. I appreciated the very practical advice to students about how to plot data (in statistics chapters). Similarly, there is practical advice about how to comply with ethical guidelines. The section on item development in surveys was very good.

On the other hand, there is far too little information about some subjects. For example, independent and dependent variables are introduced in passing in an early chapter and then referred to only much later in the text. In my experience, students have a surprisingly difficult time grasping this concept. Another important example is sampling; I would have preferred much more information on types of samples and sampling techniques, and the problems that arise from poor sampling. A third example is the introduction to basic experimental design. Variables, measurement, validity, and reliability are all introduced in one chapter.

I did not see an index or glossary.

I found no errors.

The fundamentals of research methods do not change much. Given the current replication crisis in psychology, it might be helpful to have something about replicability.

Mixed. The text itself is spare and clear. The style of the book is to explain a concept in very few words. There are some excellent aspects of this, but on the other hand, there are some concepts that students have a very difficult time understanding if they are not embedded in concrete examples. For example, the section on main effects and interactions shows bar graphs of interactions, but this is presented without variable names or axis titles, and separate from any specific experiment.

Sometimes the chapter structure is laid out on the title page, and other times it is not. Some graphs lack titles and variable names.

The chapters can stand alone, but sometimes I found conceptually similar pieces spread across several chapters, and conceptually different pieces in the same chapters.

The individual sentences and paragraphs are always very clear. However, I felt that more tables/outlines of major concepts would have been helpful. For example, perhaps a flow chart of different kinds of experimental designs would be useful. (See section on comprehensiveness for more about organization).

The flow from one topic to the next was adequate.

I read the pdf. Perhaps the interface is more pleasant on other devices, but I found the different formats and fonts in images/captions/main text/figure labels distracting. Many of the instances of apparently hyperlinked (blue) text do not link to anything.

I found no grammatical errors, and prose is standard academic English.

Like most psychology textbooks available in the US, examples are focused on important experiments in US history.

I really wanted to be happy with this text. I am a supporter of the Open Textbook concept, and I wanted to find this book an acceptable alternative to the variety of Research Methods texts I’ve used. Unfortunately, I cannot recommend this book as superior in quality.

Reviewed by Joel Malin, Assistant Professor, Miami University on 8/21/16

This textbook covers all or nearly all of what I believe are important topics to provide an introduction to research methods in psychology. One minor issue is that the pdf version, which I reviewed, does not include an index or a glossary. As such, it may be difficult for readers to zero in on material that they need, and/or to get a full sense of what will be covered and in what order.

I did not notice errors.

The book provides a solid overview of key issues related to introductory research methods, many of which are nearly timeless.

The writing is clear and accessible. It was easy and pleasing to read.

Terms are clearly defined and build upon each other as the book progresses.

I believe the text is organized in such a way that it could be easily divided into smaller sections.

The order in which material is presented seems to be well thought out and sensible.

I did not notice any issues with the interface. I reviewed the pdf version and thought the images were very helpful.

The book is written in a culturally relevant manner.

Reviewed by Abbey Dvorak, Assistant Professor, University of Kansas on 8/21/16

The text includes basic, essential information needed for students in an introductory research methods course. In addition, the text includes three chapters (i.e., research ethics, theory, and APA style) that are typically absent from or inadequately covered in similar texts. However, I did have some areas of concern regarding the coverage of qualitative and mixed methods approaches, and nonparametric tests. Although the author advocates for the research question to guide the choice of approach and design, minimal attention is given to the various qualitative designs (e.g., phenomenology, narrative, participatory action, etc.) beyond grounded theory and case studies, with no mention of the different types of mixed methods designs (e.g., concurrent, explanatory, exploratory) that are prevalent today. In addition, common nonparametric tests (e.g., Wilcoxon, Mann-Whitney, etc.) and parametric tests for categorical data (e.g., chi-square, Fisher’s exact, etc.) are not mentioned.

The text overall is accurate and free of errors. I noticed in the qualitative research sub-section, the author describes qualitative research in general, but does not mention common practices associated with qualitative research, such as transcribing interviews, coding data (e.g., different approaches to coding, different types of codes), and data analysis procedures. The information that is included appears accurate.

The text appears up-to-date and includes basic research information and classic examples that rarely change, which may allow the text to be used for many years. However, the author may want to add information about mixed methods research, a growing research approach, in order for the text to stay relevant across time.

The text includes clear, accessible, straightforward language with minimal jargon. When the author introduces a new term, the term is immediately defined and described. The author also provides interesting examples to clarify and expand understanding of terms and concepts throughout the text.

The text is internally consistent and uses similar language and vocabulary throughout. The author uses real-life examples across chapters in order to provide depth and insight into the information. In addition, the vocabulary, concepts, and organization are consistent with other research methods textbooks.

The modules are short, concise, and manageable for students; the material within each module is logically focused and related to each other. I may move the modules and the sub-topics within them into a slightly different order for my class, and add the information mentioned above, but overall, this is very good.

The author presents topics and structures chapters in a logical and organized manner. The epub and online version do not include page numbers in the text, but the pdf does; this may be confusing when referencing the text or answering student questions. The book ends somewhat abruptly after the chapter on inferential statistics; the text may benefit from a concluding chapter to bring everything together, perhaps with a culminating example that walks the reader through creating the research question, choosing a research approach/design, etc., all the way to writing the research report.

I used and compared the pdf, epub, and online versions of the text. The epub and online versions include a clickable table of contents, but the pdf does not. The table format is inconsistent across the three versions; in the epub version (viewed through ibooks), the table data does not always line up correctly, making it difficult to interpret quickly. In the pdf and online versions, the table format looks different, but the data are lined up. No index made it difficult to quickly find areas of interest in the text; however, I could use the Find/Search functions in all three versions to search and find needed items.

As I read through this text, I did not detect any glaring grammatical errors. Overall, I think the text is written quite well in a style that is accessible to students.

The author uses inclusive, person-first language, and the text does not seem to be offensive or insensitive. As I read, I did notice that topics such as diversity and cultural competency are absent.

I enjoyed reading this text and am very excited to have a free research methods text for my students that I may supplement as needed. I wish there was a test question bank and/or flashcards for my students to help them study, but perhaps that could be added in the future. Overall, this is a great resource!

Reviewed by Karen Pikula, Psychology Instructor PhD, Central Lakes College on 1/7/16


The text covers all the areas and ideas of the subject of research methods in psychology for the learner that is just entering the field. The authors cover all of the content of an introductory research methods textbook and use exemplary examples that make those concepts relevant to a beginning researcher. As the authors state, the material is presented in such a manner as to encourage learners to not only be effective consumers of current research but also engage as critical thinkers in the many diverse situations one encounters in everyday life.

The content is accurate, error free, and unbiased. It explains both quantitative and qualitative methods in an unbiased manner. It is a bit slim on qualitative. It would be nice to have a bit more information on, for example, creating interview questions, coding, and qualitative data analysis.

The text is up to date, having just been revised. This revision was authored by Rajiv Jhangiani (Capilano University, North Vancouver) and includes the addition of a table of contents and cover page that the original text did not have, changes to Chapter 3 (Research Ethics) to include a contemporary example of an ethical breach and to reflect Canadian ethical guidelines and privacy laws, and additional information regarding online data collection in Chapter 9 (Survey Research). Jhangiani has corrected errors in the text and formulae, as well as changing spelling from US to Canadian conventions. The text is also now available in an inexpensive hard copy which students can purchase online or college bookstores can stock. This makes the text current, and updates should be minimal.

The text is very easy to read and also very interesting as the authors supplement content with amazing real life examples.

The text is internally consistent in terms of terminology and framework.

This text is easily and readily divisible into smaller reading sections that can be assigned at different points within a course. I am going to use this text in conjunction with the OER OpenStax Psychology text for my Honors Psychology course. I currently use the OER OpenStax Psychology textbook for my Positive Psychology course as well as my General Psychology course.

The topics in the text are presented in a logical and clear fashion. The way they are presented allows the text to be used in conjunction with other textbooks as a secondary resource.

The text is free of significant interface issues. It is written in a manner that follows the natural process of doing research.

The text contained no noted grammatical errors.

The text is not culturally insensitive or offensive and actually has been revised to accommodate Canadian ethical guidelines as well as those of the APA.

I have to say that I am excited to have found this revised edition. My students will be so happy that there is also a reasonably priced hard copy for them to purchase. They love the OpenStax Psychology text with the hard copy available from our bookstore. I do wish there were PowerPoints available for the text as well as a test bank. That is always a bonus!

Reviewed by Alyssa Gibbons, Instructor, Colorado State University on 1/7/16


This text covers everything I would consider essential for a first course in research methods, including some areas that are not consistently found in introductory texts (e.g., qualitative research, criticisms of null hypothesis significance testing). The chapters on ethics (Ch. 3) and theory (Ch. 4) are more comprehensive than most I have seen at this level, but not to the extent of information overload; rather, they anticipate and address many questions that undergraduates often have about these issues.

There is no index or table of contents provided in the PDF, and the table of contents on the website is very broad, but the material is well organized and it would not be hard for an instructor to create such a table. Chapter 2.1 is intended to be an introduction to several key terms and ideas (e.g., variable, correlation) that could serve as a sort of glossary.

I found the text to be highly accurate throughout; terms are defined precisely and correctly.

Where there are controversies or differences of opinion in the field, the author presents both sides of the argument in a respectful and unbiased manner. He explicitly discourages students from dismissing any one approach as inherently flawed, discussing not only the advantages and disadvantages of all methods (including nonexperimental ones) but also ways researchers address the disadvantages.

In several places, the textbook explicitly addresses the history and development of various methods (e.g., qualitative research, null hypothesis significance testing) and the ways in which researchers' views have changed. This allows the author to present current thinking and debate in these areas yet still expose students to older ideas they are likely to encounter as they read the research literature. I think this approach sets students up well to encounter future methodological advances; as a field, we refine our methods over time. I think the author could easily integrate new developments in future editions, or instructors could introduce such developments as supplementary material without creating confusion by contradicting the text.

The examples are generally drawn from classic psychological studies that have held up well over time; I think they will appeal to students for some time to come and not appear dated.

The only area in which I did not feel the content was entirely up to date was in the area of psychological measurement; Chapter 5.2 is based on the traditional view and not the more comprehensive modern or holistic view as presented in the 1999 AERA/APA Standards for Educational and Psychological Measurement. However, a comprehensive treatment of measurement validity is probably not necessary for most undergraduates at this stage, and they will certainly encounter the older framework in the research literature.

The textbook does an excellent job of presenting concepts in simple, accessible language without introducing error by oversimplification. The author consistently anticipates common points of confusion, clarifies terms, and even suggests ways for students to remember key distinctions. Terms are clearly and concretely defined when they are introduced. In contrast to many texts I have used, the terms that are highlighted in the text are actually the terms I would want my students to remember and study; the author refrains from using psychological jargon that is not central to the concepts he is discussing.

I noticed no major inconsistencies or gaps.

The division of sections within each chapter is useful; although I liked the overall organization of the text, there were points at which I would likely assign sections in a slightly different order and I felt I could do this easily without loss of continuity. The one place I would have liked more modularity was in the discussion of inferential statistics: t-tests, ANOVA, and Pearson's r are all covered within Chapter 13.2. On the one hand, this enables students to see the relationships and similarities among these tests, but on the other, this is a lot for students to take in at once.

I found the overall organization of the book to be quite logical, mirroring the sequence of steps a researcher would use to develop a research question, design a study, etc. As discussed above, the modularity of the book makes it easy to reorder sections to suit the structure of a particular class (for example, I might have students read the section on APA writing earlier in the semester as they begin drafting their own research proposals). I like the inclusion of ethics very early on in the text, establishing the importance of this topic for all research design choices.

One organizational feature I particularly appreciated was the consistent integration of conceptual and practical ideas; for example, in the discussion of psychological measurement, reliability and validity are discussed alongside the importance of giving clear instructions and making sure participants cannot be identified by their writing implements. This gives students an accurate and honest picture of the research process - some of the choices we make are driven by scientific ideals and some are driven by practical lessons learned. Students often have questions related to these mundane aspects of conducting research and it is helpful to have them so clearly addressed.

Although I didn't encounter any problems per se with the interface, I do think it could be made more user-friendly. For example, references to figures and tables are highlighted in blue, appearing to be hyperlinks, but they were not. Having such links, as well as a linked, easily-navigable and detailed table of contents, would also be helpful (and useful to students who use assistive technology).

I noticed no grammatical errors.

Where necessary, the author uses inclusive language and there is nothing that seems clearly offensive. The examples generally reflect American psychology research, but the focus is on the methods used and not the participants or cultural context. The text could be more intentionally or proactively inclusive, but it is not insensitive or exclusive.

I am generally hard to please when it comes to textbooks, but I found very little to quibble with in this one. It is a very well-written and accessible introduction to research methods that meets students where they are, addressing their common questions, misconceptions, and concerns. Although it's not flashy, the figures, graphics, and extra resources provided are clear, helpful, and relevant.

Reviewed by Moin Syed, Assistant Professor, University of Minnesota on 6/10/15


The text is thorough in terms of covering introductory concepts that are central to experimental and correlational/association designs. I find the general exclusion of qualitative and mixed methods designs hard to defend (despite some researchers’ distaste for the methods). While these approaches were less commonly used in the recent past, they were prevalent in the early years of psychology and are ascending once again. It strikes me as odd to just ignore two whole families of methods that are used within the practice of psychology—definitely not a sustainable approach.

I do very much appreciate the emphasis on those who will both practice and consume psychology, given the wide variety of undergraduate career paths.

One glaring omission is a Table of Contents within the PDF. It would be nice to make this a linked PDF, so that clicking on the entry in a TOC (or cross-references) would jump the reader to the relevant section.

I did not see any errors. The chapter on theory is not as clear as it could be. The section “what is theory” is not very clear, and these are difficult concepts (the difference between theory, hypothesis, etc.). A bit more time spent here would have been good. Also, the discussion of functional, mechanistic, and typological theories leaves out the fourth of Pepper’s metaphors: contextualism. I’m not sure whether that was intentional or accidental, but it is noticeable!

This is a research methods text focused on experimental and association designs. The basics of these designs do not change a whole lot over time, so there is little likelihood that the main content will become obsolete anytime soon. Some of the examples used are a bit dated, but then again most of them are considered “classics” in the field, which I think are important to retain (and there is at least one “new classic” included in the ethics section, namely the fraudulent research linking autism to the MMR vaccine).

The text is extremely clear and accessible. In fact, it may even be *too* simple for undergraduate use. Then again, students often struggle with methods, so simplicity is good, and the simplicity can also make the book marketable to high school courses (although I doubt many high schools have methods courses).

Yes, quite consistent throughout. Carrying through the same examples into different chapters is a major strength of the text.

I don’t anticipate any problems here.

The book flows well, with brief sections. I do wonder if maybe the sections are too brief? Perhaps too many check-ins? The “key take-aways” usually come after only a few pages. As mentioned above, the book is written at a very basic level, so this brevity is consistent with that approach. It is not a problem, per se, but those considering adopting the text should be aware of this aspect.

No problems here.

I did not detect any grammatical errors. The text flows very well.

The book is fairly typical of American research methods books in that it only focuses on the U.S. context and draws its examples from “mainstream” psychology (e.g., little inclusion of ethnic minority or cross-cultural psychology). However, the text is certainly not insensitive or offensive in any way.

Nice book, thanks for writing it!

Reviewed by Rajiv Jhangiani, Instructor, Capilano University on 10/9/13


The text is well organized and written, integrates excellent pedagogical features, and covers all of the traditional areas of the topic admirably. The final two chapters provide a good bridge between the research methods course and the follow-up course on behavioural statistics. The text integrates real psychological measures, harnesses students' existing knowledge from introductory psychology, includes well-chosen examples from real life and research, and even includes a very practical chapter on the use of APA style for writing and referencing. On the other hand, it does not include a table of contents or an index, both of which are highly desirable. The one chapter that requires significant revision is Chapter 3 (Research Ethics), which is based on the US codes of ethics (e.g., Federal policy & APA code) and does not include any mention of the Canadian Tri-Council Policy Statement.

The very few errors I found include the following:

1. The text should read "The fact that his F score…" instead of "The fact that his t score…" on page 364.
2. Some formulae are missing the line that separates the numerator from the denominator (see pages 306, 311, 315, and 361).
3. Table 12.3 on page 310 lists the variance as 288 when it is 28.8.

The text is up-to-date and will not soon lose relevance. The only things I would add are a brief discussion of the contemporary case of Diederik Stapel's research fraud in the chapter on Research Ethics, as well as some research concerning the external validity of web-based studies (e.g., Gosling et al.'s 2004 article in American Psychologist).

Overall, the style of writing makes this text highly accessible. The writing flows well, is well organized, and includes excellent, detailed, and clear examples and explanations for concepts. The examples often build on concepts or theories students would have covered in their introductory psychology course. Some constructive criticism:

1. When discussing z scores on page 311 it might have been helpful to point out that the mean and SD for a set of calculated z scores are 0 and 1, respectively. Good students will come to this realization themselves, but it is not a bad thing to point it out nonetheless.
2. The introduction of the concept of multiple regression might be difficult for some students to grasp.
3. The only place where I felt short of an explanation was in the use of a research example to demonstrate the use of a line graph on page 318. In this case the explanation in question does not pertain to the line graph itself but to the result of the study used, which is so fascinating that students will wish for the researchers' explanation for it.

The text is internally consistent.

The text is organized very well into chapters, modules within each chapter, and learning objectives within each module. Each module also includes useful exercises that help consolidate learning.

As mentioned earlier, the style of writing makes this text highly accessible. The writing flows well, is well organized, and includes excellent, detailed, and clear examples and explanations for concepts. The examples often build on concepts or theories students would have covered in their introductory psychology course. Only rarely did I feel that the author could have assisted the student by demonstrating the step-by-step calculation of a statistic (e.g., on page 322 for the calculation of Pearson's r).

The images, graphs, and charts are clear. The only serious issues that hamper navigation are the lack of a table of contents and an index. Many of the graphs will need to be printed in colour (or otherwise modified) for the students to follow the explanations provided in the text.

The text is written rather well and is free from grammatical errors. Of course, spellings are in the US convention.

The text is not culturally insensitive or offensive. Of course, it is not a Canadian edition and so many of the examples (all of which are easy to comprehend) come from a US context.

I have covered most of these issues in my earlier comments. The only things left to mention are that the author should have clearly distinguished between mundane and psychological realism, and that, in my opinion, the threats to internal validity could have been grouped together and might have been closer to an exhaustive list. This review originated in the BC Open Textbook Collection and is licensed under CC BY-ND.

Table of Contents

  • Chapter 1: The Science of Psychology
  • Chapter 2: Overview of the Scientific Method
  • Chapter 3: Research Ethics
  • Chapter 4: Psychological Measurement
  • Chapter 5: Experimental Research
  • Chapter 6: Non-experimental Research
  • Chapter 7: Survey Research
  • Chapter 8: Quasi-Experimental Research
  • Chapter 9: Factorial Designs
  • Chapter 10: Single-Subject Research
  • Chapter 11: Presenting Your Research
  • Chapter 12: Descriptive Statistics
  • Chapter 13: Inferential Statistics

Ancillary Material

  • Kwantlen Polytechnic University

About the Book

This fourth edition (published in 2019) was co-authored by Rajiv S. Jhangiani (Kwantlen Polytechnic University), Carrie Cuttler (Washington State University), and Dana C. Leighton (Texas A&M University—Texarkana) and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Revisions throughout the current edition include changing the chapter and section numbering system to better accommodate adaptations that remove or reorder chapters; continued reversion from the Canadian edition; general grammatical edits; replacement of “he/she” with “they” and “his/her” with “their”; removal or update of dead links; embedding of videos that were previously not embedded; relocation of key takeaways and exercises from the end of each chapter section to the end of each chapter; and a new cover design.

About the Contributors

Dr. Carrie Cuttler received her Ph.D. in Psychology from the University of British Columbia. She has been teaching research methods and statistics for over a decade. She is currently an Assistant Professor in the Department of Psychology at Washington State University, where she primarily studies the acute and chronic effects of cannabis on cognition, mental health, and physical health. Dr. Cuttler was also an OER Research Fellow with the Center for Open Education and she conducts research on open educational resources. She has over 50 publications including the following two published books:  A Student Guide for SPSS (1st and 2nd edition)  and  Research Methods in Psychology: Student Lab Guide.  Finally, she edited another OER entitled  Essentials of Abnormal Psychology. In her spare time, she likes to travel, hike, bike, run, and watch movies with her husband and son. You can find her online at @carriecuttler or carriecuttler.com.

Dr. Rajiv Jhangiani is the Associate Vice Provost, Open Education at Kwantlen Polytechnic University in British Columbia. He is an internationally known advocate for open education whose research and practice focuses on open educational resources, student-centered pedagogies, and the scholarship of teaching and learning. Rajiv is a co-founder of the Open Pedagogy Notebook, an Ambassador for the Center for Open Science, and serves on the BC Open Education Advisory Committee. He formerly served as an Open Education Advisor and Senior Open Education Research & Advocacy Fellow with BCcampus, an OER Research Fellow with the Open Education Group, a Faculty Workshop Facilitator with the Open Textbook Network, and a Faculty Fellow with the BC Open Textbook Project. A co-author of three open textbooks in Psychology, his most recent book is  Open: The Philosophy and Practices that are Revolutionizing Education and Science (2017). You can find him online at @thatpsychprof or thatpsychprof.com.

Dr. Dana C. Leighton is Assistant Professor of Psychology in the College of Arts, Science, and Education at Texas A&M University—Texarkana. He earned his Ph.D. from the University of Arkansas, and has 15 years experience teaching across the psychology curriculum at community colleges, liberal arts colleges, and research universities. Dr. Leighton’s social psychology research lab studies intergroup relations, and routinely includes undergraduate students as researchers. He is also Chair of the university’s Institutional Review Board. Recently he has been researching and writing about the use of open science research practices by undergraduate researchers to increase diversity, justice, and sustainability in psychological science. He has published on his teaching methods in eBooks from the Society for the Teaching of Psychology, presented his methods at regional and national conferences, and received grants to develop new teaching methods. His teaching interests are in undergraduate research, writing skills, and online student engagement. For more about Dr. Leighton see http://www.danaleighton.net and http://danaleighton.edublogs.org


The newest release in the APA Handbooks in Psychology ® series


APA Handbook of Research Methods in Psychology


Table of Contents

Volume 1 — Foundations, Planning, Measures, and Psychometrics

Part I. Philosophical, Ethical, and Societal Underpinnings of Psychological Research (Chapters 1 – 6)
Part II. Planning Research (Chapters 7 – 12)
Part III. Measurement Methods (Chapters 13 – 32)
Part IV. Psychometrics (Chapters 33 – 38)

Volume 2 — Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological

Part I. Qualitative Research Methods (Chapters 1 – 11)
Part II. Working Across Epistemologies, Methodologies, and Methods (Chapters 12 – 15)
Part III. Sampling Across People and Time (Chapters 16 – 19)
Part IV. Building and Testing Methods (Chapters 20 – 26)
Part V. Designs Involving Experimental Manipulations (Chapters 27 – 32)
Part VI. Quantitative Research Designs Involving Single Participants or Units (Chapters 33 – 34)
Part VII. Designs in Neuropsychology and Biological Psychology (Chapters 35 – 38)

Volume 3 — Data Analysis and Research Publication

Part I. Quantitative Data Analysis (Chapters 1 – 24)
Part II. Publishing and the Publication Process (Chapters 25 – 27)

With significant new and updated content across dozens of chapters, the second edition of the APA Handbook of Research Methods in Psychology presents the most exhaustive treatment available of the techniques psychologists and others have developed to help them pursue a shared understanding of why humans think, feel, and behave the way they do. Across three volumes, the chapters in this indispensable handbook address broad, crosscutting issues faced by researchers, including the philosophical, ethical, and societal underpinnings of psychological research. Newly written chapters cover topics such as:

  • Literature searching
  • Workflow and reproducibility
  • Research funding
  • Neuroimaging
  • Data analysis methods
  • Navigating the publishing process
  • Ethics in scholarly authorship
  • Research data management and sharing

This resource serves as an ideal reference for many different courses, including:

  • Applied Psychology
  • Clinical Psychology
  • Cognitive Psychology
  • Developmental Psychology
  • Education Psychology
  • Human Development
  • Neuroscience
  • Public health

Harris Cooper (Duke University), Marc N. Coutanche (University of Pittsburgh), Linda M. McMullen (University of Saskatchewan, Canada), and A.T. Panter (University of North Carolina at Chapel Hill)

ISBN: 978-1-4338-4123-1


The process and mechanisms of personality change

  • Joshua J. Jackson
  • Amanda J. Wright


Determinants of behaviour and their efficacy as targets of behavioural change interventions

Changing behaviours might be central to responding to societal issues such as climate change and pandemics. In this Review, Albarracín et al. synthesize meta-analyses of individual and social-structural determinants of behaviour and the efficacy of behavioural change interventions that target them across domains to identify general principles that can inform future intervention decisions.

  • Dolores Albarracín
  • Bita Fayaz-Farkhad
  • Javier A. Granados Samayoa


Mechanisms linking social media use to adolescent mental health vulnerability

Declines in adolescent mental health over the past decade have been attributed to social media, but the empirical evidence is mixed. In this Review, Orben et al. describe the mechanisms by which social media could amplify the developmental changes that increase adolescents’ mental health vulnerability.

  • Adrian Meier
  • Sarah-Jayne Blakemore


Using large language models in psychology

Large language models (LLMs), which can generate and score text in human-like ways, have the potential to advance psychological measurement, experimentation and practice. In this Perspective, Demszky and colleagues describe how LLMs work, concerns about using them for psychological purposes, and how these concerns might be addressed.

  • Dorottya Demszky
  • James W. Pennebaker

Current issue

Using virtual reality to understand mechanisms of therapeutic change.

  • Sigal Zilcha-Mano
  • Tal Krasovsky

Sampling decisions in developmental psychology

  • Katherine McAuliffe

From the lab to a career in graduate education

  • Teresa Schubert

The development of human causal learning and reasoning

  • Mariel K. Goddu
  • Alison Gopnik

Optimizing work and off-job motivation through proactive recovery strategies

  • Miika Kujanpää
  • Anja H. Olafsen

Volume 3 Issue 5



Supporting the next generation of psychologists

This collection highlights ideas, recommendations, and personal stories that aim to improve graduate education, support trainees to their fullest potential, and demystify non-academic career paths. Updated with new content regularly.


First anniversary collection

To celebrate the first anniversary of Nature Reviews Psychology, we assembled a Collection showcasing articles that inspired our first 12 covers.


Latest Reviews & Analysis


Theoretical and empirical advances in understanding musical rhythm, beat and metre

Rhythmic elements including beat and metre are integral to human experiences of music. In this Review, Snyder and colleagues discuss leading theories of rhythm perception and synthesize relevant behavioural, neural and genetic findings.

  • Joel S. Snyder
  • Reyna L. Gordon
  • Erin E. Hannon


Stability and malleability of emotional autobiographical memories

Emotional memories can be vivid and detailed but are prone to change over time. In this Review, Wardell and Palombo detail the malleability of emotional autobiographical memories, the role of narrative and the use of these memories in future thinking.

  • Victoria Wardell
  • Daniela J. Palombo


Humans have a unique capacity for objective and general causal understanding. In this Review, Goddu and Gopnik describe the development of causal learning and reasoning abilities during evolution and across childhood.

Understanding the development of reward learning through the lens of meta-learning

  • Kate Nussenbaum
  • Catherine A. Hartley

Continuity fields enhance visual perception through positive serial dependence

  • Mauro Manassi
  • David Whitney

Uniquely human intelligence arose from expanded information capacity

  • Jessica F. Cantlon
  • Steven T. Piantadosi

News & Comment


From the lab to a career in behaviour change

Nature Reviews Psychology is interviewing individuals with doctoral degrees in psychology who pursued non-academic careers. We spoke with Erik Simmons about his journey from a postdoctoral research fellow to a behavioural designer.

Shaping vision through drawing

  • Kushin Mukherjee

Scholar activism benefits science and society

An artificial boundary is often drawn between research and activism, but scholar activism can be good for science and for society when it centres the needs of people who are multiply marginalized — especially during the current climate crisis.

  • José M. Causadias
  • Leoandra Onnie Rogers
  • Tiffany Yip

Situational models of implicit bias

  • Maximilian A. Primbs

Contemplating cancer screening

  • Nathan J. Harrison

Mapping the claustrum to elucidate consciousness

  • Navona Calarco



1.2 Research Methods in Psychology

4 min read • January 5, 2023

Sadiyya Holsey

Jillian Holbrook

Dalia Savy

Overview of Research Methods

There are various types of research methods in psychology with different purposes, strengths, and weaknesses.

Whenever researchers want to establish causation, they run an experiment.

One experiment you'll learn about in Unit 9, run by Solomon Asch, investigated the extent to which a person will conform to a group's ideas.

Image Courtesy of Wikipedia.

Each person in the room would have to look at these lines above and state which one they thought was of similar length to the original line. The answer was, of course, obvious, but Asch wanted to see if the "real participant" would conform to the views of the rest of the group.

Asch gathered together what we could call "fake participants" and told them not to say line C. The "real participant" would then hear wrong answers, but they did not want to be the odd one out, so they conformed with the rest of the group and represented the majority view.

In this experiment, the "real participant" was the actual subject of the study (Asch's control condition had participants answer without group pressure), and about 75% of them, over 12 trials, conformed at least once.

Correlational Study

A correlational study can examine the relationship between almost any two variables. Say you wanted to see if there was an association between the number of hours a teenager sleeps and their grades in high school. If there was a correlation, we cannot say that sleeping a greater number of hours causes higher grades. However, we can determine that the two are related to each other. 💤

Remember in psychology that a correlation does not prove causation!
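To make the idea concrete, here is a minimal sketch (in Python, using invented sleep and grade numbers, not data from any real study) of how a correlation coefficient quantifies such an association:

```python
import statistics

# Hypothetical data: hours of sleep and exam grades for eight students.
sleep = [5, 6, 6, 7, 7, 8, 8, 9]
grades = [60, 62, 65, 70, 72, 75, 78, 85]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(sleep, grades)
print(round(r, 2))  # prints 0.98 — a strong positive correlation
```

A coefficient near +1 only tells us the two variables move together; it says nothing about which (if either) causes the other.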

Survey Research

Surveys are used all the time, especially in advertising and marketing. They are often distributed to a large number of people, and the results are returned to researchers.

Naturalistic Observation

If a student wanted to observe how many people fully stop at a stop sign, they could watch the cars from a distance and record their data. This is a naturalistic observation since the student is in no way influencing the results.

Case Studies

A notable psychological case study is the study of Phineas Gage :

Image Courtesy of Vermont Journal

Phineas Gage was a railroad construction foreman who survived a severe brain injury in 1848. The accident occurred when an iron rod was accidentally driven through Gage's skull, damaging his frontal lobes . Despite the severity of the injury, Gage was able to walk and talk immediately after the accident and appeared to be relatively uninjured.

However, Gage's personality underwent a dramatic change following the injury. He became impulsive, irresponsible, and prone to outbursts of anger, which were completely out of character for him before the accident. Gage's case is famous in the history of psychology because it was one of the first to suggest that damage to the frontal lobes of the brain can have significant effects on personality and behavior.

Key Terms to Review (27)

Association

Case Studies

Cause and Effect

Control Group

Correlational Studies

Cross-Sectional Studies

Cross-Sectional Study

Ethical Issues

Experiments

Frontal Lobes

Generalize Results

Hawthorne Effect

Human Development Stages

Independent Variables

Longitudinal Studies

Naturalistic Observations

Personality Change

Phineas Gage

Research Methods

Response Rates

School Grades

Solomon Asch



Introduction to Research Methods in Psychology

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Emily is a board-certified science editor who has worked with top digital publishing brands like Voices for Biodiversity, Study.com, GoodTherapy, Vox, and Verywell.


There are several different research methods in psychology , each of which can help researchers learn more about the way people think, feel, and behave. If you're a psychology student or just want to know the types of research in psychology, here are the main ones as well as how they work.

Three Main Types of Research in Psychology

stevecoleimages/Getty Images

Psychology research can usually be classified as one of three major types.

1. Causal or Experimental Research

When most people think of scientific experimentation, research on cause and effect is most often brought to mind. Experiments on causal relationships investigate the effect of one or more variables on one or more outcome variables. This type of research also determines if one variable causes another variable to occur or change.

An example of this type of research in psychology would be changing the length of a specific mental health treatment and measuring the effect on study participants.

2. Descriptive Research

Descriptive research seeks to depict what already exists in a group or population. Three types of psychology research utilizing this method are:

  • Case studies
  • Observational studies
  • Surveys

An example of this psychology research method would be an opinion poll to determine which presidential candidate people plan to vote for in the next election. Descriptive studies don't try to measure the effect of a variable; they seek only to describe it.

3. Relational or Correlational Research

A study that investigates the connection between two or more variables is considered relational research. The variables compared are generally already present in the group or population.

For example, a study that looks at the proportion of males and females that would purchase either a classical CD or a jazz CD would be studying the relationship between gender and music preference.

Theory vs. Hypothesis in Psychology Research

People often confuse the terms theory and hypothesis or are not quite sure of the distinctions between the two concepts. If you're a psychology student, it's essential to understand what each term means, how they differ, and how they're used in psychology research.

A theory is a well-established principle that has been developed to explain some aspect of the natural world. A theory arises from repeated observation and testing and incorporates facts, laws, predictions, and tested hypotheses that are widely accepted.

A hypothesis is a specific, testable prediction about what you expect to happen in your study. For example, an experiment designed to look at the relationship between study habits and test anxiety might have a hypothesis that states, "We predict that students with better study habits will suffer less test anxiety." Unless your study is exploratory in nature, your hypothesis should always explain what you expect to happen during the course of your experiment or research.

While the terms are sometimes used interchangeably in everyday use, the difference between a theory and a hypothesis is important when studying experimental design.

Some other important distinctions to note include:

  • A theory predicts events in general terms, while a hypothesis makes a specific prediction about a specified set of circumstances.
  • A theory has been extensively tested and is generally accepted, while a hypothesis is a speculative guess that has yet to be tested.

The Effect of Time on Research Methods in Psychology

There are two types of time dimensions that can be used in designing a research study:

  • Cross-sectional research takes place at a single point in time. All tests, measures, or variables are administered to participants on one occasion. This type of research seeks to gather data on present conditions instead of looking at the effects of a variable over a period of time.
  • Longitudinal research is a study that takes place over a period of time. Data is first collected at the beginning of the study, and may then be gathered repeatedly throughout the length of the study. Some longitudinal studies may occur over a short period of time, such as a few days, while others may take place over a period of months, years, or even decades.

The effects of aging are often investigated using longitudinal research.

Causal Relationships Between Psychology Research Variables

What do we mean when we talk about a “relationship” between variables? In psychological research, we're referring to a connection between two or more factors that we can measure or systematically vary.

One of the most important distinctions to make when discussing the relationship between variables is the meaning of causation.

A causal relationship is when one variable causes a change in another variable. These types of relationships are investigated by experimental research to determine if changes in one variable actually result in changes in another variable.

Correlational Relationships Between Psychology Research Variables

A correlation is the measurement of the relationship between two variables. These variables already occur in the group or population and are not controlled by the experimenter.

  • A positive correlation is a direct relationship where, as the amount of one variable increases, the amount of a second variable also increases.
  • In a negative correlation , as the amount of one variable goes up, the levels of another variable go down.

In both types of correlation, there is no evidence or proof that changes in one variable cause changes in the other variable. A correlation simply indicates that there is a relationship between the two variables.

The most important concept is that correlation does not equal causation. Many popular media sources make the mistake of assuming that simply because two variables are related, a causal relationship exists.


By Kendra Cherry, MSEd


PsychLogic

AQA A-LEVEL PSYCHOLOGY REVISION NOTES: RESEARCH METHODS


PSYCHOLOGY AQA  A-LEVEL UNIT 2 (7182)

The syllabus.

METHODS, TECHNIQUES & DESIGN

  • Primary and secondary data, and meta-analysis. Quantitative and qualitative data
  • Aims, operationalising variables, IV’s and DV’s
  • Hypotheses - directional and non-directional
  • Experimental design - independent groups, repeated measures, matched pairs
  • Validity – internal and external; extraneous and confounding variables; types of validity and improving validity
  • Control – random allocation, randomisation, standardisation
  • Demand characteristics and investigator effects
  • Reliability; types of reliability and improving reliability
  • Pilot studies
  • Correlation analysis – covariables and hypotheses, positive/negative correlations
  • Observational techniques – use of behavioural categories
  • Self-report techniques – design of questionnaires and interviews
  • Case studies
  • Content analysis
  • Thematic Analysis

PARTICIPANTS; ETHICS; FEATURES OF SCIENCE & SCIENTIFIC METHOD; THE ECONOMY

  • Selecting participants and sampling techniques
  • The British Psychological Society (BPS) code of ethics and ways of dealing with ethical issues
  • Forms and instructions
  • Peer review
  • Features of science: objectivity, empirical method, replicability and falsifiability
  • Paradigms and paradigm shifts
  • Reporting psychological investigations
  • The implications of psychological research for the economy

DESCRIPTIVE STATISTICS

  • Analysis and interpretation of quantitative data. Measures of central tendency - median, mean, mode. Calculating %’s. Measures of dispersion – range and standard deviation (SD)
  • Presentation and interpretation of quantitative data – graphs, histograms, bar charts, scattergrams and tables
  • Analysis and interpretation of correlational data; positive and negative correlations and the interpretation of correlation coefficients
  • Distributions: normal and skewed
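These measures can all be computed with Python's standard `statistics` module; the sketch below uses a small invented data set to show mean, median, mode, range and (population) standard deviation side by side:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # invented sample of 8 scores

mean = statistics.mean(scores)           # 40 / 8 = 5.0
median = statistics.median(scores)       # midpoint of sorted data = 4.5
mode = statistics.mode(scores)           # most frequent value = 4
value_range = max(scores) - min(scores)  # 9 - 2 = 7
sd = statistics.pstdev(scores)           # population standard deviation = 2.0

print(mean, median, mode, value_range, sd)
```

Note that `statistics.stdev` gives the sample standard deviation (dividing by n - 1), while `pstdev` treats the data as the whole population.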

INFERENTIAL STATISTICS

  • Factors affecting choice of statistics test: Spearman’s rho, Pearson’s r, Wilcoxon, Mann-Whitney, related t-test, unrelated t-test, Chi-Squared test
  • Levels of measurement – nominal, ordinal, interval
  • Procedures for statistics tests
  • Probability and significance: use of statistical tables and critical values in interpretation of significance; Type I and Type II errors
  • Introduction to statistical testing: the sign test
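The sign test listed above is simple enough to sketch from first principles: ignore ties, count the minority sign, and use the binomial distribution with p = 0.5 to ask how likely so lopsided a split is by chance. The mood scores below are invented for illustration:

```python
import math

def sign_test(before, after):
    """One-tailed sign test: P(this few minority signs or fewer) under chance (p = 0.5)."""
    diffs = [a - b for b, a in zip(before, after) if a != b]  # drop ties
    n = len(diffs)
    s = min(sum(d > 0 for d in diffs), sum(d < 0 for d in diffs))
    # Cumulative binomial probability P(X <= s) with n trials and p = 0.5
    return sum(math.comb(n, k) for k in range(s + 1)) / 2 ** n

# Invented mood scores (out of 10) for 10 participants before and after therapy.
before = [4, 5, 3, 6, 4, 5, 2, 4, 3, 5]
after = [6, 7, 5, 6, 7, 8, 4, 6, 5, 7]

p = sign_test(before, after)
print(p < 0.05)  # prints True: significant at the 5% level (one-tailed)
```

Here 9 of the 10 participants improved (one tied and is dropped), giving p = 1/512, well below the conventional 0.05 significance level.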

INTRODUCTION

Research Methods is concerned with how psychologists conduct research in an attempt to find evidence for theories . A theory without research support is really just someone’s reasoned opinion, not a proven fact .

Psychologists generally adopt a scientific approach to studying the mind and behaviour. The scientific method is based on empiricism – the belief that one can gain true knowledge of the world through the unbiased observation and measurement of observable, physical phenomena .

Laboratory experimentation is the method most associated with science as it involves the careful manipulation of variables to establish whether there are cause-effect relationships with other variables: for example, will an increase in testosterone cause an increase in aggression?

Psychologists face difficulties, however, in that they are studying highly complex, reactive creatures (humans) who tend not to behave in the predictable way that the objects of study of physics, chemistry and biology do. Equally, people put in an artificial laboratory situation who are aware they are being observed will tend not to behave in a normal, natural way. For this (and various other) reasons, psychologists have developed a variety of other means of research such as field and natural experiments , correlation studies and observations .

A debate exists within Psychology as to what extent it is desirable and/or appropriate to apply scientific methods to the study of humans. Many psychologists have argued that a strictly scientific approach reduces the complexity of human behaviour to an overly reductionist level and that human psychology can be better understood by more detailed and in depth methods such as questionnaires , interviews , case studies and content analysis .

Whereas biological approaches , behaviourism and cognitive psychology tend to favour quantitative , scientific , laboratory based approaches, psychodynamic and humanistic approaches argue for a more qualitative , descriptive approach.

The syllabus focuses on scientific approaches and how to design studies which produce valid (accurate/truthful) findings. There is also an emphasis on the statistical analysis of quantitative data .


METHODS, TECHNIQUES & DESIGN ( A-level Psychology revision notes)

PRIMARY AND SECONDARY DATA, AND META-ANALYSIS. QUANTITATIVE AND QUALITATIVE DATA

Psychologists conduct research in an attempt to find evidence for theories . Throughout the history of Psychology there has been an on-going debate in regard to what methods of investigation are appropriate to study the mind and behaviour. Whilst some favour a highly scientific, lab-based, experimental approach , others argue that these methods are inappropriate to the study of humans and support more in depth, less scientific, qualitative approaches such as interviews, case studies and observations.

  • Laboratory Experiments
  • Field Experiments
  • Natural & Quasi Experiments
  • Correlation Studies
  • Observational techniques
  • Self-report Questionnaires
  • Self-report Interviews
  • Case Studies
  • Content Analysis

Each of these methodologies uses different research techniques and has associated strengths and limitations .

Data (information produced from a research study) may be

  • Quantitative : numerical data that can be statistically analysed . This has the advantage of being more objective , quicker to gather and analyse, and can be presented in ways that are easily and quickly understandable. However, data can be superficial , lacking the depth and detail of participants' subjective experiences.
  • Qualitative: written , richly detailed, descriptive accounts of what is being studied. This allows participants to express themselves freely. However, these methods are time consuming , can be costly to collect, difficult to analyse and suffer from problems of subjectivity .

Data gathered by psychologists can be

  • Primary – directly collected by the psychologist themselves: e.g. questionnaires, interviews, observations, experiments.
  • Secondary – data collected by others: e.g. official statistics, the work of other psychologists, media products such as film or documentary.
  • Meta-analysis refers to when a psychologist draws together the findings and conclusions of many research studies into a single overall conclusion.

LABORATORY EXPERIMENTS ( AQA A-level Psychology revision notes)

Lab experiments are the most complex methodology in terms of their logic and design.

Any experiment begins with an aim .

The aim is a loose, general statement of what we intend to investigate: e.g. does alcohol affect driving performance?

Any experiment looks at the cause-effect relationship between 2 variables . A variable is any factor/thing that can be measured and changes. For example, intelligence, aggression, score on authoritarian personality scale, short-term memory capacity, etc. The two variables in the above example are alcohol and driving performance.

OPERATIONALISING VARIABLES

In psychological research we often want to find a way of expressing a variable numerically. This is referred to as operationalising a variable . Variables can be operationalised in many ways – for example,

  • Intelligence can be operationalised through an IQ test
  • Authoritarianism can be operationalised through a questionnaire
  • STM capacity can be operationalised through a task such as seeing how many digits a participant can remember at once.

INDEPENDENT & DEPENDENT VARIABLES

Of the 2 variables we are testing in an experiment, one is referred to as the Independent Variable (IV) and the other is referred to as the Dependent Variable (DV) .

In an experiment we test 2 conditions of the IV against the DV to see if there is a significant difference between how the 2 conditions of the IV affect the DV .

For example, we could set up an experiment to examine the cause-effect relationship between alcohol and driving performance . To do this we could recruit 100 volunteer participants , randomly split them into 2 groups of 50 , give the 1st group a measure of alcohol and then let them drive on a driving simulator which would produce a score of x/20 for driving performance. The 2nd group would be given no alcohol and allowed to drive on the simulator. Therefore, we would end up with 50 scores of x/20 for those who had driven after consuming alcohol, and 50 scores of x/20 for those who had driven and not consumed alcohol.

We could take the mean average score for each group and compare them. For example, we may find that those who had drunk alcohol scored a mean average of 10/20 whereas those who hadn't consumed alcohol scored an average of 16/20. What we have done in this experiment is to test 2 conditions of the IV (alcohol and no alcohol) against the DV (driving performance) to see if there is a significant difference between how the 2 conditions of the IV affect the DV . If we find a significant difference between how the 2 conditions of the IV affect the DV we have found evidence that there is a cause-effect relationship between alcohol consumption and poor driving performance .
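The logic of this comparison can be sketched in a few lines of Python. The simulator scores here are randomly generated around the hypothetical group means (10/20 and 16/20) purely to illustrate the design; they are not real data:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

def simulated_scores(mean, n=50):
    """Hypothetical driving-simulator scores out of 20, clamped to the 0-20 range."""
    return [min(20, max(0, round(random.gauss(mean, 2)))) for _ in range(n)]

alcohol_group = simulated_scores(10)     # condition 1 of the IV: alcohol
no_alcohol_group = simulated_scores(16)  # condition 2 of the IV: no alcohol

# Compare the mean DV (driving performance) across the two IV conditions.
print(statistics.mean(alcohol_group), statistics.mean(no_alcohol_group))
```

A real analysis would go on to test whether the difference between the two means is statistically significant rather than due to chance.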

[Diagram: independent variable (IV) and dependent variable (DV)]

From the aim of our experiment we formulate our hypotheses.

A hypothesis is an exact, precise, testable prediction of what we expect to find in an experiment.

  • The Experimental/Alternative Hypothesis : a statement predicting that we will find a difference between how the 2 conditions of the IV affect the DV: e.g. ‘There will be a significant difference in driving performance between participants who have and have not consumed alcohol’ .

The above hypothesis is a non-directional (or 2-tailed) hypothesis. This means that it does not make a prediction about the direction of results : i.e. it doesn't predict that 1 of the groups is going to do better or worse than the other, just that some kind of difference will occur.

However, if the experimenter strongly expects that results will go in a certain direction or previous research indicates this he may choose to apply a directional (or 1-tailed) hypothesis. This does make a prediction about the direction of results.

  • Experimental Hypothesis (1-tailed): ‘Participants who have consumed alcohol will show significantly poorer driving performance than participants who have not consumed alcohol’.

EXPERIMENTAL DESIGN

In any experiment we always have at least 2 groups of participants performing in at least 2 experimental conditions . There are several different ways in which we can allocate (put) participants to different conditions each with associated strengths and limitations .

1. Independent Groups Design . Participants are split into 2 groups , each group performing in 1 condition only .

The limitations of this design are

  • Participant Variables – the fact that individual differences between participants may affect the DV without us being aware of it and thus reduce the validity (accuracy) of our results. For example, we may find that participants in the alcohol condition are all excellent drivers with high alcohol tolerance, whilst participants in the no-alcohol condition are all poor drivers. Thus, the alcohol group may drive better and we might (falsely) conclude that alcohol improves driving performance. The problem of participant variables is reduced with a large sample and by randomly allocating participants to the 2 conditions.
  • It requires more participants than a repeated measures design .

The advantage of this design is that we will not encounter Order Effects (see below).

2. Repeated Measures Design . In this design all participants perform in the 1st condition and then perform in the 2nd condition . This allows us to directly compare participants' performance across the 2 conditions.

The limitations of this design are

  • Order Effects – when participants perform in condition 1 then condition 2, their performance in the 2nd condition may either improve due to practise or get worse due to boredom or tiredness . In an attempt to overcome the problem of order effects we can use counterbalancing . This involves ½ the participants performing in condition 1 first, then condition 2, while the other ½ of the participants perform in condition 2 first, then condition 1. (This is thought to balance out the problem of order effects).
  • They may also work out the aim of the study and exhibit demand characteristics (see below).

The advantage of this design is that there is no possibility of participant variables threatening the validity of the study.

3. Matched Pairs Design : This design overcomes the problem of order effects and participant variables . Before the study begins we need to find participants who we can match with each other in terms of relevant characteristics such as age, gender, IQ, etc. The study then runs as an independent groups design , however, because each participant is matched with another participant in the other condition participant variables are less of a problem. The disadvantage of this design is it may be costly, time-consuming and difficult to find participants who match precisely .
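Random allocation (for an independent groups design) and counterbalancing (for repeated measures) are both easy to sketch in code; the participant labels below are placeholders:

```python
import random

random.seed(1)  # reproducible illustration

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

# Independent groups: shuffle, then split randomly into two equal conditions.
random.shuffle(participants)
group_a, group_b = participants[:10], participants[10:]

# Repeated measures with counterbalancing: half perform condition 1 first,
# the other half perform condition 2 first, to balance out order effects.
orders = {p: ("condition 1", "condition 2") for p in group_a}
orders.update({p: ("condition 2", "condition 1") for p in group_b})
```

Shuffling before splitting removes any experimenter bias in who ends up in which group, which is the point of random allocation.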

It is highly important that experiments are well designed and run - otherwise findings may be inaccurate and lead us to draw false conclusions.

Validity generally refers to the truthfulness and accuracy of our findings.

We can distinguish between 2 types of validity.

  • INTERNAL/EXPERIMENTAL VALIDITY . This relates to whether we are really measuring what we think we are measuring. In any experiment we are trying to isolate the effect of the IV on the DV . Therefore, we need to ensure that no other unwanted, uncontrolled extraneous variables are affecting the DV without our knowledge. If an extraneous variable does affect our final results, we refer to as a confounding (i.e. confusing) variable.

[Diagram: extraneous and confounding variables]

  • EXTERNAL VALIDITY . This relates to whether findings can be generalised beyond the specific setting, sample and time of the study. It takes several forms:
  • Ecological Validity . This relates to the problem of whether studies conducted under highly controlled, artificial, lab situations can produce findings that can be generalised to everyday life, or whether behaviour shown by participants will be artificial . For example, in the drink-driving study, participants use a driving simulator which is not really similar to driving in a real car on a real road.
  • Population Validity . If we only use small or biased/unrepresentative samples of participants, we may not be able to generalise findings to human behaviour in general.
  • Temporal Validity . If studies were conducted a long time ago, it can be argued that their findings are not relevant to the present day. For example, Asch’s conformity study was conducted in 1950’s America and it has been argued that the climate of America at this time was particularly conformist. Social change since the 50’s has meant that people are now far more non-conformist and independent.

CONTROL OF EXTRANEOUS VARIABLES; RANDOM ALLOCATION, STANDARDISATION

Extraneous variables are variables which the experimenter has failed to eliminate or control which are affecting the DV without us being aware of it. This threatens the validity of the study and the accuracy of our findings.

Extraneous variables must be carefully and systematically controlled . When designing an experiment, researchers should consider the following areas where extraneous variables may arise:

  • Random allocation/randomisation of participants to experimental conditions. To avoid any bias on the behalf of the researcher, participants should always be divided into groups randomly.
  • Standardisation of instructions and procedures. Participants should be given exactly the same instructions as each other and go through exactly the same procedures as each other to avoid differences in these acting as extraneous variables.
  • Participant variables : participants’ age, intelligence, personality and so on should be controlled across the different groups taking part. For example, in the above experiment: gender, driving experience, alcohol tolerance, body mass, etc. Participants could also be pre-tested and put into a matched-pairs design.
  • Situational variables : the experimental setting and surrounding environment must be controlled. This may include the time of day, the temperature or noise effects.
  • Order effects : participants may improve or get bored performing in different conditions. This can be controlled by using independent groups, matched participants or counter-balancing.
  • Demand Characteristics or Investigator Effects (see below).
  • A control group is a group of participants who act as a baseline from which differences in the experimental group are measured. For example, we might compare improvements in mood scores for an experimental group who received therapy against a control group who received none.

TYPES OF VALIDITY AND IMPROVING VALIDITY

It is highly important that experiments are well designed and run - otherwise findings may be inaccurate and lead us to draw false conclusions. If studies are to be regarded as credible, they must be valid .

The following techniques are used to check for/achieve/ensure validity .

  • Face validity is the extent to which a test is subjectively viewed as being able to measure the concept it claims to measure. In other words, a test can be said to have face validity if it "looks like" it is going to measure what it is supposed to measure.
  • Content Validity involves independent experts being asked to assess the validity/accuracy/appropriateness of instruments/tests used to measure a variable: e.g. agreeing that a particular IQ test is a valid measure of intelligence.
  • Concurrent Validity involves comparing the validity of a new test/measure against an established test/measure whose validity is already known and trusted. For example, the results of a new form of IQ test could be tested against an old, established IQ test. If scores correlate between the 2 tests they are said to have concurrent validity.

THE RELATIONSHIP BETWEEN RESEARCHER AND PARTICIPANTS

The fact that an experiment is a social situation means that behaviour may be affected by the presence of others (experimenter and other participants) and the expectations that participants have. Thus, we may not be getting a valid picture of how people behave in the real world.

  • Demand Characteristics refers to the fact that participants realise they are in an experiment and are being observed and tested. They may, therefore, alter their behaviour either to behave in ways they think the experimenter wants them to behave in or according to how they think they should behave. Participants may try to work out the aim of experiment and modify their behaviour accordingly. They may also show ‘social desirability bias’ – giving responses they believe are correct or moral, rather than answering honestly.
  • Investigator Effects refers to the fact that the experimenter may consciously or unconsciously give hints or clues to research participants about how they want or expect them to behave.

RELIABILITY

Reliability of a study refers to whether, if we conduct the study again, it will produce similar results . Clearly, if a study produces wildly varying results each time it is carried out, either there is no real cause-effect relationship between the IV and the DV or the design of the study is invalid . Therefore, repeating a study confirms previous findings.

TYPES OF RELIABILITY

Inter-rater reliability

  • If a number of different observers are conducting the same observational study, we need to ensure the observers have inter-rater reliability . This means that observers are all defining behaviours and recording observations in the same way as each other . Thus, before the study begins observers should be trained through the use of, for example, a training video where they learn and are then tested on how to define and categorise behaviours in the same way as each other. We can assess inter-rater reliability by analysing the correlation between different observers’ scores for the same behaviour. This will produce a correlation coefficient (see Correlation Studies and Spearman’s rho test): e.g. +0.96 = a strong positive correlation (they are rating things in the same way as each other).
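As an illustration (not required by the specification), Spearman's rho between two observers' ratings can be computed with the formula 1 − 6Σd²/(n(n² − 1)); the ratings below are hypothetical:

```python
def spearman_rho(xs, ys):
    """Spearman's rho for two sets of ratings with no tied scores."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# Two observers' aggression ratings of the same 5 children (hypothetical)
a = [4, 9, 2, 7, 5]
b = [3, 10, 1, 8, 6]
print(spearman_rho(a, b))  # 1.0 -> perfect agreement between observers
```

A rho close to +1, as here, indicates the observers are rating behaviours in the same way as each other.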

Test-retest reliability

  • Reliability of a test (e.g. an IQ test) or questionnaire can be assessed by asking a participant to complete the test/questionnaire, then complete it again two weeks later and again a month later. If answers are similar over this period of time, then the test/questionnaire can be said to have reliability. We can assess test-retest reliability by analysing the correlation between the different test scores. This will produce a correlation coefficient (see Correlation Studies): e.g. +0.96 = a strong positive correlation (high similarity between the different test scores).

PILOT STUDIES

A pilot study is a small-scale version of the main study that is conducted in advance to ensure that:

  • The procedures of the study will run smoothly
  • That equipment/tests are functioning accurately
  • That participants understand instructions
  • That all extraneous variables are controlled

  STRENGTHS OF LABORATORY EXPERIMENTS

  • High degree of control : experimenters can control all variables in the experiment. The IV and DV can be precisely defined (operationalised) and measured to assess cause-effect relationships - for example, the amount of caffeine given (IV) and reaction time (DV). This leads to greater accuracy and objectivity.
  • Replication : other researchers can easily repeat/replicate the experiment and check results for reliability . This is much easier in a controlled laboratory situation as opposed to a field experiment conducted in the real world.

LIMITATIONS OF LABORATORY EXPERIMENTS

  • Lack of ecological validity.
  • Demand characteristics.

(Explain both these points in full according to above notes.)

FIELD EXPERIMENTS ( Psychology A-level revision)

A field experiment is carried out in the real world rather than under artificial laboratory conditions. Participants are exposed to a ‘set-up’ social situation to see how they respond. The ‘naïve’ participants are unaware they are taking part in an experiment.

STRENGTHS OF FIELD EXPERIMENTS

  • As the experiment is conducted in the real world levels of ecological validity are increased meaning that we can generalise behaviour to real-life behaviour.
  • As participants do not know they are involved in an experiment they will not show demand characteristics .

LIMITATIONS OF FIELD EXPERIMENTS

  • As the study is not conducted under tightly controlled laboratory conditions there is a greater chance that extraneous variables will influence the DV without the researcher being aware of this.
  • Field experiments often involve breaking ethical guidelines : e.g. failing to get participants consent, deceiving participants, failing to inform them of their right to withdraw or debriefing them, etc.

NATURAL & QUASI EXPERIMENTS ( A-level Psychology revision)

In a natural experiment the psychologist does not manipulate or ‘set up’ a situation to which participants are exposed; rather, they observe a change in the natural world (the IV) and assess whether this has an effect on another variable (the DV) . For example, observing whether the introduction of TV into remote communities (IV = (i) no TV, (ii) TV) has had an effect on children’s aggressiveness (DV). A quasi-experiment is the same as a normal experiment except that participants are not randomly allocated to conditions .

STRENGTHS OF NATURAL/QUASI EXPERIMENTS

  • As the experiment is conducted in the real world levels of ecological validity are increased.
  • In natural experiments, as participants do not know they are involved in an experiment they will not show demand characteristics.

LIMITATIONS OF NATURAL/QUASI EXPERIMENTS

  • Natural experiments may involve breaking ethical guidelines : e.g. failing to get participants consent to be observed, failing to inform them of their right to withdraw or debriefing them.

CORRELATION ANALYSIS ( AQA A-level Psychology revision)

 A correlation study involves measuring the relationship between 2 covariables : e.g. height and weight, stress and illness, ‘A’ Level point score and income aged 30, etc. (However, correlation studies only measure whether there is  some kind of relationship , not whether there is a cause-effect relationship .)

 The relationship may be either positive (as one co-variable increases, so does the other) or negative (as one co-variable increases, the other decreases).

[Figure: scattergrams illustrating positive and negative correlations]

To conduct a correlation study we need to operationalise the 2 co-variables and their relationship can then be plotted on a scattergram for each participant. The general pattern revealed should indicate whether the relationship is positive or negative and how weak or strong the relationship is. However, we can conduct statistical analysis of our data to produce a correlation coefficient : a number somewhere between -1 and +1 which will indicate the exact direction and strength of relationship between the 2 co-variables.

[Figure: correlation coefficient scale from −1 to +1]

HYPOTHESES FOR CORRELATION STUDIES

Whereas hypotheses for experiments predict there will be a ‘difference’ between how the 2 conditions of the IV affect the DV, hypotheses for correlation studies predict there will be a ‘relationship’ between 2 co-variables.

Hypotheses can be directional or non-directional depending on whether or not past research indicates whether we should expect to find a relationship (either positive or negative).

  • 2-Tailed Experimental Hypothesis : ‘There will be a significant correlation between stress and illness’.
  • 1-Tailed Experimental Hypothesis : ‘There will be a significant positive correlation between stress and illness’. (This could also be predicting a negative correlation.)

  STRENGTHS OF CORRELATION STUDIES

  • Correlation studies allow us to assess the precise direction and strength of relationship between 2 co-variables using correlation coefficients (see above).
  • Correlation studies are a valuable preliminary (initial) research tool . They allow us to identify relationships between variables that we may then decide to investigate in more detail through experimentation.

LIMITATIONS OF CORRELATION STUDIES

  • Correlation studies only tell us that there is some kind of relationship between 2 variables, they do not tell us about cause-effect relationships , and thus they are a weaker methodology than lab experiments.
  • We may sometimes find a correlation between 2 variables by pure chance , even when no real relationship exists between the variables – thus they may be misleading. For example, there is an almost perfect negative correlation between Nigerian iron exports and the UK birth rate between 1870 and 1920 even though these factors are completely unrelated.

OBSERVATIONAL TECHNIQUES ( AQA A-level Psychology revision guide)

Observations simply involve observing behaviour in the natural environment .

Observations may be

  • Overt : the psychologist’s presence is made known to the group being studied. This may lead to demand characteristics and participants behaving in unnatural ways.
  • Covert : the psychologist’s presence is hidden . Either they appear as a normal member of the public or their presence is concealed in some way (e.g. a CCTV camera). Although this overcomes the problem of demand characteristics , there are ethical issues to do with deception, lack of consent and invasion of privacy.
  • Participant : the psychologist joins the group being studied. This may be covert or overt.
  • Non-Participant : the psychologist remains outside the group being studied. This may be covert or overt.

Observational studies can be conducted in real life situations (naturalistic observations) or in laboratories (which provide more control – controlled observations ). Behaviours observed can be recorded in a qualitative form or can be counted/quantified .

For example, we may wish to conduct an observational study of gender differences in aggressive behaviours amongst 5-7-year olds. A tally chart can be constructed to record observations and behavioural classifications/categories .

[Figure: tally chart recording aggressive behaviours (e.g. punching, kicking) by gender]

This chart allows us to make statistical statements about behaviours: e.g. boys punch 4 times more than girls do.

One way of recording behavioural categories is event sampling (as in the example above – recording the number of times a particular event occurs); the other is time sampling – recording what is occurring at certain time intervals: e.g. every minute.
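The event-sampling tally described above can be sketched in code; the behaviour records here are hypothetical:

```python
from collections import Counter

# One entry per observed behaviour, recorded separately by gender (hypothetical)
boys = ["punch", "punch", "kick", "punch", "shout", "punch"]
girls = ["shout", "punch", "kick", "shout"]
boy_tally, girl_tally = Counter(boys), Counter(girls)
print(boy_tally["punch"], girl_tally["punch"])  # 4 1
```

Here boys punched four times as often as girls – the kind of statistical statement a tally chart supports.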

If a number of different observers are conducting the same observational study, we need to ensure the observers have inter-rater reliability (see section of Reliability above).

STRENGTHS OF OBSERVATIONAL STUDIES

  • During covert observations there are high levels of ecological validity and no demand characteristics . Participants are unaware that they are being observed and they are in a natural environment – thus we are observing behaviour as it naturally occurs.
  • With participant observation the psychologist can question participants and get a much more in depth insight into the behaviours, beliefs and motivations of the group being studied . Thus, a much deeper, richer, descriptive picture of behaviour is produced.

  LIMITATIONS OF OBSERVATIONAL STUDIES

  • With covert observations ethical issues arise concerning invasion of privacy, lack of consent, deception and lack of right to withdraw.
  • With overt observations participants may exhibit demand characteristics and act in socially-appropriate or otherwise unnatural ways.

SELF-REPORT METHODS: QUESTIONNAIRE SURVEYS & INTERVIEWS ( A-level Psychology resources)

 The term self-report simply means that the participant is reporting on their own perception/view of themselves – either using a questionnaire or an interview .

 For either technique:

  • Social desirability bias may be an issue in that if a participant knows their answers will be read/heard by someone else they may say what they think is socially acceptable/desirable rather than the truth. To combat this, questionnaires can be kept anonymous and confidential.
  • Self-report studies are also subjective in that the individual’s perception of themselves may be quite different from how others view them.

QUESTIONNAIRES

Questionnaires can be:

  • Closed ended .

E.g. I intend to vote for Joe Biden.

[Figure: example closed-ended questionnaire item with fixed response options]

Closed ended questions allow us to produce quantitative data: e.g. statistical statements such as 45% of participants agreed.

  • Open ended .

Produce lengthier answers – richly descriptive, qualitative data.

E.g. Explain why you intend to vote for Joe Biden.

__________________________________________________

When constructing questionnaires, we must try to ensure that the questions we ask are clear , concise , non-ambiguous , and easily understandable, and will be interpreted by all participants in the same way as each other.

We may also want to check the reliability of the questionnaire through test-retest reliability . Open-ended questionnaires can be thematically analysed (see later section on this).

STRENGTHS OF QUESTIONNAIRES

  • Closed-ended questionnaires are capable of providing large amounts of information from large amounts of people fairly cheaply and quickly .
  • Closed-ended questions can be statistically analysed to allow us to make statements about %’s of people who hold certain beliefs, etc.
  • Open-ended questions allow us to gain an in depth insight into participants’ personal opinions and the motives that underlie behaviours and beliefs .

  LIMITATIONS OF QUESTIONNAIRES

  • If socially sensitive questions are asked participants may give socially-appropriate responses. E.g. if a questionnaire asks whether someone holds racist beliefs it is unlikely they will admit to this to a researcher. This can be overcome by making questionnaires anonymous and confidential.
  • Open-ended questions can be difficult to interpret and analyse as participants may give lengthy answers. This makes it hard to understand broad patterns and trends in participants’ beliefs and behaviours.

INTERVIEWS

Interviews can be conducted with individuals or groups, either face-to-face or by telephone/internet. The respondent can describe their response in depth and detail (qualitative data) and say what they want to say rather than filling out pre-set answer choices (as in questionnaires). Interviews can be thematically analysed (see later section on this).

Interview questions can be:

  • Structured : a pre-set list of questions is asked.
  • Unstructured : the interview progresses as more of an on-going conversation between interviewer and interviewee.

STRENGTHS OF INTERVIEWS

  • Interviews provide richly detailed qualitative descriptions of participants’ subjective (personal) understanding of their behaviour, beliefs and motivations .
  • With open-ended questions , interviewees may be able to suggest and shed light on further areas of research and interest relating to the topic they are being interviewed about.
  • Structured interviews allow all participants to be asked the same questions , making general patterns in answers easier to analyse and keep the interview limited to the subject matter the interviewer wants to cover.

LIMITATIONS OF INTERVIEWS

  • If socially sensitive questions are asked participants may give socially-appropriate responses. E.g. if an interviewer asks whether someone holds racist beliefs it is unlikely they will admit to this.
  • Open-ended questions can be difficult to interpret and analyse as participants may give lengthy, personal answers. This makes it harder to analyse broad patterns and trends in participants’ beliefs and behaviours.

CASE STUDIES ( AQA A-level Psychology resources)

These are longitudinal studies (conducted over a long period of time) which focus in great detail on an individual or a small group . They are often used in the field of psychopathology and child development, and may include a variety of methods such as unstructured interviews and observations .

STRENGTHS OF CASE STUDIES

  • Case studies provide richly detailed descriptions of participants’ subjective (personal) understanding of their behaviour, beliefs and motivations .
  • Case Studies usually follow the progress and changes an individual goes through over time.

LIMITATIONS OF CASE STUDIES

  • Case studies are associated with problems of subjectivity and personal interpretation on the part of the psychologist, who may be biased in their viewpoint and interpretation of events and behaviour. For example, with the case study of Little Hans, Freud was accused of interpreting Hans’ behaviour to make it support his theory of the Oedipus Complex. Thus, because case studies do not use controlled scientific methods of experimentation, they are thought to lack scientific objectivity and proof.
  • For the above reason, and because they are carried out on only one individual (or a small group), case studies suffer from a lack of reliability and generalisability .

CONTENT ANALYSIS ( A-level Psychology notes)

This is a technique where researchers identify themes or behavioural categories and count how many times they occur (see later section on thematic analysis) . It is often used with written or visual material such as interviews, open-ended questionnaires, diaries, magazines, films, etc. A coding system of categories is developed whereby we count the number of times a particular piece of content arises.

For example, we might ask mothers with children who have just started primary school to keep a diary of their child’s response to this and then count how many times categories such as ‘child crying’, ‘child showing clingy behaviour’, ‘child showing anger to mother’ occur.

STRENGTHS OF CONTENT ANALYSIS

  • It allows qualitative data (writing or visual material) to be put into a quantitative form (counting behaviours) , so that statistical analysis can take place and data can be represented in tables and graphs.

LIMITATIONS OF CONTENT ANALYSIS

  • Constructing a coding system involves the risk of an investigator imposing their own meaning on the data. The investigator might choose coding categories they think are important and overlook categories which actually are important. Thus, there may be problems of subjectivity and personal bias .

THEMATIC ANALYSIS ( AQA A-level Psychology notes)

  Interviews, open-ended questionnaires and content analysis (all qualitative research techniques) can be analysed in terms of themes which occur in the content of responses given by participants.  We can count these themes to produce quantitative data . For example, if we interviewed adults who had experienced maternal deprivation as an infant we could analyse what major themes occurred in interviews (e.g. feelings of loss, desire for love, etc.) and count how many times these themes occurred.
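The counting of themes can be sketched as follows; the coding results are hypothetical:

```python
from collections import Counter

# Themes coded from 20 interviews (hypothetical coding results);
# each interview is listed once for every theme it contained
coded = ["loss"] * 13 + ["desire for love"] * 9 + ["anger"] * 5
counts = Counter(coded)
pct_loss = 100 * counts["loss"] / 20
print(pct_loss)  # 65.0 -> '65% of participants referred to feelings of loss'
```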

STRENGTHS OF THEMATIC ANALYSIS

  • We can turn complex qualitative data into quantitative data which can then be statistically analysed. For example, 65% of participants referred to feelings of loss in their interviews.

LIMITATIONS OF THEMATIC ANALYSIS

  • If a number of researchers are conducting thematic analysis on the same data they may interpret and count themes in a different way to each other which would lead to a lack of reliability. (This could be overcome through testing for inter-rater reliability.)

PARTICIPANTS & SAMPLING ( A-level Psychology revision notes)

It is important to select participants carefully when conducting research to ensure the study has population validity (see section on Validity above).

The term population refers to all the people within a certain category whom we wish to study: e.g. all schizophrenics, all 5-11 year olds, all pregnant women, etc. From this population we draw a smaller sample . Ideally, we want our sample to be fairly large and to be representative of the population as a whole (i.e. a good cross-section in terms of age, gender, ethnicity, etc.)

With a large , representative, random sample of participants we should be able to generalise (apply) our findings to the population as a whole (i.e. say that what is true of our sample is true of the population as a whole).

There a number of different sampling methods we can employ to select participants each with its own advantages and disadvantages.

  • Random sampling . The sample is randomly selected from the population: e.g. picking names at random out of a hat. Although this method is truly random it does not guarantee a representative sample .
  • Volunteer (self-selecting) sampling . Participants respond to an advert placed by the researcher: e.g. Milgram’s obedience study. This method is not random and doesn’t guarantee a representative sample as only certain types of people are likely to volunteer. However, volunteers are likely to make motivated and cooperative participants in research.
  • Opportunity sampling . Potential participants are approached by the researcher and asked whether they would be willing to take part in a study. This method is not random and doesn’t guarantee a representative sample as only certain types of people are likely to agree to take part. However, those who do are likely to make motivated and cooperative participants in research.
  • Systematic sampling . Taking every ‘nth’ person on a list: e.g. every 10th person on a school register. Not random or guaranteed to be representative .
  • Stratified sampling . The population is assessed for what proportion of particular characteristics it contains (e.g. age, gender, ethnicity, social class, etc.) and representative numbers of participants possessing these characteristics are randomly sampled to form the sample.

For example, a school population of 1000 students has 40% boys and 60% girls, and 50% of all students are below the age of 16 and 50% are 16 +.

If we wanted a stratified sample of 100 students we would select

  • 40 boys (40% of all students) and 60 girls (60% of all students)
  • 20 boys below the age of 16 (50% of the 40 boys)
  • 20 boys aged 16 or over (50% of the 40 boys)
  • 30 girls below the age of 16 (50% of the 60 girls)
  • 30 girls aged 16 or over (50% of the 60 girls)

[Figure: stratified sampling diagram]

Stratified sampling is truly representative and random.
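The allocation of sample places across strata in the example above amounts to simple proportional arithmetic, which can be sketched as:

```python
# Strata sizes in a school of 1000 students (from the example above)
population = {"boys under 16": 200, "boys 16+": 200,
              "girls under 16": 300, "girls 16+": 300}
sample_size = 100
total = sum(population.values())

# Each stratum receives sample places in proportion to its population share;
# in a real study, that many participants are then randomly selected per stratum
allocation = {stratum: sample_size * n // total for stratum, n in population.items()}
print(allocation)
```

This yields 20 boys and 30 girls in each age band, matching the worked example.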

ETHICAL ISSUES AND WAYS OF DEALING WITH THEM ( AQA A-level Psychology revision notes)

The British Psychological Society (BPS) publish ethical guidelines which psychologists are supposed to follow when planning and conducting research.

DECEPTION AND INFORMED CONSENT  

Participants should not be deceived (lied to) or involved in experiments unless they have agreed to take part. One way of dealing with this is to make sure that the participant is told precisely what will happen in the experiment before requesting that he or she give voluntary informed consent to take part. In reality, many experiments require some level of deception to avoid demand characteristics, hence it is often difficult to receive fully informed consent.

For example, Milgram got consent to take part in an experiment, but not informed consent as participants did not know the true aim of the study.

Dealing with Deception and Lack of Informed Consent

  • At the end of the experiment participants should be informed about the aims, findings and conclusions of the investigation and the researcher should take steps to reduce any distress that may have been caused by the experiment. This may be in the form of counselling . They should also be asked if they have any questions.
  • Presumptive Consent . The general public are surveyed and asked whether they believe that the breaking of ethical guidelines in a particular study is justified or not . This solution is often used in relation to experiments where participants cannot be asked for consent as the study requires them to remain naïve: e.g. field experiments such as Hofling.
  • Prior General Consent . In this proposed solution, people volunteer in advance to take part in research, serving as a pool of participants who may be called upon at some point in the future.
  • Retrospective consent involves asking the participants for consent after they have participated in the study.
  • In the case of young children or the mentally ill , parents or guardians can provide consent if they judge a procedure is in the client’s best interests: e.g. whether a child with ADD should be prescribed a drug. Approval could also be obtained after consulting professional colleagues: e.g. psychiatrists debating whether a depressed patient would benefit from a drug treatment.

RIGHT TO WITHDRAW

Participants should have the right to withdraw from an experiment at any time.

They should be informed of this right in the standard briefing instructions given to them before the experiment commences. They have the right to insist that any data they have provided during the experiment should be destroyed.

PROTECTION FROM PHYSICAL AND PSYCHOLOGICAL HARM

Participants should be exposed to no more risk than they would encounter in their normal lives. They should also be protected from any kind of psychological harm such as stress, embarrassment or damage to their self-esteem .  If participants are showing signs of distress they should be reminded of their right to withdraw .

CONFIDENTIALITY

Information about participants’ identities should not be revealed; this can be achieved by keeping participants’ data anonymous and confidential. Freud, for example, gave his clients pseudonyms: e.g. Little Hans.

FORMS & INSTRUCTIONS ( Psychology A-level revision)

CONSENT FORM

If asked to write a consent form, to get full marks you must provide sufficient information on both ethical and methodological issues for participants to make an informed decision. You must also write the form as it would be presented to participants.

The form should contain:

  • The purpose of the study
  • The length of time required of the participants
  • Details of any parts of the study that participants might find uncomfortable
  • Details about what will be required of them, and what they will have to do
  • A statement that there is no pressure to take part in the study
  • Right to withdraw (they can leave at any time, without giving a reason, keep any money they have been paid, and any data collected on them will be destroyed)
  • Reassurance about protection from harm
  • Reassurance about confidentiality of the data
  • They should feel free to ask the researcher any questions at any time
  • They will receive a full debrief at the end of the programme

STANDARDISED INSTRUCTION FORM FOR PARTICIPANTS

You need to use the details in the description of the study to write an appropriate set of instructions for participants. The instructions should be clear, concise, use formal language and be as straightforward as possible. They must:

  • Explain the procedures of this study relevant to participants.
  • Include a check of understanding of instructions.

(This is not a consent form so references to ethical issues are not necessary.)

PEER REVIEW ( A-level Psychology revision)

Peer review is the process by which psychological research papers are subjected to independent scrutiny (close examination) by other psychologists working in a similar field who consider the research in terms of its validity and significance . Such people are generally unpaid . Peer review happens before research is published.

Peer review is an important part of this process because it provides a way of checking the validity of the research, making a judgement about the credibility (believability) of the research, and assessing the quality and appropriateness of the design and methodology . It is a means of preventing incorrect data from entering the public domain. This is also important for ensuring that research funding is being spent appropriately.

Peers are also in a position to judge the importance or significance of the research in a wider context .  They can also assess how original the work is and whether it refers to relevant research by other psychologists.  They can then make a recommendation as to whether the research paper should be published in its original form, rejected or revised in some way.  This peer review process helps to ensure that any research paper published in a well-respected journal can be taken seriously by fellow researchers and the public. 

MAJOR FEATURES OF THE SCIENTIFIC METHOD ( AQA A-level Psychology revision)

Science is the unbiased observation and measurement of the natural world. It is the only tool humanity has developed for establishing factual truths about the world. Science allows us to establish the laws of the physical world and, from this knowledge, create technology .

Since the 1700s the scientific method has been developed, scrutinised and refined.

Major features of the scientific method are:

  • Empiricism – Information is gained through direct observation or experiment on physically observable and measurable phenomena rather than by reasoned argument, unfounded beliefs, faith or superstition.
  • Objectivity – Scientists should strive to be unbiased and non-interpretative in their observations and measurements. Prior expectations and preconceptions should be put aside. ‘Subjective’, by contrast, can be thought of as biased, personal and interpretative.
  • Replicability – One way to demonstrate the validity of any observation or experiment is to repeat it. If the outcome is the same, this confirms the truth of the original results, especially if the observations have been made by a different person. In order to achieve such replication it is important for scientists to record their methods carefully so that the same procedures can be followed in the future.
  • Control – Scientists seek to demonstrate causal relationships between variables. The experimental method is the only way to do this – where we vary one factor (the independent variable) and observe its effect on a dependent variable. In order for this to be a ‘fair test’ all other conditions must be kept the same, i.e. controlled . This allows us to establish the cause-effect relationships which underlie the laws of nature.
  • Theory construction – One aim of science is to record facts, but an additional aim is to use these facts to construct theories to help us understand and predict the natural world. A theory is a collection of general principles that explain observations and facts . Theories should be based on a sound body of valid and reliable scientific study.
  • Hypothesis Testing – A good theory must be able to generate testable hypotheses . Popper developed the concept of falsification : a theory can never be conclusively proven correct; instead, scientists attempt to disprove (falsify) it. A theory that survives repeated attempts at falsification is provisionally accepted, whereas a theory that cannot in principle be falsified is not scientific.

PARADIGMS AND PARADIGM SHIFTS ( AQA A-level Psychology revision guide)

A paradigm refers to the accepted and approved of ways of thinking, understanding, theorising and researching that exist and are shared within any one particular science. For example, biologists all tend to work within a paradigm where they accept basic concepts (evolution and Darwinian theory) as true and agree on how biology should be studied (scientific experimentation).

Psychology is often described as pre-paradigmatic as there is no complete, shared agreement between psychologists about how they should understand and explain human behaviour or what the best methods to study behaviour are. Psychology encompasses a number of conflicting approaches (e.g. behaviourism, biological, cognitive, psychodynamic, evolutionary, etc.) which disagree over what the major influences are on behaviour and what methods should be employed to study behaviour.

A paradigm shift occurs when there is a fundamental change in how scientists in a particular field understand and research subject matter due to evidence proving that the previous paradigm was inadequate/incorrect in some way. For example, in the field of physics, Newton’s laws were the dominant paradigm from the 18th to the early 20th century before the work of Einstein resulted in a paradigm shift in the way in which physicists understood the physical laws of the natural world.

CONVENTIONS FOR REPORTING PSYCHOLOGICAL INVESTIGATIONS ( A-level Psychology resources)

Psychological investigations are written up/reported in the same way by all psychologists.

Abstract – A summary of the study covering the aims/hypothesis, method/procedures, results and conclusions. Allows a reader to gain a quick overall understanding of a study.

Introduction/Aim/Hypotheses – What the researchers intend to investigate. This often includes a review of previous research (theories and studies), explaining why the researchers intend to conduct this particular study. The researchers may state their research predictions and/or a hypothesis or hypotheses.

Method – A detailed description of what the researchers did , providing enough information for replication of the study. Included in this section is:

  • Information about the participants (how they were selected , how many were used, and the experimental design )
  • The independent and dependent variables
  • The testing environment
  • Materials used
  • Procedures used to collect data
  • Any instructions given to participants before (the brief ) and afterwards (the debrief )

For full marks, the method section should be written clearly , succinctly and in such a way that the study would be replicable . It should be set out in a conventional reporting style, possibly under appropriate headings . The important factor here is whether the study could be replicated.

Results – This section contains statistical data including descriptive statistics (tables, averages and graphs) and inferential statistics (the use of statistical tests to determine how significant the results are).

If you are asked to outline and discuss the results of a study, mention the following points:

  • Write the results out clearly in words: e.g. ‘the mean number of objects remembered for participants listening to music was seven, but for those not listening to music was nine’.
  • Refer to the standard deviation or range and explain what they mean, e.g. ‘those listening to music had a higher standard deviation than those not listening to music, meaning that their scores varied more around the mean. So there were more individual differences in participants’ memories when listening to music.’
  • Say whether the results were significant and how you know this (refer to the OV, CV and level of significance), and what it means if they were.
  • Discuss issues of validity
  • Discuss issues of reliability

Discussion – The researchers offer explanations of the behaviours they observed and might also consider the implications of the results (how they can be applied to the real world) and make suggestions for future research. The researchers must consider their work critically, evaluating it in terms of validity, reliability, any shortcomings or criticisms, etc., and discuss how their research relates to the background research covered in their introduction.

THE IMPLICATIONS OF PSYCHOLOGICAL RESEARCH FOR THE ECONOMY ( AQA A-level Psychology resources)

Although it is difficult to quantify how much psychology contributes to the economy, Psychology university departments receive over £50 million in research grants annually.

Psychological research is used in diverse fields such as medicine, psychiatry, therapy, social work, childcare, advertising, marketing, business, forensics and crime, the army, education, etc.

Apart from direct benefits, Psychology indirectly contributes to the economy: for example, in the UK, 40% of people claiming incapacity benefits do so due to anxiety or depression. Psychotherapy may therefore assist the long-term unemployed in returning to work, which in turn increases tax revenue.

Psychology may also assist in finding solutions to wider social problems relating to crime, aggression, child abuse, etc. This could contribute to the economy by reducing levels of crime (theft and damage to properties), reducing prison population (paid for by the tax-payer) and increased taxation (people working rather than being in prison).

DESCRIPTIVE STATISTICS ( A-level Psychology notes)

Once a study has been conducted that produces quantitative data , patterns and trends can be simply analysed using some of the following techniques.

MEASURES OF CENTRAL TENDENCY

This refers to the 3 forms of average – Mean, Median and Mode – which tell us about the average within a set of data.

THE MEAN

For example, a set of scores is produced in a memory test:

5, 7, 8, 8, 10, 11, 14, 15, 45

Add all scores and divide by total number of scores:  123 divided by 9 = 13.67

  • An advantage of the mean is that it is the truest form of average because it uses all scores within a set of data.
  • A disadvantage is that the mean may be artificially inflated or deflated by extreme scores (outliers) in a set of data (in such a case we can say that the data is skewed ). In the above example the extreme score of 45 artificially inflates the mean to an unrealistically high level .
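The mean from the example above can be checked with Python’s standard `statistics` module (illustrative sketch using the memory-test scores from the example):

```python
from statistics import mean

# Memory-test scores from the example above
scores = [5, 7, 8, 8, 10, 11, 14, 15, 45]

print(round(mean(scores), 2))       # 13.67 — inflated by the outlier (45)
print(round(mean(scores[:-1]), 2))  # 9.75 — without the extreme score
```

Dropping the single extreme score pulls the mean down sharply, which is the skewing effect described above.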

THE MEDIAN

The median is the middle score in a set of ranked (put in order from low to high) data.

  • An advantage of the median is that it is not affected by extreme scores ( outliers ).
  • A disadvantage is that the median does not take account of every score in a set of data, so it is less sensitive than the mean.

E.g. 2, 4, 4, 5, 9, 15, 16   Median = 5

       2, 4, 5, 9, 15, 16, 17 Median = 9

(If there are 2 numbers in the middle, take the mean of those two.)

THE MODE

The mode is the most frequently occurring score in a set of data.

  • A disadvantage is that the mode can be altered a lot by small changes in a set of data. Also, a set of scores may have no mode value.

E.g.  2, 2, 4, 5, 9, 15, 16   Mode = 2

         2, 3, 4, 5, 9, 16, 16 Mode = 16
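Both measures are also available in Python’s `statistics` module; a short sketch using the examples above (`multimode` returns every joint-most-frequent score, which is useful when there is no single mode):

```python
from statistics import median, mode, multimode

print(median([2, 4, 4, 5, 9, 15, 16]))    # 5
print(median([2, 4, 5, 9]))               # 4.5 — mean of the 2 middle numbers

print(mode([2, 2, 4, 5, 9, 15, 16]))      # 2
print(multimode([2, 3, 4, 5, 9, 16, 16])) # [16]
print(multimode([1, 2, 3]))               # [1, 2, 3] — no single mode value
```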

CALCULATING %’s

To calculate how much 1 number is as a percentage of another number, divide the 1st number by the 2nd and multiply by 100.

For example, if Bob earns £26,060 a year and Nicola earns £137,540 then 

26,060/137,540 x 100 = 18.95

Therefore, Bob earns 18.95% of Nicola’s salary.
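The same calculation as a small helper function (`percentage_of` is a made-up name for illustration):

```python
def percentage_of(first, second):
    """Return what percentage `first` is of `second`."""
    return first / second * 100

# Bob's salary as a percentage of Nicola's
print(round(percentage_of(26_060, 137_540), 2))  # 18.95
```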

MEASURES OF DISPERSION

These tell us about the ‘spread’/‘dispersion’/’variability’ within a set of scores – the range and the standard deviation (SD).

THE RANGE

The range tells us about the overall spread between the lowest and highest scores in a set of data. It is calculated by taking the highest score and subtracting the lowest score.

THE STANDARD DEVIATION (SD)

The standard deviation tells us about the amount of variability from the mean .

For example, 2 classes of students with 2 different psychology teachers gained the following % scores in an end of year test.

GROUP 1: 18, 24, 31, 46, 55, 64, 79, 82, 90, 98.  Mean = 59

GROUP 2: 49, 52, 54, 57, 60, 62, 64, 66, 68, 68.  Mean = 60

Although the 2 groups have very similar mean scores, GROUP 1 have a much larger SD – there is a lot of variability from the mean whereas there is little variation from the mean in GROUP 2.

The SD is a stronger measure of dispersion than the range because:

  • The SD is a measure of dispersion that is less easily distorted by a single extreme score .
  • The SD takes account of the distance of all the scores from the mean.
  • The SD does not just measure the distance between the highest score and the lowest score.
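A sketch of the two-group example above using Python’s `statistics` module (`pstdev` is the population SD; `statistics.stdev` would give the sample version; the notes round the means to 59 and 60):

```python
from statistics import mean, pstdev

group_1 = [18, 24, 31, 46, 55, 64, 79, 82, 90, 98]
group_2 = [49, 52, 54, 57, 60, 62, 64, 66, 68, 68]

# Similar means, very different amounts of variability around the mean
print(round(mean(group_1), 1), round(pstdev(group_1), 1))  # 58.7 27.0
print(mean(group_2), round(pstdev(group_2), 1))            # 60 6.4
```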

DISPLAYS OF DATA ( AQA A-level Psychology notes)

Quantitative data can be plotted on a variety of graphs and charts.

GRAPHS are used to display continuous scores ( ordinal data : see Inferential Statistics below). For example, to record participants scores in a memory test (x/20).


HISTOGRAMS are graphs used to display scores grouped into intervals ( interval data : see Inferential Statistics below).


BAR CHARTS are not used to display scores - rather they display categories of information ( nominal data : see Inferential Statistics below). For example, number of participants in a particular category such as: favourite colour, borough of London lived in, subjects studied at A Level, etc.


Note: whereas histogram bars join because they display continuous sets of scores, bar chart bars are separate as they show separate categories of information. 

SCATTERGRAMS are used to display data from correlation studies (see previous notes on Correlation Studies).


DISTRIBUTIONS: NORMAL AND SKEWED DISTRIBUTIONS; CHARACTERISTICS OF NORMAL AND SKEWED DISTRIBUTIONS ( A-level Psychology revision notes)

Many characteristics of populations follow a normal distribution: e.g. height, weight, shoe size, etc.

IQ scores show a ‘normal’ distribution: i.e. most scores cluster around the mean average and as scores decrease or increase in either direction, fewer and fewer people possess these high or low scores. 68% of the population have an IQ between 85 and 115, while only 2% of the population have an IQ between 130 and 145.


However, distributions of characteristics in populations may be ‘skewed’ (distorted in one direction or the other). For example, salary in the UK is positively skewed : i.e. a small % of the population earn a very large salary. The IQs of children at a school for the gifted would be negatively skewed (i.e. few with a low IQ, lots with a high IQ).
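One quick way to see skew in a data set is to compare the mean with the median: in a positively skewed set the mean is dragged above the median by the few extreme high scores. A sketch with made-up salary figures:

```python
from statistics import mean, median

# Hypothetical salaries — positively skewed by one very high earner
salaries = [18_000, 21_000, 24_000, 26_000, 28_000, 31_000, 35_000, 250_000]

print(median(salaries))                   # 27000.0
print(mean(salaries))                     # 54125 — dragged up by the outlier
print(mean(salaries) > median(salaries))  # True, indicating positive skew
```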


INFERENTIAL STATISTICS ( AQA A-level Psychology revision notes)

Although quantitative data can be analysed in fairly simple ways using measures of central tendency and dispersion, psychologists and scientists employ more complex statistical techniques to analyse results.

Experiments and correlation studies involve assessing whether

  • there is a significant difference between how the 2 conditions of the IV affect the DV
  • there is a significant correlation between 2 co-variables.

The term ‘significant’ can be thought of as referring to whether there is a real, interesting and important difference or correlation between variables.

For example, in the drink-driving study we may find a mean average score of 16/20 for the sober group and 9/20 for the alcohol group – clearly this is an important ‘significant’ difference. On the other hand if the scores were 14/20 and 11/20 we would be less sure if there was a real ‘significant’ difference between the groups.

At a basic level, statistical analysis is a tool to assess whether we have or have not found a significant difference or correlation in a study.

There are a number of different statistical tests that can be used to analyse data. Which test is appropriate to use is decided by

  • Whether the study is an experiment or a correlation study
  • Whether the study’s design is an independent groups design or a repeated measures design
  • Whether data is at the ordinal , nominal, interval or ratio level (see below)


LEVELS OF DATA

Quantitative data comes in different forms/types.

  • Ordinal Data – scores which can be ranked from low to high: e.g. scores in an IQ test, memory test or personality questionnaire.
  • Nominal Data – data in the form of categories of information: for example, number of students studying particular subjects at college.

For the following examples decide whether data is ordinal or nominal.

Height, eye colour, borough of London lived in, stress score, favourite animal, skill at driving, reaction speed.

  • Interval Data – Ordinal data which has been separated into intervals: e.g. 0-5, 6-10, 11-15, 16-20, etc.

PROCEDURES FOR STATISTICS TESTS 

In the exam you are only required to know how to conduct inferential statistics using the Sign Test ; however, all statistics tests follow the basic principles below.

  • Data from an experiment or correlation study is processed through a number of statistical/mathematical formulae. This will eventually produce one single number which ‘describes’ the data as a whole – this is referred to as the Calculated/Observed Value (OV)
  • The OV is then compared to a Critical Value (CV). This is a number found by cross-referencing certain information on a table of statistical significance .
  • Different statistics tests have different rules
  • In some tests if the OV > CV then the statistics test shows that we have found a significant difference/correlation and can, therefore, accept the experimental hypothesis . If the OV < CV we reject the experimental hypothesis .
  • In other tests the reverse is true: e.g. if OV < CV we accept the experimental and reject the null.
  • In the exam you will be told which of the 2 rules above applies to the statistics test concerned.
  • At a basic level, therefore, statistical analysis of data is a way of establishing whether we have or haven’t found significant results.

LEVELS OF STATISTICAL SIGNIFICANCE AND PROBABILITY (P)

In theory, psychologists/scientists never say that their findings are 100% accurate and true – there is always a probability that although results seem to indicate particular findings they are incorrect and findings have occurred by chance .

The concept of level of significance is used to indicate to readers to what percentage probability we can say that a particular set of findings are accurate and true, and to what extent results may have simply occurred due to chance .

For most pieces of psychological research a significance level of P < 0.05 is used. This indicates a 95% probability that results are accurate and true and a <5% probability that results occurred due to chance.

Higher levels of significance can be set when the accuracy of research findings is more important: e.g. in trials of a new drug. Thus findings which are significant at P < 0.01 mean that researchers are 99% confident results are true and there is only a 1% probability they occurred due to chance.

≤ means ‘the same as or less than’.


Depending on the results of statistical analysis of data we may find that results are significant at any one of the above levels of probability. The more stringent the level of significance (i.e. the smaller the value of P), the more significant, and therefore stronger, our results are.

TYPE 1 & TYPE 2 ERRORS

Type 1 errors – calling something true when it’s false.

When a statistics test indicates that the experimental hypothesis should be accepted but, in fact, the results are due to chance/random factors. If the level of significance is set at 5%, there will always be a 1/20 chance of a type 1 error.

Clearly, the less stringent the level of significance (e.g. P < 0.1), the greater the chance that a type 1 error will occur (in this case 10%).
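The 1-in-20 figure can be checked with a small simulation. Under a true null hypothesis a correctly computed P value is equally likely to fall anywhere between 0 and 1, so rejecting whenever P < 0.05 produces false positives about 5% of the time (an illustrative sketch, not a real statistics test):

```python
import random

random.seed(1)  # fixed seed so the example is reproducible
alpha = 0.05
trials = 100_000

# Each random number stands in for the P value of one experiment
# in which the null hypothesis is actually true.
false_positives = sum(random.random() < alpha for _ in range(trials))

print(round(false_positives / trials, 3))  # close to 0.05 — about 1 in 20
```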

Type 2 errors – calling something false when it’s true.

When a statistics test indicates that the experimental hypothesis should be rejected, but in fact, the results are significant .

Clearly, the more stringent the level of significance (e.g. P < 0.005), the greater the chance that a type 2 error will occur.


THE SIGN TEST ( Psychology A-level revision)

The Sign Test is the one statistics test you need to know how to conduct in full.

Sign tests are used in experiments with a repeated measures design and nominal data.

Example and procedures

We could conduct a study into whether there is a difference in people’s memory for a list of 10 words they’ve been read (DV = memory score x/10) depending on whether they heard the words in quiet conditions (1st condition of the IV) or noisy conditions (2nd condition of the IV). We would use a 1-tailed hypothesis for this study as previous research indicates that noise disrupts memory ability.

Once the experiment is conducted, the data (results from participants) is put into a results table .

[Results table not reproduced here.]

Steps to calculate Sign Test

  • Subtract the score for the experimental condition from the control condition . If the result is negative add a – sign; if it’s positive add a + sign; if there’s no difference record a 0
  • Count the number of times the less frequent sign occurs . In the above example, the + sign is the least frequent. Call this value S . Therefore, S = 2
  • Count the total number of + and – signs . Call this value N . Therefore, N = 7
  • Decide whether a 1 or 2-tailed hypothesis was used . In the above example, we used a 1-tailed hypothesis.
  • Consult the table of statistical significance (below) for the Sign Test to find the critical value (CV).
  • Look down the left hand column marked N until you get to the total number of + and – signs. In the case described, N = 7 .
  • Cross reference N with the columns for either 1 or 2-tailed test (depending on whether your hypothesis is 1 or 2-tailed) and the Level of Significance value 0.05 (this is your Level of Significance – P < 0.05 ). In the case above this gives a value of 0 . Call this value the critical value (CV) . Therefore, CV = 0.
  • If S ≤ the critical value then we have found a significant difference between how the 2 conditions of the IV affected the DV: i.e. there is a significant difference in how noisy and quiet conditions affect memory ability. In the example above S (S = 2) is greater than the critical value (CV = 0), therefore we have not found a significant difference.
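The steps above can be sketched as a small Python function. The participant scores below are hypothetical (the worked results table isn’t reproduced here), and the critical value of 0 is the one the notes give for N = 7, a 1-tailed test and P < 0.05:

```python
def sign_test(condition_1, condition_2, critical_value):
    """Repeated-measures sign test, following the steps above.

    Returns (S, N, significant).
    """
    signs = []
    for a, b in zip(condition_1, condition_2):
        diff = a - b
        if diff > 0:
            signs.append('+')
        elif diff < 0:
            signs.append('-')
        # differences of 0 are dropped

    s = min(signs.count('+'), signs.count('-'))  # S = count of less frequent sign
    n = len(signs)                               # N = total number of + and - signs
    return s, n, s <= critical_value             # significant when S <= CV

# Hypothetical memory scores (x/10): quiet vs. noisy conditions
quiet = [8, 7, 9, 6, 8, 7, 9, 5]
noisy = [6, 8, 7, 6, 5, 6, 8, 4]

print(sign_test(quiet, noisy, critical_value=0))  # (1, 7, False) — not significant
```

Here one participant scored the same in both conditions, so that pair is dropped and N = 7; S = 1 is greater than CV = 0, so the difference is not significant.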

  Table of Critical Values for the Sign Test

[Table of critical values not reproduced here.]


Psychology A Level

Overview – Research Methods

Research methods are how psychologists and scientists come up with and test their theories. The A level psychology syllabus covers several different types of studies and experiments used in psychology as well as how these studies are conducted and reported:

  • Types of psychological studies (including experiments , observations , self-reporting , and case studies )
  • Scientific processes (including the features of a study , how findings are reported , and the features of science in general )
  • Data handling and analysis (including descriptive statistics and different ways of presenting data ) and inferential testing

Note: Unlike all other sections across the 3 exam papers, research methods is worth 48 marks instead of 24. Not only that, the other sections often include a few research methods questions, so this topic is the most important on the syllabus!


Example question: Design a matched pairs experiment the researchers could conduct to investigate differences in toy preferences between boys and girls. [12 marks]

Types of study

There are several different ways a psychologist can research the mind, including:

  • Experiments
  • Observation
  • Self-reporting
  • Case studies

Each of these methods has its strengths and weaknesses. Different methods may be better suited to different research studies.

Experimental method

The experimental method looks at how variables affect outcomes. A variable is anything that changes between two situations ( see below for the different types of variables ). For example, Bandura’s Bobo doll experiment looked at how changing the variable of the role model’s behaviour affected how the child played.

Experimental designs

Experiments can be designed in different ways, such as:

  • Independent groups: Participants are divided into two groups. One group does the experiment with variable 1, the other group does the experiment with variable 2. Results are compared.
  • Repeated measures: Participants are not divided into groups. Instead, all participants do the experiment with variable 1, then afterwards the same participants do the experiment with variable 2. Results are compared.

A matched pairs design is another form of independent groups design. Participants are selected. Then, the researchers recruit another group of participants one-by-one to match the characteristics of each member of the original group. This provides two groups that are relevantly similar and controls for differences between groups that might skew results. The experiment is then conducted as a normal independent groups design.

Types of experiment

Laboratory vs. field experiment.

Experiments are carried out in two different types of settings:

  • Laboratory experiments take place in a controlled, artificial environment. E.g. Bandura’s Bobo doll experiment or Asch’s conformity experiments
  • Field experiments take place in a natural, real-world environment. E.g. Bickman’s study of the effects of uniforms on obedience

Strengths of laboratory experiment over field experiment:

The controlled environment of a laboratory experiment minimises the risk of other variables outside the researchers’ control skewing the results of the trial, making it more clear what (if any) the causal effects of a variable are. Because the environment is tightly controlled, any changes in outcome must be a result of a change in the variable.

Weaknesses of laboratory experiment over field experiment:

However, the controlled nature of a laboratory experiment might reduce its ecological validity . Results obtained in an artificial environment might not translate to real-life. Further, participants may be influenced by demand characteristics : They know they are taking part in a test, and so behave how they think they’re expected to behave rather than how they would naturally behave.

Natural and quasi experiment

Natural and quasi experiments are where variables vary naturally. In other words, the researcher can’t or doesn’t manipulate the variables . There are two types:

  • Natural experiments: the IV changes naturally, without researcher intervention. E.g. studying the effect a change in drug laws (variable) has on addiction
  • Quasi-experiments: the IV is a pre-existing characteristic of the participants. E.g. studying differences between men (variable) and women (variable)

Observational method

The observational method looks at and examines behaviour. For example, Zimbardo’s prison study observed how participants behaved when given certain social roles.

Observational design

Behavioural categories.

An observational study will use behavioural categories to prioritise which behaviours are recorded and ensure the different observers are consistent in what they are looking for.

For example, a study of the effects of age and sex on stranger anxiety in infants might use behavioural categories (such as coded behaviours and a numerical anxiety rating) to organise observational data.

Rather than writing complete descriptions of behaviours, the behaviours can be coded into categories. For example, IS = interacted with stranger, and AS = avoided stranger. Researchers can also create numerical ratings to categorise behaviour, like the anxiety rating example above.

Inter-observer reliability : In order for observations to produce reliable findings, it is important that observers all code behaviour in the same way. For example, researchers would have to make it very clear to the observers what the difference between a ‘3’ on the anxiety scale above would be compared to a ‘7’. This inter-observer reliability avoids subjective interpretations of the different observers skewing the findings.

Event and time sampling

Because behaviour is constant and varied, it may not be possible to record every single behaviour during the observation period. So, in addition to categorising behaviour , study designers will also decide when to record a behaviour:

  • Event sampling: Counting how many times the participant behaves in a certain way.
  • Time sampling: Recording participant behaviour at regular time intervals. For example, making notes of the participant’s behaviour after every 1 minute has passed.

Note: Don’t get event and time sampling confused with participant sampling , which is how researchers select participants to study from a population.

Types of observation

Naturalistic vs. controlled.

Observations can be made in either a naturalistic or a controlled setting:

  • Naturalistic observation: behaviour is observed in the setting where it normally occurs. E.g. setting up cameras in an office or school to observe how people interact in those environments
  • Controlled observation: behaviour is observed under conditions arranged by the researcher. E.g. Ainsworth’s strange situation or Zimbardo’s prison study

Covert vs. overt

Observations can be either covert or overt :

  • Covert observation: participants do not know they are being observed. E.g. setting up hidden cameras in an office
  • Overt observation: participants know they are being observed. E.g. Zimbardo’s prison study

Participant vs. non-participant

In observational studies, the researcher/observer may or may not participate in the situation being observed:

  • Participant observation: the observer takes part in the situation being observed. E.g. in Zimbardo’s prison study , Zimbardo played the role of prison superintendent himself
  • Non-participant observation: the observer stays separate from the situation. E.g. in Bandura’s Bobo doll experiment and Ainsworth’s strange situation , the observers did not interact with the children being observed

Self-report method

Self-report methods get participants to provide information about themselves. Information can be obtained via questionnaires or interviews .

Types of self-report

Questionnaires.

A questionnaire is a standardised list of questions that all participants in a study answer. For example, Hazan and Shaver used questionnaires to collate self-reported data from participants in order to identify correlations between attachment as infants and romantic attachment as adults.

Questions in a questionnaire can be either open or closed :

  • Closed questions: participants choose from a fixed set of response options (e.g. ticking a category such as ‘>8 hours’)
  • Open questions: participants answer freely in their own words. E.g. “How did you feel when you thought you were administering a lethal shock?” or “What do you look for in a romantic partner and why?”

Strengths of questionnaires:

  • Quantifiable: Closed questions provide quantifiable data in a consistent format, which enables researchers to statistically analyse the information in an objective way.
  • Replicability: Because questionnaires are standardised (i.e. pre-set, all participants answer the same questions), studies involving them can be easily replicated . This means the results can be confirmed by other researchers, strengthening certainty in the findings.

Weaknesses of questionnaires:

  • Biased samples: Questionnaires handed out to people at random will select for participants who actually have the time and are willing to complete the questionnaire. As such, the responses may be biased towards those of people who e.g. have a lot of spare time.
  • Dishonest answers: Participants may lie in their responses – particularly if the true answer is something they are embarrassed or ashamed of (e.g. on controversial topics or taboo topics like sex)
  • Misunderstanding/differences in interpretation: Different participants may interpret the same question differently. For example, a question such as “are you religious?” could be interpreted by one person to mean they go to church every Sunday and pray daily, whereas another person may interpret religious to mean a vague belief in the supernatural.
  • Less detail: Interviews may be better suited for detailed information – especially on sensitive topics – than questionnaires. For example, participants are unlikely to write detailed descriptions of private experiences in a questionnaire handed to them on the street.

Interviews.

In an interview , participants are asked questions in person. For example, Bowlby interviewed 44 children when studying the effects of maternal deprivation.

Interviews can be either structured or unstructured :

  • Structured interview: Questions are standardised and pre-set. The interviewer asks all participants the same questions in the same order.
  • Unstructured interview: The interviewer discusses a topic with the participant in a less structured and more spontaneous way, pursuing avenues of discussion as they come up.

Interviews can also be a cross between the two – these are called semi-structured interviews .

Strengths of interviews:

  • More detail: Interviews – particularly unstructured interviews conducted by a skilled interviewer – enable researchers to delve deeper into topics of interest, for example by asking follow-up questions. Further, the personal touch of an interviewer may make participants more open to discussing personal or sensitive issues.
  • Replicability: Structured interviews are easily replicated because participants are all asked the same pre-set list of questions. This replicability means the results can be confirmed by other researchers, strengthening certainty in the findings.

Weaknesses of interviews:

  • Lack of quantifiable data: Although unstructured interviews enable researchers to delve deeper into interesting topics, this lack of structure may produce difficulties in comparing data between participants. For example, one interview may go down one avenue of discussion and another interview down a different avenue. This qualitative data may make objective or statistical analysis difficult.
  • Interviewer effects : The interviewer’s appearance or character may bias the participant’s answers. For example, a female participant may be less comfortable answering questions on sex asked by a male interviewer and thus give different answers than if she were asked by a female interviewer.

Note: This topic is A level only, you don’t need to learn about case studies if you are taking the AS exam only.

Case studies are detailed investigations into an individual, a group of people, or an event. For example, the biopsychology page describes a case study of a young boy who had the left hemisphere of his brain removed and the effects this had on his language skills.

In a case study, researchers use many of the methods described above – observation , questionnaires , interviews – to gather data on a subject. However, because case studies are studies of a single subject, the data they provide is primarily qualitative rather than quantitative . This data is then used to build a case history of the subject. Researchers then interpret this case history to draw their conclusions.

Types of case study

Typical vs. unusual cases.

Most case studies focus on unusual individuals, groups, and events.

Longitudinal

Many case studies are longitudinal . This means they take place over an extended time period, with researchers checking in with the subject at various intervals. For example, the case study of the boy who had his left hemisphere removed collected data on the boy’s language skills at ages 2.5, 4, and 14 to see how he progressed.

Strengths of case studies:

  • Provides detailed qualitative data: Rather than focusing on one or two aspects of behaviour at a single point in time (e.g. in an experiment ), case studies produce detailed qualitative data.
  • Allows for investigation into issues that may be impractical or unethical to study otherwise. For example, it would be unethical to remove half a toddler’s brain just to experiment , but if such a procedure is medically necessary then researchers can use this opportunity to learn more about the brain.

Weaknesses of case studies:

  • Lack of scientific rigour: Because case studies are often single examples that cannot be replicated , the results may not be valid when applied to the general population.
  • Researcher bias: The small sample size of case studies also means researchers need to apply their own subjective interpretation when drawing conclusions from them. As such, these conclusions may be skewed by the researcher’s own bias and not be valid when applied more generally. This criticism is often directed at Freud’s psychoanalytic theory because it draws heavily on isolated case studies of individuals.

Scientific processes

This section looks at how science works more generally – in particular how scientific studies are organised and reported . It also covers ways of evaluating a scientific study.

Study features and design

Studies will usually have an aim . The aim of a study is a description of what the researchers are investigating and why . For example, “to investigate the effect of SSRIs on symptoms of depression” or “to understand the effect uniforms have on obedience to authority”.

Studies seek to test a hypothesis . The experimental/alternate hypothesis of a study is a testable prediction of what the researchers expect to happen, whereas the null hypothesis predicts no effect.

  • Experimental/alternate hypothesis: E.g. “That SSRIs will reduce symptoms of depression” or “subjects are more likely to comply when orders are issued by someone wearing a uniform”.
  • Null hypothesis: E.g. “That SSRIs have no effect on symptoms of depression” or “subject compliance will be the same when orders are issued by someone wearing a uniform as when orders are issued by someone not wearing a uniform”.

Either the experimental/alternate hypothesis or the null hypothesis will be supported by the results of the experiment.

It’s often not possible or practical to conduct research on everyone your study is supposed to apply to. So, researchers use sampling to select participants for their study.

  • Target population e.g. all humans, all women, all men, all children, etc.
  • Sample e.g. 10,000 humans, 200 women from the USA, children at a certain school

For example, the target population (i.e. who the results apply to) of Asch’s conformity experiments is all humans – but Asch didn’t conduct the experiment on that many people! Instead, Asch recruited 123 males and generalised the findings from this sample to the rest of the population.

Researchers choose from different sampling techniques – each has strengths and weaknesses.

Sampling techniques

Random sampling.

The random sampling method involves selecting participants from a target population at random – such as by drawing names from a hat or using a computer program to select them. This method means each member of the population has an equal chance of being selected and thus is not subject to any bias.
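As a sketch, this kind of random selection can be simulated with Python's standard `random` module (the population of named people here is made up for illustration):

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# A hypothetical target population of 1,000 named people
population = [f"person_{i}" for i in range(1000)]

# Draw 50 participants at random, without replacement --
# every member has an equal chance of being selected
sample = random.sample(population, 50)

print(len(sample))  # 50 unique participants
```

In practice, of course, the hard part is compiling the full population list in the first place, as the weakness below notes.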

Strengths of random sampling:

  • Unbiased: Selecting participants by random chance reduces the likelihood that researcher bias will skew the results of the study.
  • Representative: If participants are selected at random – particularly if the sample size is large – it is likely that the sample will be representative of the population as a whole. For example, if the ratio of men:women in a population is 50:50 and participants are selected at random, it is likely that the sample will also have a ratio of men to women that is 50:50.

Weaknesses of random sampling:

  • Impractical: It’s often impractical/impossible to include all members of a target population for selection. For example, it wouldn’t be feasible for a study on women to include the name of every woman on the planet for selection. But even if this was done, the randomly selected women may not agree to take part in the study anyway.

Systematic sampling

The systematic sampling method involves selecting participants from a target population at pre-set intervals. For example, selecting every 50th person from a list, or every 7th, or whatever the interval is.
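A minimal sketch of this interval-based selection in Python (the numbered population list and the interval of 10 are hypothetical):

```python
def systematic_sample(population, interval, start=0):
    """Select every `interval`-th member of the list, beginning at `start`."""
    return population[start::interval]

# A numbered list standing in for the target population
people = list(range(1, 101))

# Every 10th person: 1, 11, 21, ..., 91
sample = systematic_sample(people, 10)
```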

Strengths of systematic sampling:

  • Unbiased and representative: Like random sampling , selecting participants according to a numerical interval provides an objective means of selecting participants that prevents researcher bias from skewing the sample. Further, because the sampling method is independent of any particular characteristic (besides the arbitrary characteristic of the participant’s order in the list) this sample is likely to be representative of the population as a whole.

Weaknesses of systematic sampling:

  • Unexpected bias: Some characteristics could occur more or less frequently at certain intervals, making a sample that is selected based on that interval biased. For example, houses tend to have even numbers on one side of a road and odd numbers on the other. If one side of the road is more expensive than the other and you select every 4th house, say, then you will only select even numbers from one side of the road – and this sample may not be representative of the road as a whole.

Stratified sampling

The stratified sampling method involves dividing the population into relevant groups for study, working out what percentage of the population is in each group, and then randomly sampling the population according to these percentages.

For example, let’s say 20% of the population is aged 0-18, and 50% of the population is aged 19-65, and 30% of the population is aged >65. A stratified sample of 100 participants would randomly select 20x 0-18 year olds, 50x 19-65 year olds, and 30x people over 65.
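The worked example above can be sketched in Python (the age-group names come from the example; the lists of group members are made up for illustration):

```python
import random

random.seed(0)

# Population breakdown from the worked example above
proportions = {"0-18": 0.20, "19-65": 0.50, ">65": 0.30}
sample_size = 100

# How many participants to draw from each age group
plan = {group: round(sample_size * p) for group, p in proportions.items()}
# plan == {"0-18": 20, "19-65": 50, ">65": 30}

# Within each group, participants are then selected at random
group_members = {g: [f"{g}_{i}" for i in range(500)] for g in proportions}
sample = {g: random.sample(group_members[g], n) for g, n in plan.items()}
```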

Strengths of stratified sampling:

  • Representative: The stratification is deliberately designed to yield a sample that is representative of the population as a whole. You won’t get people with certain characteristics being over- or under-represented within the sample.
  • Unbiased: Because participants within each group are selected randomly , researcher bias is unable to skew who is included in the study.

Weaknesses of stratified sampling:

  • Requires knowledge of population breakdown: Researchers need to accurately gauge what percentage of the population falls into what group. If the researchers get these percentages wrong, the sample will be biased and some groups will be over- or under-represented.

Opportunity and volunteer sampling

The opportunity sampling method involves selecting whoever happens to be available at the time, while the volunteer sampling method involves participants putting themselves forward in response to an invitation:

  • E.g. Approaching people in the street and asking them to complete a questionnaire ( opportunity sampling).
  • E.g. Placing an advert online inviting people to complete a questionnaire ( volunteer sampling).

Strengths of opportunity and volunteer sampling:

  • Quick and easy: Approaching participants ( opportunity sampling) or inviting participants ( volunteer sampling) is quick and straightforward. You don’t have to spend time compiling details of the target population (like in e.g. random or systematic sampling ), nor do you have to spend time dividing participants according to relevant categories (like in stratified sampling ).
  • May be the only option: With natural experiments – where a variable changes as a result of something outside the researchers’ control – opportunity sampling may be the only viable sampling method. For example, researchers couldn’t randomly sample 10 cities from all the cities in the world and change the drug laws in those cities to see the effects – they don’t have that kind of power. However, if a city is naturally changing its drug laws anyway, researchers could use opportunity sampling to study that city for research.

Weaknesses of opportunity and volunteer sampling:

  • Unrepresentative: The pool of participants will likely be biased towards certain kinds of people. For example, if you conduct opportunity sampling on a weekday at 10am, this sample will likely exclude people who are at work. Similarly, volunteer sampling is likely to exclude people who are too busy to take part in the study.

Independent vs. dependent variables

If the study involves an experiment , the researchers will alter an independent variable to measure its effects on a dependent variable :

  • E.g. In Bickman’s study of the effects of uniforms on obedience , the independent variable was the uniform of the person giving orders.
  • E.g. In Bickman’s study of the effects of uniforms on obedience , the dependent variable was how many people followed the orders.

Extraneous and confounding variables

In addition to the variables actually being investigated ( independent and dependent ), there may be additional (unwanted) variables in the experiment. These additional variables are called extraneous variables .

Researchers must control for extraneous variables to prevent them from skewing the results and leading to false conclusions. When extraneous variables are not properly controlled for they are known as confounding variables .

For example, if you’re studying the effect of caffeine on reaction times, it might make sense to conduct all experiments at the same time of day to prevent this extraneous variable from confounding the results. Reaction times change throughout the day and so if you test one group of subjects at 3pm and another group right before they go to bed, you may falsely conclude that the second group had slower reaction times.

Operationalisation of variables

Operationalisation of variables is where researchers clearly and measurably define the variables in their study.

For example, an experiment on the effects of sleep ( independent variable ) on anxiety ( dependent variable ) would need to clearly operationalise each variable. Sleep could be defined by number of hours spent in bed, but anxiety is a bit more abstract and so researchers would need to operationalise (i.e. define) anxiety such that it can be quantified in a measurable and objective way.

If variables are not properly operationalised, the experiment cannot be properly replicated , experimenters’ subjective interpretations may skew results, and the findings may not be valid .

Pilot studies

A pilot study is basically a practice run of the proposed research project. Researchers will use a small number of participants and run through the procedure with them. The purpose of this is to identify any problems or areas for improvement in the study design before conducting the research in full. A pilot study may also give an early indication of whether the results will be statistically significant .

For example, if a task is too easy for participants, or it’s too obvious what the real purpose of an experiment is, or questions in a questionnaire are ambiguous, then the results may not be valid . Conducting a pilot study first may save time and money as it enables researchers to identify and address such issues before conducting the full study on thousands of participants.

Study reporting

Features of a psychological report.

The report of a psychological study (research paper) typically contains the following sections in the following order:

  • Title: A short and clear description of the research.
  • Abstract: A summary of the research. This typically includes the aim and hypothesis , methods, results, and conclusion.
  • Introduction: Funnel technique: Broad overview of the context (e.g. current theories, previous studies, etc.) before focusing in on this particular study, why it was conducted, its aims and hypothesis .
  • Study design: This will explain what method was used (e.g. experiment or observation ), how the study was designed (e.g. independent groups or repeated measures ), and identification and operationalisation of variables .
  • Participants: A description of the target population to be studied, the sampling method , how many participants were included.
  • Equipment used: A description of any special equipment used in the study and how it was used.
  • Standardised procedure: A detailed step-by-step description of how the study was conducted. This allows for the study to be replicated by other researchers.
  • Controls : An explanation of how extraneous variables were controlled for so as to generate accurate results.
  • Results: A presentation of the key findings from the data collected. This is typically written summaries of the raw data ( descriptive statistics ), which may also be presented in tables , charts, graphs , etc. The raw data itself is typically included in appendices.
  • Discussion: An explanation of what the results mean and how they relate to the experimental hypothesis (supporting or contradicting it), any issues with how results were generated, how the results fit with other research, and suggestions for future research.
  • Conclusion: A short summary of the key findings from the study.
  • References: A list of the sources cited in the report, e.g.:
  • Book: Milgram, S., 2010. Obedience to Authority . 1st ed. Pinter & Martin.
  • Journal article: Bandura, A., Ross, D. and Ross, S., 1961. Transmission of Aggression through Imitation of Aggressive Models . The Journal of Abnormal and Social Psychology, 63(3), pp.575-582.
  • Appendices: This is where you put any supporting materials that are too detailed or long to include in the main report. For example, the raw data collected from a study, or the complete list of questions in a questionnaire .

Peer review

Peer review is a way of assessing the scientific credibility of a research paper before it is published in a scientific journal. The idea with peer review is to prevent false ideas and bad research from being accepted as fact.

It typically works as follows: The researchers submit their paper to the journal they want it to be published in, and the editor of that journal sends the paper to expert reviewers (i.e. psychologists who are experts in that area – the researchers’ ‘peers’) who evaluate the paper’s scientific validity. The reviewers may accept the paper as it is, accept it with a few changes, reject it and suggest revisions and resubmission at a later date, or reject it completely.

There are several different methods of peer review:

  • Open review: The researchers and the reviewers are known to each other.
  • Single-blind: The researchers do not know the names of the reviewers. This prevents the researchers from being able to influence the reviewer. This is the most common form of peer review.
  • Double-blind: The researchers do not know the names of the reviewers, and the reviewers do not know the names of the researchers. This additionally prevents the reviewer’s bias towards the researcher from influencing their decision whether to accept their paper or not.

Criticisms of peer review:

  • Bias: There are several ways peer review can be subject to bias. For example, academic research (particularly in niche areas) takes place among a fairly small circle of people who know each other and so these relationships may affect publication decisions. Further, many academics are funded by organisations and companies that may prefer certain ideas to be accepted as scientifically legitimate, and so this funding may produce conflicts of interest.
  • Doesn’t always prevent fraudulent/bad research from being published: There are many examples of fraudulent research passing peer review and being published.
  • Prevents progress of new ideas: Reviewers of papers are typically older and established academics who have made their careers within the current scientific paradigm. As such, they may reject new or controversial ideas simply because they go against the current paradigm rather than because they are unscientific.
  • Plagiarism: In single-blind and double-blind peer reviews, the reviewer may use their anonymity to reject or delay a paper’s publication and steal the good ideas for themselves.
  • Slow: Peer review can mean it takes months or even years between the researcher submitting a paper and its publication.

Study evaluation

In psychological studies, ethical issues are questions of what is morally right and wrong. An ethically-conducted study will protect the health and safety of the participants involved and uphold their dignity, privacy, and rights.

To provide guidance on this, the British Psychological Society has published a code of human research ethics :

  • Participants are told the project’s aims , the data being collected, and any risks associated with participation.
  • Participants have the right to withdraw or modify their consent at any time.
  • Researchers can use incentives (e.g. money) to encourage participation, but these incentives can’t be so big that they would compromise a participant’s freedom of choice.
  • Researchers must consider the participant’s ability to consent (e.g. age, mental ability, etc.)
  • Deception: If a study involves deceiving participants, informed consent can be handled in one of the following ways:
  • Prior (general) consent: Informing participants that they will be deceived without telling them the nature of the deception. However, this may affect their behaviour as they try to guess the real nature of the study.
  • Retrospective consent: Informing participants that they were deceived after the study is completed and asking for their consent. The problem with this is that if they don’t consent then it’s too late.
  • Presumptive consent: Asking people who aren’t participating in the study if they would be willing to participate in the study. If these people would be willing to give consent, then it may be reasonable to assume that those taking part in the study would also give consent.
  • Confidentiality: Personal data obtained about participants should not be disclosed (unless the participant agreed to this in advance). Any data that is published will not be publicly identifiable as the participant’s.
  • Debriefing: Once data gathering is complete, researchers must explain all relevant details of the study to participants – especially if deception was involved. If a study might have harmed the individual (e.g. its purpose was to induce a negative mood), it is ethical for the debrief to address this harm (e.g. by inducing a happy mood) so that the participant does not leave the study in a worse state than when they entered.

Reliability

Study results are reliable if the same results can be consistently replicated under the same circumstances. If results are inconsistent then the study is unreliable.

Note: Just because a study is reliable, its results are not automatically valid . A broken tape measure may reliably (i.e. consistently) record a person’s height as 200m, but that doesn’t mean this measurement is accurate.

There are several ways researchers can assess a study’s reliability:

Test-retest

Test-retest is when you give the same test to the same person on two different occasions. If the results are the same or similar both times, this suggests they are reliable.

For example, if your study used scales to measure participants’ weight, you would expect the scales to record the same (or a very similar) weight for the same person in the morning as in the evening. If the scales said the person weighed 100kg more later that same day, the scales (and therefore the results of the study) would be unreliable.

Inter-observer

Inter-observer reliability is a way to test the reliability of observational studies .

For example, if your study required observers to assess participants’ anxiety levels, you would expect different observers to grade the same behaviour in the same way. If one observer rated a participant’s behaviour a 3 for anxiety, and another observer rated the exact same behaviour an 8, the results would be unreliable.

Inter-observer reliability can be assessed mathematically by looking for correlation between observers’ scores. Inter-observer reliability can be improved by setting clearly defined behavioural categories .
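The correlation check described above can be sketched in Python (the observer ratings below are invented for illustration, and the `pearson` helper is just one way of computing the coefficient):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two observers rating the same five behaviours for anxiety (1-10)
observer_a = [3, 5, 7, 2, 8]
observer_b = [4, 5, 6, 2, 9]

r = pearson(observer_a, observer_b)  # close to +1 -> high agreement
```

A coefficient near +1 suggests the observers are grading behaviour consistently; a low coefficient suggests the behavioural categories need tightening.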

Validity

Study results are valid if they accurately measure what they are supposed to. There are several ways researchers can assess a study’s validity:

  • Concurrent validity: E.g. let’s say you come up with a new test to measure participants’ intelligence levels. If participants scoring highly on your test also scored highly on a standardised IQ test and vice versa, that would suggest your test has concurrent validity because participants’ scores are correlated with a known accurate test.
  • Face validity: E.g. a study that measures participants’ intelligence levels by asking them when their birthday is would not have face validity. Getting participants to complete a standardised IQ test would have greater face validity.
  • Ecological validity: E.g. let’s say your study was supposed to measure aggression levels in response to someone annoying. If the study was conducted in a lab and the participant knew they were taking part in a study, the results probably wouldn’t have much ecological validity because of the unrealistic environment.
  • Temporal validity: E.g. a study conducted in 1920 that measured participants’ attitudes towards social issues may have low temporal validity because societal attitudes have changed since then.

Control of extraneous variables

There are several different types of extraneous variables that can reduce the validity of a study. A well-conducted psychological study will control for these extraneous variables so that they do not skew the results.

Demand characteristics

Demand characteristics are extraneous variables where the demands of a study make participants behave in ways they wouldn’t behave outside of the study. This reduces the study’s ecological validity .

For example, if a participant guesses the purpose of an experiment they are taking part in, they may try to please the researcher by behaving in the ‘right’ way rather than the way they would naturally. Alternatively, the participant might rebel against the study and deliberately try to sabotage it (e.g. by deliberately giving wrong answers).

In some study designs, researchers can control for demand characteristics using single-blind methods. For example, a drug trial could give half the participants the actual drug and the other half a placebo but not tell participants which treatment they received. This way, both groups will have equal demand characteristics and so any differences between them should be down to the drug itself.

Investigator effects

Investigator effects are another extraneous variable where the characteristics of the researcher affect the participant’s behaviour. Again, this reduces the study’s ecological validity .

Many characteristics – e.g. the researcher’s age, gender, accent, what they’re wearing – could potentially influence the participant’s responses. For example, in an interview about sex, females may feel less comfortable answering questions asked by a male interviewer and thus give different answers than if they were asked by a female. The researcher’s biases may also come across in their body language or tone of voice, affecting the participant’s responses.

In some study designs, researchers can control for investigator effects using double-blind methods. In a double-blind drug trial, for example, neither the participants nor the researchers know which participants get the actual drug and which get the placebo. This way, the researcher is unable to give any clues (consciously or unconsciously) to participants that would affect their behaviour.

Participant variables

Participant variables are differences between participants. These can be controlled for by random allocation .

For example, in an experiment on the effect of caffeine on reaction times, participants would be randomly allocated into either the caffeine group or the non-caffeine group. A non-random allocation method, such as allocating caffeine to men and placebo to women, could mean variables in the allocation method (in this case gender) skew the results. When participants are randomly allocated, any extraneous variables (e.g. gender in this case) will be allocated evenly between each group and so not skew the results of one group more than the other.

Situational variables

Situational variables are the environment the experiment is conducted in. These can be controlled for by standardisation .

For example, all the tests of caffeine on reaction times would be conducted in the same room, at the same time of day, using the same equipment, and so on to prevent these features of the environment from skewing the results.

In a repeated measures experiment, researchers may use counterbalancing to control for the order in which tasks are completed.

For example, half of participants would do task A followed by task B, and the other half would do task B followed by task A.

Implications of psychological research for the economy

Psychological research often has practical applications in real life. The following are some examples of how psychological findings may affect the economy:

  • Attachment : Bowlby’s maternal deprivation hypothesis suggests that periods of extended separation between mother and child before age 3 are harmful to the child’s psychological development. And if mothers stay at home during this period, they can’t go out to work. However, some more recent research challenges Bowlby’s conclusions, suggesting that substitutes (e.g. the father , or nursery care) can care for the child, allowing the mother to go back to work sooner and remain economically active.
  • Depression : Psychological research has found effective therapies for treating depression, such as cognitive behavioural therapy and SSRIs. The benefits of such therapies – if they are effective – are likely to outweigh the costs because they enable the person to return to work and pay taxes, as well avoiding long-term costs to the health service.
  • OCD : Similar to above: Drug therapies (e.g. SSRIs) and behavioural approaches (e.g. CBT) may alleviate OCD symptoms, enabling OCD sufferers to return to work, pay taxes, and avoid reliance on healthcare services.
  • Memory : Public money is required to fund police investigations. Psychological tools, such as the cognitive interview , have improved the accuracy of eyewitness testimonies, which equates to more efficient use of police time and resources.

Features of science

Theory construction and hypothesis testing.

Science works by making empirical observations of the world, formulating hypotheses /theories that explain these observations, and repeatedly testing these hypotheses /theories via experimentation.

  • Objectivity: E.g. A tape measure provides a more objective measurement of something compared to a researcher’s guess. Similarly, a set of scales is a more objective way of determining which of two objects is heavier than a researcher lifting each up and giving their opinion.
  • Replicability: E.g. Burger (2009) replicated Milgram’s experiments with similar results.
  • Falsifiability: E.g. The hypothesis that “water boils at 100°C” could be falsified by an experiment where you heated water to 999°C and it didn’t boil. In contrast, “everything doubles in size every 10 seconds” could not be falsified by any experiment because whatever equipment you used to measure everything would also double in size.
  • Freud’s psychodynamic theories are often criticised for being unfalsifiable: There aren’t really any observations that could disprove them because every possible behaviour (e.g. crying or not crying) could be explained as the result of some unconscious thought process.

Paradigm shifts

Philosopher Thomas Kuhn argues that science is not as unbiased and objective as it seems. Instead, the majority of scientists just accept the existing scientific theories (i.e. the existing paradigm) as true and then find data that supports these theories while ignoring/rejecting data that refutes them.

Rarely, though, minority voices are able to successfully challenge the existing paradigm and replace it with a new one. When this happens it is a paradigm shift . An example of a paradigm shift in science is that from Newtonian gravity to Einstein’s theory of general relativity.

Data handling and analysis

Types of data, quantitative vs. qualitative.

Data from studies can be quantitative or qualitative :

  • Quantitative: Numerical
  • Qualitative: Non-numerical

For example, some quantitative data in the Milgram experiment would be how many subjects delivered a lethal shock. In contrast, some qualitative data would be asking the subjects afterwards how they felt about delivering the lethal shock.

Strengths of quantitative data / weaknesses of qualitative data:

  • Can be compared mathematically and scientifically: Quantitative data enables researchers to mathematically and objectively analyse data. For example, mood ratings of 7 and 6 can be compared objectively, whereas qualitative assessments such as ‘sad’ and ‘unhappy’ are hard to compare scientifically.

Weaknesses of quantitative data / strengths of qualitative data:

  • Less detailed: In reducing data to numbers and narrow definitions, quantitative data may miss important details and context.

Content analysis

Although the detail of qualitative data may be valuable, this level of detail can also make it hard to objectively or mathematically analyse. Content analysis is a way of analysing qualitative data. The process is as follows:

  • Gather qualitative data: E.g. A bunch of unstructured interviews on the topic of childhood
  • Identify coding categories: E.g. Discussion of traumatic events, happy memories, births, and deaths
  • Count how often each category occurs: E.g. Researchers listen to the unstructured interviews and count how often traumatic events are mentioned
  • Statistical analysis is carried out on this data
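The counting step can be sketched in Python (the interview excerpts and coding categories below are made up for illustration):

```python
from collections import Counter

# Made-up interview excerpts standing in for qualitative data
transcripts = [
    "moving house was traumatic but making new friends was happy",
    "a happy memory is the birth of my little sister",
    "the death of my grandfather was traumatic for everyone",
]

# Coding categories decided on in advance
categories = {"traumatic", "happy", "birth", "death"}

# Count how often each category appears across the transcripts
counts = Counter(word for text in transcripts
                 for word in text.split() if word in categories)
```

The resulting counts are quantitative data that can then be analysed statistically.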

Primary vs. secondary

Researchers can produce primary data or use secondary data to achieve the research aims of their study:

  • Primary data: Original data collected for the study
  • Secondary data: Data from another study previously conducted

Meta-analysis

A meta-analysis is a study of studies. It involves taking several smaller studies within a certain research area and using statistics to identify similarities and trends within those studies to create a larger study.

We have looked at some examples of meta-analyses elsewhere in the course such as Van Ijzendoorn’s meta-analysis of several strange situation studies and Grootheest et al’s meta-analysis of twin studies on OCD .

A good meta-analysis is often more reliable than a regular study because it is based on a larger data set, and any issues with one single study will be balanced out by the other studies.

Descriptive statistics

Measures of central tendency: mean, median, mode.

Mean , median , and mode are measures of central tendency . In other words, they are ways of reducing large data sets into averages .

The mean is calculated by adding all the numbers in a set together and dividing the total by the number of numbers.

  • Example set: 22, 78, 3, 33, 90
  • 22+78+3+33+90=226
  • 226/5=45.2
  • The mean is 45.2

Strengths:

  • Uses all data in the set.
  • Accurate: Provides a precise number based on all the data in a set.

Weaknesses:

  • Can be skewed by freak scores: E.g.: 1, 3, 2, 5, 9, 4, 913 <- the mean is 133.9, but the 913 could be a measurement error or something and thus the mean is not representative of the data set

The median is calculated by arranging all the numbers in a set from smallest to biggest and then finding the number in the middle. Note: If the total number of numbers is odd, you just pick the middle one. But if the total number of numbers is even, you take the mid-point between the two numbers in the middle.

  • Example set: 20, 66, 85, 45, 18, 13, 90, 28, 9
  • Arranged in order: 9, 13, 18, 20, 28 , 45, 66, 85, 90
  • The median is 28

Strengths:

  • Won’t be skewed by freak scores (unlike the mean).

Weaknesses:

  • May not represent the data set as a whole. E.g.: 1, 1, 3 , 9865, 67914 <- 3 is not really representative of the larger numbers in the set.
  • Less accurate/sensitive than the mean.

The mode is calculated by counting which is the most commonly occurring number in a set.

  • Example set: 7, 7, 20 , 16, 1, 20 , 25, 16, 20 , 9
  • There are two 7’s and two 16’s, but three 20’s
  • The mode is 20

Strengths:

  • Makes more sense for presenting the central tendency in data sets with whole numbers. For example, the average number of limbs for a human being will have a mean of something like 3.99, but a mode of 4.

Weaknesses:

  • Does not use all the data in a set.
  • A data set may have more than one mode.
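The three worked examples above can be checked with Python's built-in `statistics` module:

```python
from statistics import mean, median, mode

# The worked examples from the mean, median, and mode sections above
print(mean([22, 78, 3, 33, 90]))                    # 45.2
print(median([20, 66, 85, 45, 18, 13, 90, 28, 9]))  # 28
print(mode([7, 7, 20, 16, 1, 20, 25, 16, 20, 9]))   # 20
```

`median` sorts the data internally, and `mode` returns the first most-common value if a set has more than one mode.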

Measures of dispersion: Range and standard deviation

Range and standard deviation are measures of dispersion . In other words, they quantify how much scores in a data set vary .

The range is calculated by subtracting the smallest number in the data set from the largest number.

  • Example set: 59, 8, 7, 84, 9, 49, 14, 75, 88, 11
  • The largest number is 88
  • The smallest number is 7
  • 88-7=81
  • The range is 81

Strengths:

  • Easy and quick to calculate: You just subtract one number from another
  • Accounts for freak scores (highest and lowest)

Weaknesses:

  • Can be skewed by freak scores: The difference between the biggest and smallest numbers can be skewed by a single anomalous result or error, which may give an exaggerated impression of the data distribution compared to standard deviation . For example, these two data sets both have a range of 15 but very different distributions:
  • 4, 4, 5, 5, 5, 6, 6, 7, 19
  • 4, 16, 16, 17, 17, 17, 18, 19, 19

Standard deviation

The standard deviation (σ) is a measure of how much numbers in a data set deviate from the mean (average). It is calculated as follows:

  • Example data set: 59, 79, 43, 42, 81, 100, 38, 54, 92, 62
  • Calculate the mean: 65
  • Subtract the mean from each number: -6, 14, -22, -23, 16, 35, -27, -11, 27, -3
  • Square each of these differences: 36, 196, 484, 529, 256, 1225, 729, 121, 729, 9
  • Add up the squared differences: 36+196+484+529+256+1225+729+121+729+9=4314
  • Divide by the number of numbers in the set: 4314/10=431.4
  • Take the square root: √431.4=20.77
  • The standard deviation is 20.77

Note: This method gives the standard deviation of an entire population. There is a slightly different method for calculating the standard deviation of a sample: Instead of dividing by the number of numbers in the second-to-last step, you divide by the number of numbers minus 1 (in this case 4314/9=479.33). This gives a standard deviation of 21.89.

  • Is less skewed by freak scores: Standard deviation measures the average difference from the mean and so is less likely to be skewed by a single freak score (compared to the range).
  • Takes longer to calculate than the range.
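Both measures of dispersion can be verified in Python. The statistics module's pstdev gives the population standard deviation used in the worked example above, while stdev divides by n-1 for a sample:

```python
import statistics

# Range example set: largest value minus smallest value
range_set = [59, 8, 7, 84, 9, 49, 14, 75, 88, 11]
print(max(range_set) - min(range_set))  # 81

# Standard deviation example set
sd_set = [59, 79, 43, 42, 81, 100, 38, 54, 92, 62]
print(round(statistics.pstdev(sd_set), 2))  # 20.77 (whole population, divides by 10)
print(round(statistics.stdev(sd_set), 2))   # 21.89 (sample, divides by 9)
```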

Percentages

A percentage (%) describes how many out of every 100 something occurs. It is calculated as follows:

  • Example: 63 out of a total of 82 participants passed the test
  • 63/82=0.768
  • 0.768*100=76.8
  • 76.8% of participants passed the test

Percentage change

To calculate a percentage change, work out the difference between the original number and the after number, divide that difference by the original number, then multiply the result by 100:

  • Example: He got 80 marks on the test but after studying he got 88 marks on the test
  • 88-80=8
  • 8/80=0.1
  • 0.1*100=10
  • His test score increased by 10% after studying
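Both calculations are one-liners in Python, using the examples above:

```python
# Percentage: 63 out of a total of 82 participants passed the test
print(round(63 / 82 * 100, 1))  # 76.8

# Percentage change: from 80 marks before studying to 88 marks after
before, after = 80, 88
print(round((after - before) / before * 100, 1))  # 10.0
```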

Normal and skewed distributions

Normal distribution.

A data set that has a normal distribution will have the majority of scores on or near the mean average. A normal distribution is also symmetrical: There are an equal number of scores above the mean as below it. In a normal distribution, scores become rarer and rarer the more they deviate from the mean.

An example of a normal distribution is IQ scores. As you can see from the histogram below, there are as many IQ scores below the mean as there are above the mean:

[Image: bell-shaped curve showing the normal distribution of IQ scores]

When plotted on a histogram, data that follows a normal distribution will form a bell-shaped curve like the one above.

Skewed distribution

[Image: histograms showing positive skew and negative skew]

Skewed distributions are caused by outliers: Freak scores that throw off the mean. Skewed distributions can be positive or negative:

  • Positive skew: Mean > Median > Mode
  • Negative skew: Mean < Median < Mode
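A quick way to see this rule in action is to compare the three averages for a small made-up data set (a hypothetical example) containing one freak high score:

```python
import statistics

# One freak score (20) drags the mean above the median, and the median sits above the mode
skewed = [2, 3, 3, 4, 5, 6, 20]
mean = statistics.mean(skewed)      # ~6.14
median = statistics.median(skewed)  # 4
mode = statistics.mode(skewed)      # 3
print(mean > median > mode)  # True: Mean > Median > Mode, a positive skew
```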

Correlation

Correlation refers to how closely two (or more) things are related. For example, hot weather and ice cream sales may be positively correlated: When the temperature goes up, so do ice cream sales.

Correlations are measured mathematically using correlation coefficients (r). A correlation coefficient will be anywhere between +1 and -1:

  • r=+1 means two things are perfectly positively correlated: When one goes up, so does the other by the same amount
  • r=-1 means two things are perfectly negatively correlated: When one goes up, the other goes down by the same amount
  • r=0 means two things are not correlated at all: A change in one is totally independent of a change in the other
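A correlation coefficient can be computed by hand; below is a minimal sketch of Pearson's r (the temperature and ice-cream-sales figures are made up for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    spread_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    spread_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (spread_x * spread_y)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0: perfect positive
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))  # -1.0: perfect negative

# Hot weather vs ice cream sales: strongly, though not perfectly, positively correlated
print(round(pearson_r([20, 22, 25, 28, 30], [120, 135, 160, 190, 210]), 3))  # ~0.998
```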

The following scattergrams illustrate various correlation coefficients:

[Image: scattergrams illustrating various correlation coefficients]

Presentation of data

[Image: example table showing behavioural categories data for each student]

For example, the behavioural categories table above presents the raw data of each student in this made-up study. But in the results section, researchers might include another table that compares average anxiety rating scores for males and females.

Scattergrams

[Image: example scattergram]

For example, each dot on the correlation scattergram opposite could represent a student. The x-axis could represent the number of hours the student studied, and the y-axis could represent the student’s test score.

[Image: bar chart of Loftus and Palmer's eyewitness testimony results]

A bar chart is used to illustrate discrete data (separate categories). For example, the results of Loftus and Palmer’s study into the effects of different leading questions on memory could be presented using the bar chart above. There are no categories in-between ‘contacted’ and ‘hit’, so the bars have gaps between them (unlike a histogram ).

A histogram is a bit like a bar chart but is used to illustrate continuous or interval data (rather than discrete data or whole numbers).

[Image: example histogram]

Because the data on the x-axis is continuous, there are no gaps between the bars.

[Image: example line graph]

For example, the line graph above illustrates 3 different people’s progression in a strength training program over time.

[Image: example pie chart]

For example, the frequency with which different attachment styles occurred in Ainsworth’s strange situation could be represented by the pie chart opposite.

Inferential testing

Probability and significance.

The point of inferential testing is to see whether a study’s results are statistically significant, i.e. whether any observed effects are a result of whatever is being studied rather than just random chance.

For example, let’s say you are studying whether flipping a coin outdoors increases the likelihood of getting heads. You flip the coin 100 times and get 52 heads and 48 tails. Assuming a baseline expectation of 50:50, you might take these results to mean that flipping the coin outdoors does increase the likelihood of getting heads. However, from 100 coin flips, a ratio of 52:48 between heads and tails is not very significant and could have occurred due to luck. So, the probability that this difference in heads and tails is because you flipped the coin outside (rather than just luck) is low.

Probability is denoted by the symbol p . The lower the p value, the more statistically significant your results are. You can never get a p value of 0, though, so researchers will set a threshold at which point the results are considered statistically significant enough to reject the null hypothesis . In psychology, this threshold is usually <0.05, which means there is a less than 5% chance that results like these would occur just by luck.
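The coin-flip example can be made exact. Assuming a fair coin (the null hypothesis), the probability of a heads/tails split at least as uneven as 52:48 follows the binomial distribution; a sketch using only the standard library:

```python
from math import comb

def two_tailed_p(heads, flips):
    """Probability of a heads/tails split at least this uneven from a fair coin."""
    extreme = max(heads, flips - heads)  # e.g. 52 for a 52:48 split
    one_tail = sum(comb(flips, k) for k in range(extreme, flips + 1)) / 2 ** flips
    return min(1.0, 2 * one_tail)  # double for a two-tailed test

p = two_tailed_p(52, 100)
print(round(p, 2))  # well above the usual 0.05 threshold
print(p < 0.05)     # False: a 52:48 split is easily explained by luck
```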

Type 1 and type 2 errors

When interpreting statistical significance, there are two types of errors:

  • Type 1 error (false positive): Rejecting the null hypothesis when it is actually true. E.g. the p threshold is <0.05, but the researchers’ results are among the 5% of fluke outcomes that look significant but are just due to luck
  • Type 2 error (false negative): Accepting the null hypothesis when it is actually false. E.g. the p threshold is set very low (e.g. <0.01), and a real effect falls just short of it (e.g. p=0.02)

Increasing the sample size reduces the likelihood of type 2 errors, because a larger sample gives a test more power to detect real effects. (The type 1 error rate is fixed by the p threshold.)


Types of statistical test

Note: The inferential tests below are needed for A level only. If you are taking the AS exam , you only need to know the sign test .

There are several different types of inferential test in addition to the sign test. Which inferential test is best for a study will depend on the following three criteria:

  • Whether you are looking for a difference or a correlation
  • What level of measurement the data is:
    • Nominal (discrete categories): E.g. at the competition there were 8 runners, 12 swimmers, and 6 long jumpers (it’s not like there are in-between measurements between ‘swimmer’ and ‘runner’)
    • Ordinal (ranked data): E.g. first, second, and third place in a race, or ranking your mood on a scale of 1-10
    • Interval (fixed measurement units): E.g. weights in kg, heights in cm, times in seconds
  • Whether the experimental design is related (i.e. repeated measures ) or unrelated (i.e. independent groups )

The following table shows which inferential test is appropriate according to these criteria:

Note: You won’t have to work out all these tests from scratch, but you may need to:

  • Say which of the statistical tests is appropriate (i.e. based on whether it’s a difference or correlation; whether the data is nominal, ordinal, or interval; and whether the data is related or unrelated).
  • Identify the critical value from a critical values table and use this to say whether a result (which will be given to you in the exam) is statistically significant.

The sign test

The sign test is a way to calculate the statistical significance of differences between related pairs (e.g. before and after in a repeated measures experiment ) of nominal data. If the observed value (s) is equal to or less than the critical value (cv), the results are statistically significant.

Example: Let’s say we ran an experiment on 10 participants to see whether they prefer movie A or movie B .

  • Work out n: n = 9 (because even though there are 10 participants, one participant had no preference either way, so we exclude them from our calculation)
  • Decide whether the hypothesis is one-tailed or two-tailed: In this case our experimental hypothesis is two-tailed: Participants may prefer movie A or movie B (the null hypothesis is that participants like both movies equally)
  • Choose a significance level: In this case, let’s say it’s 0.1
  • Look up the critical value (cv) in a critical values table, using n, the significance level, and the fact that the experimental hypothesis is two-tailed: So, in this example, our critical value (cv) is 1
  • Count the less frequent sign to get the observed value (s): In this example, there are 2 As, so our observed value (s) is 2
  • Compare the observed value to the critical value: In this example, the observed value (2) is greater than the critical value (1) and so the results are not statistically significant. This means we must accept the null hypothesis and reject the experimental hypothesis .
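A critical values table is really just a precomputed binomial calculation, so the same conclusion can be reached directly; a sketch of the underlying maths:

```python
from math import comb

def sign_test_p(s, n):
    """Two-tailed probability of getting s or fewer of the rarer sign out of n by chance."""
    one_tail = sum(comb(n, k) for k in range(s + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

# 2 participants preferred movie A, 7 preferred movie B (n = 9 after excluding the tie)
p = sign_test_p(2, 9)
print(round(p, 3))  # 0.18
print(p <= 0.1)     # False: not significant at the 0.1 level, matching the table method
```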


  • Open access
  • Published: 16 May 2024

Procrastination, depression and anxiety symptoms in university students: a three-wave longitudinal study on the mediating role of perceived stress

  • Anna Jochmann,
  • Burkhard Gusy,
  • Tino Lesener &
  • Christine Wolter

BMC Psychology volume 12, Article number: 276 (2024)


Background

It is generally assumed that procrastination leads to negative consequences. However, evidence for negative consequences of procrastination is still limited and it is also unclear by which mechanisms they are mediated. Therefore, the aim of our study was to examine the harmful consequences of procrastination on students’ stress and mental health. We selected the procrastination-health model as our theoretical foundation and tried to evaluate the model’s assumption that trait procrastination leads to (chronic) disease via (chronic) stress in a temporal perspective. We chose depression and anxiety symptoms as indicators for (chronic) disease and hypothesized that procrastination leads to perceived stress over time, that perceived stress leads to depression and anxiety symptoms over time, and that procrastination leads to depression and anxiety symptoms over time, mediated by perceived stress.

Methods

To examine these relationships properly, we collected longitudinal data from 392 university students at three occasions over a one-year period and analyzed the data using autoregressive time-lagged panel models.

Results

Procrastination did lead to depression and anxiety symptoms over time. However, perceived stress was not a mediator of this effect. Procrastination did not lead to perceived stress over time, nor did perceived stress lead to depression and anxiety symptoms over time.

Conclusions

We could not confirm that trait procrastination leads to (chronic) disease via (chronic) stress, as assumed in the procrastination-health model. Nonetheless, our study demonstrated that procrastination can have a detrimental effect on mental health. Further health outcomes and possible mediators should be explored in future studies.


Introduction

“Due tomorrow? Do tomorrow.”, might be said by someone who has a tendency to postpone tasks until the last minute. But can we enjoy today knowing about the unfinished task and tomorrow’s deadline? Or do we feel guilty for postponing a task yet again? Do we get stressed out because we have little time left to complete it? Almost everyone has procrastinated at some point when it came to completing unpleasant tasks, such as mowing the lawn, doing the taxes, or preparing for exams. Some tend to procrastinate more frequently and in all areas of life, while others are less inclined to do so. Procrastination is common across a wide range of nationalities, as well as socioeconomic and educational backgrounds [ 1 ]. Over the last fifteen years, there has been a massive increase in research on procrastination [ 2 ]. Oftentimes, research focuses on better understanding the phenomenon of procrastination and finding out why someone procrastinates in order to be able to intervene. Similarly, the internet is filled with self-help guides that promise a way to overcome procrastination. But why do people seek help for their procrastination? Until now, not much research has been conducted on the negative consequences procrastination could have on health and well-being. Therefore, in the following article we examine the effect of procrastination on mental health over time and stress as a possible facilitator of this relationship on the basis of the procrastination-health model by Sirois et al. [ 3 ].

Procrastination and its negative consequences

Procrastination can be defined as the tendency to voluntarily and irrationally delay intended activities despite expecting negative consequences as a result of the delay [ 4 , 5 ]. It has been observed in a variety of groups across the lifespan, such as students, teachers, and workers [ 1 ]. For example, some students tend to regularly delay preparing for exams and writing essays until the last minute, even if this results in time pressure or lower grades. Procrastination must be distinguished from strategic delay [ 4 , 6 ]. Delaying a task is considered strategic when other tasks are more important or when more resources are needed before the task can be completed. While strategic delay is viewed as functional and adaptive, procrastination is classified as dysfunctional. Procrastination is predominantly viewed as the result of a self-regulatory failure [ 7 ]. It can be understood as a trait, that is, as a cross-situational and time-stable behavioral disposition [ 8 ]. Thus, it is assumed that procrastinators chronically delay tasks that they experience as unpleasant or difficult [ 9 ]. Approximately 20 to 30% of adults have been found to procrastinate chronically [ 10 , 11 , 12 ]. Prevalence estimates for students are similar [ 13 ]. It is believed that students do not procrastinate more often than other groups. However, it is easy to examine procrastination in students because working on study tasks requires a high degree of self-organization and time management [ 14 ].

It is generally assumed that procrastination leads to negative consequences [ 4 ]. Negative consequences are even part of the definition of procrastination. Research indicates that procrastination is linked to lower academic performance [ 15 ], health impairment (e.g., stress [ 16 ], physical symptoms [ 17 ], depression and anxiety symptoms [ 18 ]), and poor health-related behavior (e.g., heavier alcohol consumption [ 19 ]). However, most studies targeting consequences of procrastination are cross-sectional [ 4 ]. For that reason, it often remains unclear whether an examined outcome is a consequence or an antecedent of procrastination, or whether a reciprocal relationship between procrastination and the examined outcome can be assumed. Additionally, regarding negative consequences of procrastination on health, it is still largely unknown by which mechanisms they are mediated. Uncovering such mediators would be helpful in developing interventions that can prevent negative health consequences of procrastination.

The procrastination-health model

The first and only model that exclusively focuses on the effect of procrastination on health and the mediators of this effect is the procrastination-health model [ 3 , 9 , 17 ]. Sirois [ 9 ] postulates three pathways: An immediate effect of trait procrastination on (chronic) disease and two mediated pathways (see Fig.  1 ).

Fig. 1: Adopted from the procrastination-health model by Sirois [ 9 ]

The immediate effect is not further explained. Research suggests that procrastination creates negative feelings, such as shame, guilt, regret, and anger [ 20 , 21 , 22 ]. The described feelings could have a detrimental effect on mental health [ 23 , 24 , 25 ].

The first mediated pathway leads from trait procrastination to (chronic) disease via (chronic) stress. Sirois [ 9 ] assumes that procrastination creates stress because procrastinators are constantly aware of the fact that they still have many tasks to complete. Stress activates the hypothalamic-pituitary-adrenocortical (HPA) system, increases autonomic nervous system arousal, and weakens the immune system, which in turn contributes to the development of diseases. Sirois [ 9 ] distinguishes between short-term and long-term effects of procrastination on health mediated by stress. She believes that, in the short term, single incidents of procrastination cause acute stress, which leads to acute health problems, such as infections or headaches. In the long term, chronic procrastination, as you would expect with trait procrastination, causes chronic stress, which leads to chronic diseases over time. There is some evidence in support of the stress-related pathway, particularly regarding short-term effects [ 3 , 17 , 26 , 27 , 28 ]. However, as we mentioned above, most of these studies are cross-sectional. Therefore, the causal direction of these effects remains unclear. To our knowledge, long-term effects of trait procrastination on (chronic) disease mediated by (chronic) stress have not yet been investigated.

The second mediated pathway leads from trait procrastination to (chronic) disease via poor health-related behavior. According to Sirois [ 9 ], procrastinators form lower intentions to carry out health-promoting behavior or to refrain from health-damaging behavior because they have a low self-efficacy of being able to care for their own health. In addition, they lack the far-sighted view that the effects of health-related behavior only become apparent in the long term. For the same reason, Sirois [ 9 ] believes that there are no short-term, but only long-term effects of procrastination on health mediated by poor health-related behavior. For example, an unhealthy diet leads to diabetes over time. The findings of studies examining the behavioral pathway are inconclusive [ 3 , 17 , 26 , 28 ]. Furthermore, since most of these studies are cross-sectional, they are not suitable for uncovering long-term effects of trait procrastination on (chronic) disease mediated by poor health-related behavior.

In summary, previous research on the two mediated pathways of the procrastination-health model mainly found support for the role of (chronic) stress in the relationship between trait procrastination and (chronic) disease. However, only short-term effects have been investigated so far. Moreover, longitudinal studies are needed to be able to assess the causal direction of the relationship between trait procrastination, (chronic) stress, and (chronic) disease. Consequently, our study is the first to examine long-term effects of trait procrastination on (chronic) disease mediated by (chronic) stress, using a longitudinal design. (Chronic) disease could be measured by a variety of different indicators (e.g., physical symptoms, diabetes, or coronary heart disease). We choose depression and anxiety symptoms as indicators for (chronic) disease because they signal mental health complaints before they manifest as (chronic) diseases. Additionally, depression and anxiety symptoms are two of the most common mental health complaints among students [ 29 , 30 ] and procrastination has been shown to be a significant predictor of depression and anxiety symptoms [ 18 , 31 , 32 , 33 , 34 ]. Until now, the stress-related pathway of the procrastination-health model with depression and anxiety symptoms as the health outcome has only been analyzed in one cross-sectional study that confirmed the predictions of the model [ 35 ].

The aim of our study is to evaluate some of the key assumptions of the procrastination-health model, particularly the relationships between trait procrastination, (chronic) stress, and (chronic) disease over time, surveyed in the following analysis using depression and anxiety symptoms.

In line with the key assumptions of the procrastination-health model, we postulate (see Fig.  2 ):

Procrastination leads to perceived stress over time.

Perceived stress leads to depression and anxiety symptoms over time.

Procrastination leads to depression and anxiety symptoms over time, mediated by perceived stress.

Fig. 2: The section of the procrastination-health model we examined

Materials and methods

Our study was part of a health monitoring at a large German university Footnote 1 . Ethical approval for our study was granted by the Ethics Committee of the university’s Department of Education and Psychology. We collected the initial data in 2019. Two occasions followed, each at an interval of six months. In January 2019, we sent out 33,267 invitations to student e-mail addresses. Before beginning the survey, students provided their written informed consent to participate in our study. 3,420 students took part at the first occasion (T1; 10% response rate). Of these, 862 participated at the second (T2) and 392 at the third occasion (T3). In order to test whether dropout was selective, we compared sociodemographic and study specific characteristics (age, gender, academic semester, number of assessments/exams) as well as behavior and health-related variables (procrastination, perceived stress, depression and anxiety symptoms) between the participants of the first wave ( n  = 3,420) and those who participated three times ( n  = 392). Results from independent-samples t-tests and chi-square analysis showed no significant differences regarding sociodemographic and study specific characteristics (see Additional file 1: Table S1 and S2 ). Regarding behavior and health-related variables, independent-samples t-tests revealed a significant difference in procrastination between the two groups ( t (3,409) = 2.08, p  < .05). The mean score of procrastination was lower in the group that participated in all three waves.

The mean age of the longitudinal respondents was 24.1 years ( SD  = 5.5 years), the youngest participants were 17 years old, the oldest one was 59 years old. The majority of participants was female (74.0%), 7 participants identified neither as male nor as female (1.8%). The respondents were on average enrolled in the third year of studying ( M  = 3.9; SD  = 2.3). On average, the students worked about 31.2 h ( SD  = 14.1) per week for their studies, and an additional 8.5 h ( SD  = 8.5) for their (part-time) jobs. The average income was €851 ( SD  = 406), and 4.9% of the students had at least one child. The students were mostly enrolled in philosophy and humanities (16.5%), education and psychology (15.8%), biology, chemistry, and pharmacy (12.5%), political and social sciences (10.6%), veterinary medicine (8.9%), and mathematics and computer science (7.7%).

We only used established and well evaluated instruments for our analyses.

Procrastination

We adopted the short form of the Procrastination Questionnaire for Students (PFS-4) [ 36 ] to measure procrastination. The PFS-4 assesses procrastination at university as a largely stable behavioral disposition across situations, that is, as a trait. The questionnaire consists of four items (e.g., I put off starting tasks until the last moment.). Each item was rated on a 5-point scale ((almost) never = 1 to (almost) always = 5) for the last two weeks. All items were averaged, with higher scores indicating a greater tendency to procrastinate. The PFS-4 has been proven to be reliable and valid, showing very high correlations with other established trait procrastination scales, for example, with the German short form of the General Procrastination Scale [ 37 , 38 ]. We also proved the scale to be one-dimensional in a factor analysis, with a Cronbach’s alpha of 0.90.

Perceived stress

The Heidelberger Stress Index (HEI-STRESS) [ 39 ] is a three-item measure of current perceived stress due to studying as well as in life in general. For the first item, respondents enter a number between 0 (not stressed at all) and 100 (completely stressed) to indicate how stressed their studies have made them feel over the last four weeks. For the second and third item, respondents rate on a 5-point scale how often they feel “stressed and tense” and as how stressful they would describe their life at the moment. We transformed the second and third item to match the range of the first item before we averaged all items into a single score with higher values indicating greater perceived stress. We proved the scale to be one-dimensional and Cronbach’s alpha for our study was 0.86.

Depression and anxiety symptoms

We used the Patient Health Questionnaire-4 (PHQ-4) [ 40 ], a short form of the Patient Health Questionnaire [ 41 ] with four items, to measure depression and anxiety symptoms. The PHQ-4 contains two items from the Patient Health Questionnaire-2 (PHQ-2) [ 42 ] and the Generalized Anxiety Disorder Scale-2 (GAD-2) [ 43 ], respectively. It is a well-established screening scale designed to assess the core criteria of major depressive disorder (PHQ-2) and generalized anxiety disorder (GAD-2) according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). However, it was shown that the GAD-2 is also appropriate for screening other anxiety disorders. According to Kroenke et al. [ 40 ], the PHQ-4 can be used to assess a person’s symptom burden and impairment. We asked the participants to rate how often they have been bothered over the last two weeks by problems, such as “Little interest or pleasure in doing things”. Response options were 0 = not at all, 1 = several days, 2 = more than half the days, and 3 = nearly every day. Calculated as the sum of the four items, the total scores range from 0 to 12 with higher scores indicating more frequent depression and anxiety symptoms. The total scores can be categorized as none-to-minimal (0–2), mild (3–5), moderate (6–8), and severe (9–12) depression and anxiety symptoms. The PHQ-4 was shown to be reliable and valid [ 40 , 44 , 45 ]. We also proved the scale to be one-dimensional in a factor analysis, with a Cronbach’s alpha of 0.86.

Data analysis

To test our hypotheses, we performed structural equation modelling (SEM) using R (Version 4.1.1) with the package lavaan. All items were standardized ( M  = 0, SD  = 1). Due to the non-normality of some study variables and a sufficiently large sample size of N near to 400 [ 46 ], we used robust maximum likelihood estimation (MLR) for all model estimations. As recommended by Hu and Bentler [ 47 ], we assessed the models’ goodness of fit by chi-square test statistic, root mean square error of approximation (RMSEA), standardized root mean square residual (SRMR), Tucker-Lewis index (TLI), and comparative fit index (CFI). A non-significant chi-square indicates good model fit. Since chi-square is sensitive to sample size, we also evaluated fit indices less sensitive to the number of observations. RMSEA and SRMR values of 0.05 or lower as well as TLI and CFI values of 0.97 or higher indicate good model fit. RMSEA values of 0.08 or lower, SRMR values of 0.10 or lower, as well as TLI and CFI values of 0.95 or higher indicate acceptable model fit [ 48 , 49 ]. First, we conducted confirmatory factor analysis for the first occasion, defining three factors that correspond to the measures of procrastination, perceived stress, and depression and anxiety symptoms. Next, we tested for measurements invariance over time and specified the measurement model, before testing our hypotheses.

Measurement invariance over time

To test for measurement invariance over time, we defined one latent variable for each of the three occasions, corresponding to the measures of procrastination, perceived stress, and depression and anxiety symptoms, respectively. As recommended by Geiser and colleagues [ 50 ], the links between indicators and factors (i.e., factor loadings and intercepts) should be equal over measurement occasions; therefore, we added indicator specific factors. A first and least stringent step of testing measurement invariance is configural invariance (M CI ). It was examined whether the included constructs (procrastination, perceived stress, depression and anxiety symptoms) have the same pattern of free and fixed loadings over time. This means that the assignment of the indicators to the three latent factors over time is supported by the underlying data. If configural invariance was supported, restrictions for the next step of testing measurement invariance (metric or weak invariance; M MI ) were added. This means that each item contributes to the latent construct to a similar degree over time. Metric invariance was tested by constraining the factor loadings of the constructs over time. The next step of testing measurement invariance (scalar or strong invariance; M SI ) consisted of checking whether mean differences in the latent construct capture all mean differences in the shared variance of the items. Scalar invariance was tested by constraining the item intercepts over time. The constraints applied in the metric invariance model were retained [ 51 ]. For the last step of testing measurement invariance (residual or strict invariance; M RI ), the residual variables were also set equal over time. If residual invariance is supported, differences in the observed variables can exclusively be attributed to differences in the variances of the latent variables.

We used the Satorra-Bentler chi-square difference test to evaluate the superiority of a more stringent model [ 52 ]. We assumed the model with the largest number of invariance restrictions – which still has an acceptable fit and no substantial deterioration of the chi-square value – to be the final model [ 53 ]. Following previous recommendations, we considered a decrease in CFI of 0.01 and an increase in RMSEA of 0.015 as unacceptable to establish measurement invariance [ 54 ]. If a more stringent model had a significant worse chi-square value, but the model fit was still acceptable and the deterioration in model fit fell within the change criteria recommended for CFI and RMSEA values, we still considered the more stringent model to be superior.

Hypotheses testing

As recommended by Dormann et al. [ 55 ], we applied autoregressive time-lagged panel models to test our hypotheses. In the first step, we specified a model (M 0 ) that only included the stabilities of the three variables (procrastination, perceived stress, depression and anxiety symptoms) over time. In the next step (M 1 ), we added the time-lagged effects from procrastination (T1) to perceived stress (T2) and from procrastination (T2) to perceived stress (T3) as well as from perceived stress (T1) to depression and anxiety symptoms (T2) and from perceived stress (T2) to depression and anxiety symptoms (T3). Additionally, we included a direct path from procrastination (T1) to depression and anxiety symptoms (T3). If this path becomes significant, we can assume a partial mediation [ 55 ]. Otherwise, we can assume a full mediation. We compared these nested models using the Satorra-Bentler chi-square difference test and the Akaike information criterion (AIC). The chi-square difference value should either be non-significant, indicating that the proposed model including our hypotheses (M 1 ) does not have a significant worse model fit than the model including only stabilities (M 0 ), or, if significant, it should be in the direction that M 1 fits the data better than M 0 . Regarding the AIC, M 1 should have a lower value than M 0 .

Table  1 displays the means, standard deviations, internal consistencies (Cronbach’s alpha), and stabilities (correlations) of all study variables. The alpha values of procrastination, perceived stress, and depression and anxiety symptoms are classified as good (> 0.80) [ 56 ]. The correlation matrix of the manifest variables used for the analyses can be found in the Additional file 1: Table  S3 .

We observed the highest test-retest reliabilities for procrastination ( r  ≥ .74). The test-retest reliabilities for depression and anxiety symptoms ( r  ≥ .64) and for perceived stress ( r  ≥ .54) were somewhat lower (see Table  1 ). The pattern of correlations shows a medium to large positive relationship between procrastination and depression and anxiety symptoms [ 57 , 58 ]. The association between procrastination and perceived stress was small, whereas the association between perceived stress and depression and anxiety symptoms was very large (see Table  1 ).

Confirmatory factor analysis showed an acceptable to good fit (χ²(41) = 118.618, p  < .001; SRMR = 0.042; RMSEA = 0.071; TLI = 0.95; CFI = 0.97). When testing for measurement invariance over time for each construct, the residual invariance models with indicator-specific factors provided good fit to the data (M RI ; see Table  2 ), suggesting that differences in the observed variables can be attributed exclusively to differences in the latent variables. Prior to model testing, we specified and tested the measurement model of the latent constructs based on the items of procrastination, perceived stress, and depression and anxiety symptoms. The measurement model fitted the data well (M M ; see Table  3 ). All items loaded solidly on their respective factors (0.791 ≤ β ≤ 0.987; p  < .001).
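For readers who want to relate the reported indices to the chi-square statistics: in the unscaled case, RMSEA and CFI follow directly from the model and baseline chi-squares. The sketch below uses the textbook formulas; the paper's values come from robust (scaled) statistics, and some software divides by n rather than n − 1, so results will not match exactly:

```python
def rmsea(chi2, df, n):
    """Root mean square error of approximation (per-df population misfit)."""
    return max(0.0, (chi2 / df - 1) / (n - 1)) ** 0.5

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: non-centrality of the model relative to the
    baseline (null) model; clipped into [0, 1]."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m, 0.0)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0
```

A model whose chi-square equals its degrees of freedom yields an RMSEA of exactly zero, which is why RMSEA is read as badness of fit per degree of freedom.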

To test our hypotheses, we analyzed the two models described in the methods section.

The fit of the stability model (M 0 ) was acceptable (see Table  3 ). Procrastination was stable over time, with stabilities above 0.82. The stabilities of perceived stress as well as depression and anxiety symptoms were somewhat lower, ranging from 0.559 (T1 -> T2) to 0.696 (T2 -> T3) for perceived stress and from 0.713 (T2 -> T3) to 0.770 (T1 -> T2) for depression and anxiety symptoms, respectively.

The autoregressive mediation model (M 1 ) fitted the data significantly better than M 0 . The direct path from procrastination (T1) to depression and anxiety symptoms (T3) was significant (β = 0.16; p  < .001); however, none of the mediated paths (from procrastination (T1) to perceived stress (T2) and from perceived stress (T2) to depression and anxiety symptoms (T3)) proved to be substantial. The time-lagged paths from perceived stress (T1) to depression and anxiety symptoms (T2) and from procrastination (T2) to perceived stress (T3) were not substantial either (see Fig.  3 ).

To examine whether the hypothesized effects would occur over a one-year period rather than a six-month period, we specified an additional model with paths from procrastination (T1) to perceived stress (T3) and from perceived stress (T1) to depression and anxiety symptoms (T3), also including the stabilities of the three constructs as in the stability model M 0 . The model showed an acceptable fit (χ²(486) = 831.281, p  < .001; RMSEA = 0.048; SRMR = 0.091; TLI = 0.95; CFI = 0.95), but neither of the two paths was significant.

Therefore, our hypotheses that procrastination leads to perceived stress over time (H1) and that perceived stress leads to depression and anxiety symptoms over time (H2) must be rejected. We could only partially confirm our third hypothesis, that procrastination leads to depression and anxiety symptoms over time, mediated by perceived stress (H3): procrastination did lead to depression and anxiety symptoms over time, but this effect was not mediated by perceived stress.

Figure 3. Results of the estimated model including all hypotheses (M 1 ). Note: Non-significant paths are dotted. T1 = time 1; T2 = time 2; T3 = time 3. *** p  < .001

To sum up, we set out to examine the harmful consequences of procrastination on students’ stress and mental health. We selected the procrastination-health model by Sirois [ 9 ] as a theoretical foundation and evaluated some of its key assumptions from a temporal perspective. The author assumes that trait procrastination leads to (chronic) disease via (chronic) stress. We chose depression and anxiety symptoms as indicators of (chronic) disease and postulated, in line with the key assumptions of the procrastination-health model, that procrastination leads to perceived stress over time (H1), that perceived stress leads to depression and anxiety symptoms over time (H2), and that procrastination leads to depression and anxiety symptoms over time, mediated by perceived stress (H3). To examine these relationships properly, we collected longitudinal data from students on three occasions over a one-year period and analyzed the data using autoregressive time-lagged panel models. Our first and second hypotheses had to be rejected: procrastination did not lead to perceived stress over time, and perceived stress did not lead to depression and anxiety symptoms over time. However, procrastination did lead to depression and anxiety symptoms over time – which is in line with our third hypothesis – but perceived stress was not a mediator of this effect. Therefore, we could only partially confirm our third hypothesis.

Our results contradict previous studies on the stress-related pathway of the procrastination-health model, which consistently found support for the role of (chronic) stress in the relationship between trait procrastination and (chronic) disease. Since most of these studies were cross-sectional, though, the causal direction of these effects remained uncertain. There are two longitudinal studies that confirm the stress-related pathway of the procrastination-health model [ 27 , 28 ], but both studies examined short-term effects (≤ 3 months), whereas we focused on more long-term effects. Therefore, the divergent findings may indicate that there are short-term, but no long-term effects of trait procrastination on (chronic) disease mediated by (chronic) stress.

Our results especially raise the question whether trait procrastination leads to (chronic) stress in the long term. Looking at previous longitudinal studies on the effect of procrastination on stress, the following stands out: At shorter study periods of two weeks [ 27 ] and four weeks [ 28 ], the effect of procrastination on stress appears to be present. At longer study periods of seven weeks [ 59 ], three months [ 28 ], six months, and twelve months, as in our study, the effect of procrastination on stress does not appear to be present. There is one longitudinal study in which procrastination was a significant predictor of stress symptoms nine months later [ 34 ]. The results of this study should be interpreted with caution, though, because the outbreak of the COVID-19 pandemic fell within the study period, which could have contributed to increased stress symptoms [ 60 ]. Unfortunately, Johansson et al. [ 34 ] did not report whether average stress symptoms increased during their study. In one of the two studies conducted by Fincham and May [ 59 ], the COVID-19 pandemic outbreak also fell within their seven-week study period. However, they reported that in their study, average stress symptoms did not increase from baseline to follow-up. Taken together, the findings suggest that procrastination can cause acute stress in the short term, for example during times when many tasks need to be completed, such as at the end of a semester, but that procrastination does not lead to chronic stress over time. It seems possible that students are able to recover during the semester from the stress their procrastination caused at the end of the previous semester. Because of their procrastination, they may also have more time to engage in relaxing activities, which could further mitigate the effect of procrastination on stress. 
Our conclusions are supported by an early and well-known longitudinal study by Tice and Baumeister [ 61 ], which compared procrastinating and non-procrastinating students with regard to their health. They found that procrastinators experienced less stress than their non-procrastinating peers at the beginning of the semester, but more at the end of the semester. Additionally, our conclusions are in line with an interview study in which university students were asked about the consequences of their procrastination [ 62 ]. The students reported that, due to their procrastination, they experience high levels of stress during periods with heavy workloads (e.g., before deadlines or exams). However, the stress does not last, instead, it is relieved immediately after these periods.

Even though research indicates, in line with the assumptions of the procrastination-health model, that stress is a risk factor for physical and mental disorders [ 63 , 64 , 65 , 66 ], perceived stress did not have a significant effect on depression and anxiety symptoms in our study. The relationship between stress and mental health is complex, as people respond to stress in many different ways. While some develop stress-related mental disorders, others experience mild psychological symptoms or no symptoms at all [ 67 ]. This can be explained with the help of vulnerability-stress models, according to which mental illnesses emerge from an interaction of vulnerabilities (e.g., genetic factors, difficult family backgrounds, or weak coping abilities) and stress (e.g., minor or major life events or daily hassles) [ 68 , 69 ]. The stress perceived by the students in our sample may not be sufficient on its own, without the presence of other risk factors, to cause depression and anxiety symptoms. However, since we did not assess individual vulnerability and stress factors in our study, these considerations remain speculative.

In our study, procrastination led to depression and anxiety symptoms over time, which is consistent with the procrastination-health model as well as previous cross-sectional and longitudinal evidence [ 18 , 21 , 31 , 32 , 33 , 34 ]. However, it is still unclear by which mechanisms this effect is mediated, as perceived stress did not prove to be a substantial mediator in our study. One possible mechanism would be that procrastination impairs affective well-being [ 70 ] and creates negative feelings, such as shame, guilt, regret, and anger [ 20 , 21 , 22 , 62 , 71 ], which in turn could lead to depression and anxiety symptoms [ 23 , 24 , 25 ]. Other potential mediators of the relationship between procrastination and depression and anxiety symptoms emerge from the behavioral pathway of the procrastination-health model, suggesting that poor health-related behaviors mediate the effect of trait procrastination on (chronic) disease. Although evidence for this is still scarce, the results of one cross-sectional study, for example, indicate that poor sleep quality might mediate the effect of procrastination on depression and anxiety symptoms [ 35 ].

In summary, we found that procrastination leads to depression and anxiety symptoms over time and that perceived stress is not a mediator of this effect. We could not show that procrastination leads to perceived stress over time, nor that perceived stress leads to depression and anxiety symptoms over time. For the most part, the relationships between procrastination, perceived stress, and depression and anxiety symptoms did not match the relationships between trait procrastination, (chronic) stress, and (chronic) disease as assumed in the procrastination-health model. Possible explanations are that procrastination might only lead to perceived stress in the short term, for example, during preparations for end-of-semester exams, and that perceived stress may not be sufficient on its own, without the presence of other risk factors, to cause depression and anxiety symptoms. In conclusion, we could not confirm long-term effects of trait procrastination on (chronic) disease mediated by (chronic) stress, as assumed for the stress-related pathway of the procrastination-health model.

Limitations and suggestions for future research

In our study, we aimed to draw causal conclusions about the harmful consequences of procrastination on students’ stress and mental health. However, since procrastination is a trait that cannot be manipulated experimentally, we conducted an observational rather than an experimental study, which makes causal inferences more difficult. Nonetheless, a major strength of our study is its longitudinal design with three waves. This made it possible to draw conclusions about the causal direction of the effects, which hardly any previous study on the health consequences of procrastination has been able to do [ 4 , 28 , 55 ]. Therefore, we strongly recommend using a similar longitudinal design in future studies on the procrastination-health model or on the health consequences of procrastination in general.

We chose a time lag of six months between each of the three measurement occasions to examine long-term effects of procrastination on depression and anxiety symptoms mediated by perceived stress. However, more than six months may be necessary for the hypothesized effects to occur [ 72 ]. The fact that the temporal stabilities of the examined constructs were moderate or high (0.559 ≤ β ≤ 0.854) [ 73 , 74 ] also suggests that the time lags may have been too short. The larger the time lag, the lower the temporal stabilities, as shown for depression and anxiety symptoms, for example [ 75 ]. High temporal stabilities make it more difficult to detect an effect that actually exists [ 76 ]. Nonetheless, Dormann and Griffin [ 77 ] recommend using shorter time lags of less than one year, even with high stabilities, because of other influential factors, such as unmeasured third variables. Therefore, our time lags of six months seem appropriate.

It should be discussed, though, whether it is possible to detect long-term effects of the stress-related pathway of the procrastination-health model within a total study period of one year. Sirois [ 9 ] distinguishes between short-term and long-term effects of procrastination on health mediated by stress, but does not address how long it might take for long-term effects to occur or when effects can be considered long-term instead of short-term. The fact that an effect of procrastination on stress is evident at shorter study periods of four weeks or less but in most cases not at longer study periods of seven weeks or more, as we mentioned earlier, could indicate that short-term effects occur within the time frame of one to three months, considering the entire stress-related pathway. Hence, it seems appropriate to assume that we have examined rather long-term effects, given our study period of six and twelve months. Nevertheless, it would be beneficial to use varying study periods in future studies, in order to be able to determine when effects can be considered long-term.

Concerning long-term effects of the stress-related pathway, Sirois [ 9 ] assumes that chronic procrastination causes chronic stress, which leads to chronic diseases over time. The term “chronic stress” refers to prolonged stress episodes associated with permanent tension. The instrument we used captures perceived stress over the last four weeks. Even though the perceived stress of the students in our sample was relatively stable (0.559 ≤ β ≤ 0.696), we do not know how much fluctuation occurred between each of the three occasions. However, there is some evidence suggesting that perceived stress is strongly associated with chronic stress [ 78 ]. Thus, it seems acceptable that we used perceived stress as an indicator for chronic stress in our study. For future studies, we still suggest the use of an instrument that can more accurately reflect chronic stress, for example, the Trier Inventory for Chronic Stress (TICS) [ 79 ].

It is also possible that the measurement occasions were inconveniently chosen, as they all took place in a critical academic period near the end of the semester, just before the examination period began. We chose a similar period in the semester for each occasion for the sake of comparability. However, it is possible that stress levels peaked during these preparation periods and that procrastinators procrastinated less because they had to catch up after delaying their work. This could have introduced bias into the data. Therefore, future studies should choose investigation periods closer to the beginning or the middle of a semester.

Furthermore, Sirois [ 9 ] did not clearly define “chronic disease”, although the term evidently refers to physical illnesses such as diabetes or cardiovascular diseases. Depression and anxiety symptoms, which we chose as indicators of chronic disease, represent mental health complaints that need not reach the level of a major depressive disorder or an anxiety disorder in terms of their quantity, intensity, or duration [ 40 ]. However, they can be viewed as precursors to these disorders. Therefore, given our study period of one year, it seems appropriate to use depression and anxiety symptoms as indicators of chronic disease. Over longer study periods, we would expect these mental health complaints to manifest as mental disorders. Moreover, the procrastination-health model was originally designed to be applied to physical diseases [ 3 ]; its assumptions may therefore be more applicable to physical diseases than to mental disorders. By applying parts of the model to mental health complaints, we have taken an important step towards finding out whether the model is applicable to mental disorders as well. Future studies should examine additional long-term health outcomes, both physical and psychological, which would help to determine whether trait procrastination has varying effects on different diseases over time. Furthermore, we suggest including individual vulnerability and stress factors in future studies in order to analyze the effect of (chronic) stress on (chronic) diseases in a more differentiated way.

Regarding our sample, 3,420 students took part at the first occasion, but only 392 participated three times, which results in a dropout rate of 88.5%. At the second and third occasion, invitation e-mails were only sent to participants who had indicated at the previous occasion that they would be willing to participate in a repeat survey and provided their e-mail address. This is probably one of the main reasons for our high dropout rate. Other reasons could be that the students did not receive any incentives for participating in our study and that some may have graduated between the occasions. Selective dropout analysis revealed that the mean score of procrastination was lower in the group that participated in all three waves ( n  = 392) compared to the group that participated in the first wave ( n  = 3,420). One reason for this could be that those who have a higher tendency to procrastinate were more likely to procrastinate on filling out our survey at the second and third occasion. The findings of our dropout analysis should be kept in mind when interpreting our results, as lower levels of procrastination may have eliminated an effect on perceived stress or on depression and anxiety symptoms. Additionally, across all age groups in population-representative samples, the student age group reports having the best subjective health [ 80 ]. Therefore, it is possible that they are more resilient to stress and experience less impairment of well-being than other age groups. Hence, we recommend that future studies focus on other age groups as well.

It is generally assumed that procrastination leads to lower academic performance, health impairment, and poor health-related behavior. However, evidence for negative consequences of procrastination is still limited, and it is also unclear by which mechanisms they are mediated. Consequently, the aim of our study was to examine the effect of procrastination on mental health over time and stress as a possible facilitator of this relationship. We selected the procrastination-health model as a theoretical foundation and used the stress-related pathway of the model, assuming that trait procrastination leads to (chronic) disease via (chronic) stress. We chose depression and anxiety symptoms as indicators of (chronic) disease and collected longitudinal data from students on three occasions over a one-year period. This allowed us to draw conclusions about the causal direction of the effects, which hardly any previous study examining the consequences of procrastination on (mental) health has been able to do. Our results indicate that procrastination leads to depression and anxiety symptoms over time and that perceived stress is not a mediator of this effect. We could not show that procrastination leads to perceived stress over time, nor that perceived stress leads to depression and anxiety symptoms over time. Possible explanations are that procrastination might only lead to perceived stress in the short term, for example, during preparations for end-of-semester exams, and that perceived stress may not be sufficient on its own, that is, without the presence of other risk factors, to cause depression and anxiety symptoms. Overall, we could not confirm long-term effects of trait procrastination on (chronic) disease mediated by (chronic) stress, as assumed for the stress-related pathway of the procrastination-health model.
Our study emphasizes the importance of identifying the consequences procrastination can have on health and well-being and determining by which mechanisms they are mediated. Only then will it be possible to develop interventions that can prevent negative health consequences of procrastination. Further health outcomes and possible mediators should be explored in future studies, using a similar longitudinal design.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

University Health Report at Freie Universität Berlin.

Abbreviations

CFI: Comparative fit index
DSM-5: Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition
GAD-2: Generalized Anxiety Disorder Scale-2
HSI: Heidelberger Stress Index
HPA: Hypothalamic-pituitary-adrenocortical
MLR: Robust maximum likelihood estimation
Short form of the Procrastination Questionnaire for Students
PHQ-2: Patient Health Questionnaire-2
PHQ-4: Patient Health Questionnaire-4
RMSEA: Root mean square error of approximation
SEM: Structural equation modeling
SRMR: Standardized root mean square residual
TLI: Tucker-Lewis index

Lu D, He Y, Tan Y. Gender, socioeconomic status, cultural differences, education, family size and procrastination: a sociodemographic meta-analysis. Front Psychol. 2021. https://doi.org/10.3389/fpsyg.2021.719425 .


Yan B, Zhang X. What research has been conducted on Procrastination? Evidence from a systematical bibliometric analysis. Front Psychol. 2022. https://doi.org/10.3389/fpsyg.2022.809044 .

Sirois FM, Melia-Gordon ML, Pychyl TA. I’ll look after my health, later: an investigation of procrastination and health. Pers Individ Dif. 2003;35:1167–84. https://doi.org/10.1016/S0191-8869(02)00326-4 .


Grunschel C. Akademische Prokrastination: Eine qualitative und quantitative Untersuchung von Gründen und Konsequenzen [Unpublished doctoral dissertation]: Universität Bielefeld; 2013.

Steel P. The Nature of Procrastination: a Meta-Analytic and Theoretical Review of Quintessential Self-Regulatory failure. Psychol Bull. 2007;133:65–94. https://doi.org/10.1037/0033-2909.133.1.65 .


Corkin DM, Yu SL, Lindt SF. Comparing active delay and procrastination from a self-regulated learning perspective. Learn Individ Differ. 2011;21:602–6. https://doi.org/10.1016/j.lindif.2011.07.005 .

Balkis M, Duru E. Procrastination, self-regulation failure, academic life satisfaction, and affective well-being: underregulation or misregulation form. Eur J Psychol Educ. 2016;31:439–59. https://doi.org/10.1007/s10212-015-0266-5 .

Schulz N. Procrastination und Planung – Eine Untersuchung zum Einfluss von Aufschiebeverhalten und Depressivität auf unterschiedliche Planungskompetenzen [Doctoral dissertation]: Westfälische Wilhelms-Universität Münster; 2007.

Sirois FM. Procrastination, stress, and Chronic Health conditions: a temporal perspective. In: Sirois FM, Pychyl TA, editors. Procrastination, Health, and well-being. London: Academic; 2016. pp. 67–92. https://doi.org/10.1016/B978-0-12-802862-9.00004-9 .

Harriott J, Ferrari JR. Prevalence of procrastination among samples of adults. Psychol Rep. 1996;78:611–6. https://doi.org/10.2466/pr0.1996.78.2.611 .

Ferrari JR, O’Callaghan J, Newbegin I. Prevalence of Procrastination in the United States, United Kingdom, and Australia: Arousal and Avoidance delays among adults. N Am J Psychol. 2005;7:1–6.


Ferrari JR, Díaz-Morales JF, O’Callaghan J, Díaz K, Argumedo D. Frequent behavioral Delay tendencies by adults. J Cross Cult Psychol. 2007;38:458–64. https://doi.org/10.1177/0022022107302314 .

Day V, Mensink D, O’Sullivan M. Patterns of academic procrastination. JCRL. 2000;30:120–34. https://doi.org/10.1080/10790195.2000.10850090 .

Höcker A, Engberding M, Rist F. Prokrastination: Ein Manual zur Behandlung des pathologischen Aufschiebens. 2nd ed. Göttingen: Hogrefe; 2017.

Kim KR, Seo EH. The relationship between procrastination and academic performance: a meta-analysis. Pers Individ Dif. 2015;82:26–33. https://doi.org/10.1016/j.paid.2015.02.038 .

Khalid A, Zhang Q, Wang W, Ghaffari AS, Pan F. The relationship between procrastination, perceived stress, saliva alpha-amylase level and parenting styles in Chinese first year medical students. Psychol Res Behav Manag. 2019;12:489–98. https://doi.org/10.2147/PRBM.S207430 .

Sirois FM. I’ll look after my health, later: a replication and extension of the procrastination–health model with community-dwelling adults. Pers Individ Dif. 2007;43:15–26. https://doi.org/10.1016/j.paid.2006.11.003 .

Reinecke L, Meier A, Aufenanger S, Beutel ME, Dreier M, Quiring O, et al. Permanently online and permanently procrastinating? The mediating role of internet use for the effects of trait procrastination on psychological health and well-being. New Media Soc. 2018;20:862–80. https://doi.org/10.1177/1461444816675437 .

Westgate EC, Wormington SV, Oleson KC, Lindgren KP. Productive procrastination: academic procrastination style predicts academic and alcohol outcomes. J Appl Soc Psychol. 2017;47:124–35. https://doi.org/10.1111/jasp.12417 .

Feyzi Behnagh R, Ferrari JR. Exploring 40 years on affective correlates to procrastination: a literature review of situational and dispositional types. Curr Psychol. 2022;41:1097–111. https://doi.org/10.1007/s12144-021-02653-z .

Rahimi S, Hall NC, Sticca F. Understanding academic procrastination: a longitudinal analysis of procrastination and emotions in undergraduate and graduate students. Motiv Emot. 2023. https://doi.org/10.1007/s11031-023-10010-9 .

Patrzek J, Grunschel C, Fries S. Academic procrastination: the perspective of University counsellors. Int J Adv Counselling. 2012;34:185–201. https://doi.org/10.1007/s10447-012-9150-z .

Watson D, Clark LA, Carey G. Positive and negative affectivity and their relation to anxiety and depressive disorders. J Abnorm Psychol. 1988;97:346–53. https://doi.org/10.1037//0021-843x.97.3.346 .

Cândea D-M, Szentagotai-Tătar A. Shame-proneness, guilt-proneness and anxiety symptoms: a meta-analysis. J Anxiety Disord. 2018;58:78–106. https://doi.org/10.1016/j.janxdis.2018.07.005 .

Young CM, Neighbors C, DiBello AM, Traylor ZK, Tomkins M. Shame and guilt-proneness as mediators of associations between General Causality orientations and depressive symptoms. J Soc Clin Psychol. 2016;35:357–70. https://doi.org/10.1521/jscp.2016.35.5.357 .

Stead R, Shanahan MJ, Neufeld RW. I’ll go to therapy, eventually: Procrastination, stress and mental health. Pers Individ Dif. 2010;49:175–80. https://doi.org/10.1016/j.paid.2010.03.028 .

Dow NM. Procrastination, stress, and sleep in tertiary students [Master’s thesis]: University of Canterbury; 2018.

Sirois FM, Stride CB, Pychyl TA. Procrastination and health: a longitudinal test of the roles of stress and health behaviours. Br J Health Psychol. 2023. https://doi.org/10.1111/bjhp.12658 .

Hofmann F-H, Sperth M, Holm-Hadulla RM. Psychische Belastungen Und Probleme Studierender. Psychotherapeut. 2017;62:395–402. https://doi.org/10.1007/s00278-017-0224-6 .

Liu CH, Stevens C, Wong SHM, Yasui M, Chen JA. The prevalence and predictors of mental health diagnoses and suicide among U.S. college students: implications for addressing disparities in service use. Depress Anxiety. 2019;36:8–17. https://doi.org/10.1002/da.22830 .

Aftab S, Klibert J, Holtzman N, Qadeer K, Aftab S. Schemas mediate the Link between Procrastination and Depression: results from the United States and Pakistan. J Rat-Emo Cognitive-Behav Ther. 2017;35:329–45. https://doi.org/10.1007/s10942-017-0263-5 .

Flett AL, Haghbin M, Pychyl TA. Procrastination and depression from a cognitive perspective: an exploration of the associations among Procrastinatory Automatic thoughts, rumination, and Mindfulness. J Rat-Emo Cognitive-Behav Ther. 2016;34:169–86. https://doi.org/10.1007/s10942-016-0235-1 .

Saddler CD, Sacks LA. Multidimensional perfectionism and academic procrastination: relationships with Depression in University students. Psychol Rep. 1993;73:863–71. https://doi.org/10.1177/00332941930733pt123 .

Johansson F, Rozental A, Edlund K, Côté P, Sundberg T, Onell C, et al. Associations between procrastination and subsequent Health outcomes among University students in Sweden. JAMA Netw Open. 2023. https://doi.org/10.1001/jamanetworkopen.2022.49346 .

Gusy B, Jochmann A, Lesener T, Wolter C, Blaszcyk W. „Get it done“ – schadet Aufschieben der Gesundheit? Präv Gesundheitsf. 2023;18:228–33. https://doi.org/10.1007/s11553-022-00950-4 .

Glöckner-Rist A, Engberding M, Höcker A, Rist F. Prokrastinationsfragebogen für Studierende (PFS): Zusammenstellung sozialwissenschaftlicher items und Skalen. ZIS - GESIS Leibniz Institute for the Social Sciences; 2014.

Klingsieck KB, Fries S. Allgemeine Prokrastination: Entwicklung Und Validierung Einer Deutschsprachigen Kurzskala Der General Procrastination Scale (Lay, 1986). Diagnostica. 2012;58:182–93. https://doi.org/10.1026/0012-1924/a000060 .

Lay CH. At last, my research article on procrastination. J Res Pers. 1986;20:474–95. https://doi.org/10.1016/0092-6566(86)90127-3 .

Schmidt LI, Obergfell J. Zwangsjacke Bachelor?! Stressempfinden Und Gesundheit Studierender: Der Einfluss Von Anforderungen Und Entscheidungsfreiräumen Bei Bachelor- Und Diplomstudierenden Nach Karaseks Demand-Control-Modell. Saarbrücken: VDM Verlag Dr. Müller; 2011.

Kroenke K, Spitzer RL, Williams JB, Löwe B. An Ultra-brief Screening Scale for anxiety and depression: the PHQ-4. Psychosomatics. 2009;50:613–21. https://doi.org/10.1016/S0033-3182(09)70864-3 .

Spitzer RL, Kroenke K, Williams JB. Validation and utility of a self-report version of PRIME-MD: the PHQ Primary Care Study. JAMA. 1999;282:1737–44. https://doi.org/10.1001/jama.282.18.1737 .

Kroenke K, Spitzer RL, Williams JB. The Patient Health Questionnaire-2: validity of a two-item Depression Screener. Med Care. 2003;41:1284–92.

Kroenke K, Spitzer RL, Williams JB, Monahan PO, Löwe B. Anxiety disorders in Primary Care: prevalence, impairment, Comorbidity, and detection. Ann Intern Med. 2007;146:317–25. https://doi.org/10.7326/0003-4819-146-5-200703060-00004 .

Khubchandani J, Brey R, Kotecki J, Kleinfelder J, Anderson J. The Psychometric properties of PHQ-4 depression and anxiety screening scale among College Students. Arch Psychiatr Nurs. 2016;30:457–62. https://doi.org/10.1016/j.apnu.2016.01.014 .

Löwe B, Wahl I, Rose M, Spitzer C, Glaesmer H, Wingenfeld K, et al. A 4-item measure of depression and anxiety: validation and standardization of the Patient Health Questionnaire-4 (PHQ-4) in the general population. J Affect Disorders. 2010;122:86–95. https://doi.org/10.1016/j.jad.2009.06.019 .

Boomsma A, Hoogland JJ. The robustness of LISREL modeling revisited. In: Cudeck R, Du Toit S, Sörbom D, editors. Structural equation modeling: Present and Future: a festschrift in honor of Karl Jöreskog. Lincolnwood: Scientific Software International; 2001. pp. 139–68.

Hu L, Bentler PM. Fit indices in Covariance structure modeling: sensitivity to Underparameterized Model Misspecification. Psychol Methods. 1998;3:424–53. https://doi.org/10.1037/1082-989X.3.4.424 .

Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation models: test of significance and descriptive goodness-of-fit measures. MPR. 2003;8:23–74.

Hu L, Bentler PM. Cutoff criteria for fit indexes in Covariance structure analysis: conventional criteria Versus New Alternatives. Struct Equ Model. 1999;6:1–55. https://doi.org/10.1080/10705519909540118 .

Geiser C, Eid M, Nussbeck FW, Courvoisier DS, Cole DA. Analyzing true change in Longitudinal Multitrait-Multimethod studies: application of a Multimethod Change Model to Depression and anxiety in children. Dev Psychol. 2010;46:29–45. https://doi.org/10.1037/a0017888 .

Putnick DL, Bornstein MH. Measurement invariance conventions and reporting: the state of the art and future directions for psychological research. Dev Rev. 2016;41:71–90. https://doi.org/10.1016/j.dr.2016.06.004 .

Satorra A, Bentler PM. A scaled difference chi-square test statistic for moment structure analysis. Psychometrika. 2001;66:507–14. https://doi.org/10.1007/BF02296192 .

Geiser C. Datenanalyse Mit Mplus: Eine Anwendungsorientierte Einführung. Wiesbaden: VS Verlag für Sozialwissenschaften; 2010.

Chen F, Curran PJ, Bollen KA, Kirby J, Paxton P. An empirical evaluation of the use of fixed cutoff points in RMSEA Test Statistic in Structural equation models. Sociol Methods Res. 2008;36:462–94. https://doi.org/10.1177/0049124108314720 .

Dormann C, Zapf D, Perels F. Quer- und Längsschnittstudien in der Arbeitspsychologie [Cross-sectional and longitudinal studies in occupational psychology]. In: Kleinbeck U, Schmidt K-H, editors. Enzyklopädie der Psychologie [Encyclopedia of psychology]: Themenbereich D, Serie III, Band 1, Arbeitspsychologie [Subject Area D, Series III, Volume 1, Industrial Psychology]. Göttingen: Hogrefe Verlag; 2010. pp. 923–1001.

Nunnally JC, Bernstein IH. Psychometric theory. 3rd ed. New York: McGraw-Hill; 1994.

Gignac GE, Szodorai ET. Effect size guidelines for individual differences researchers. Pers Indiv Differ. 2016;102:74–8. https://doi.org/10.1016/j.paid.2016.06.069 .

Funder DC, Ozer DJ. Evaluating effect size in Psychological Research: sense and nonsense. Adv Methods Practices Psychol Sci. 2019;2:156–68. https://doi.org/10.1177/2515245919847202 .

Fincham FD, May RW. My stress led me to procrastinate: temporal relations between perceived stress and academic procrastination. Coll Stud J. 2021;55:413–21.

Daniali H, Martinussen M, Flaten MA. A Global Meta-Analysis of Depression, anxiety, and stress before and during COVID-19. Health Psychol. 2023;42:124–38. https://doi.org/10.1037/hea0001259 .

Tice DM, Baumeister RF. Longitudinal study of procrastination, performance, stress, and Health: the costs and benefits of Dawdling. Psychol Sci. 1997;8:454–8. https://doi.org/10.1111/j.1467-9280.1997.tb00460.x .

Schraw G, Wadkins T, Olafson L. Doing the things we do: a grounded theory of academic procrastination. J Educ Psychol. 2007;99:12–25. https://doi.org/10.1037/0022-0663.99.1.12 .

Slavich GM. Life Stress and Health: a review of conceptual issues and recent findings. Teach Psychol. 2016;43:346–55. https://doi.org/10.1177/0098628316662768 .

Phillips AC, Carroll D, Der G. Negative life events and symptoms of depression and anxiety: stress causation and/or stress generation. Anxiety Stress Coping. 2015;28:357–71. https://doi.org/10.1080/10615806.2015.1005078 .

Hammen C. Stress and depression. Annu Rev Clin Psychol. 2005;1:293–319. https://doi.org/10.1146/annurev.clinpsy.1.102803.143938 .

Blazer D, Hughes D, George LK. Stressful life events and the onset of a generalized anxiety syndrome. Am J Psychiatry. 1987;144:1178–83. https://doi.org/10.1176/ajp.144.9.1178 .

Southwick SM, Charney DS. The Science of Resilience: implications for the Prevention and Treatment of Depression. Science. 2012;338:79–82. https://doi.org/10.1126/science.1222942 .

Ingram RE, Luxton DD. Vulnerability-stress models. In: Hankin BL, Abela JR, editors. Development of psychopathology: a vulnerability-stress perspective. Thousand Oaks: Sage; 2005. pp. 32–46.

Maercker A. Modelle Der Klinischen Psychologie. In: Petermann F, Maercker A, Lutz W, Stangier U, editors. Klinische psychologie – Grundlagen. Göttingen: Hogrefe; 2018. pp. 13–31.

Krause K, Freund AM. Delay or procrastination – a comparison of self-report and behavioral measures of procrastination and their impact on affective well-being. Pers Individ Dif. 2014;63:75–80. https://doi.org/10.1016/j.paid.2014.01.050 .

Grunschel C, Patrzek J, Fries S. Exploring reasons and consequences of academic procrastination: an interview study. Eur J Psychol Educ. 2013;28:841–61. https://doi.org/10.1007/s10212-012-0143-4 .

Dwyer JH. Statistical models for the social and behavioral sciences. New York: Oxford University Press; 1983.

Cohen J. A power primer. Psychol Bull. 1992;112:155–9. https://doi.org/10.1037//0033-2909.112.1.155 .

Ferguson CJ. An effect size primer: a Guide for clinicians and Researchers. Prof Psychol Res Pr. 2009;40:532–8. https://doi.org/10.1037/a0015808 .

Hinz A, Berth H, Kittel J, Singer S. Die zeitliche Stabilität (Test-Retest-Reliabilität) Von Angst Und Depressivität Bei Patienten Und in Der Allgemeinbevölkerung. Z Med Psychol. 2011;20:24–31. https://doi.org/10.3233/ZMP-2010-2012 .

Adachi P, Willoughby T. Interpreting effect sizes when controlling for stability effects in longitudinal autoregressive models: implications for psychological science. Eur J Dev Psychol. 2015;12:116–28. https://doi.org/10.1080/17405629.2014.963549 .

Dormann C, Griffin M. Optimal time lags in Panel studies. Psychol Methods. 2015;20:489–505. https://doi.org/10.1037/met0000041 .

Weckesser LJ, Dietz F, Schmidt K, Grass J, Kirschbaum C, Miller R. The psychometric properties and temporal dynamics of subjective stress, retrospectively assessed by different informants and questionnaires, and hair cortisol concentrations. Sci Rep. 2019. https://doi.org/10.1038/s41598-018-37526-2 .

Schulz P, Schlotz W, Becker P. TICS: Trierer Inventar Zum chronischen stress. Göttingen: Hogrefe; 2004.

Heidemann C, Scheidt-Nave C, Beyer A-K, Baumert J, Thamm R, Maier B, et al. Health situation of adults in Germany - results for selected indicators from GEDA 2019/2020-EHIS. J Health Monit. 2021;6:3–25. https://doi.org/10.25646/8459 .

Acknowledgements

Not applicable.

Open Access Funding provided by Freie Universität Berlin.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Division of Prevention and Psychosocial Health Research, Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, 14195, Berlin, Germany

Anna Jochmann, Burkhard Gusy, Tino Lesener & Christine Wolter

Contributions

Conceptualization: A.J., B.G., T.L.; methodology: B.G., A.J.; validation: B.G.; formal analysis: A.J., B.G.; investigation: C.W., T.L., B.G.; data curation: C.W., T.L., B.G.; writing–original draft preparation: A.J., B.G.; writing–review and editing: A.J., T.L., B.G., C.W.; visualization: A.J., B.G.; supervision: B.G., T.L.; project administration: C.W., T.L., B.G. All authors contributed to the article and approved the submitted version.

Corresponding authors

Correspondence to Anna Jochmann or Burkhard Gusy.

Ethics declarations

Ethics approval and consent to participate

This study was performed in line with the principles of the Declaration of Helsinki. Ethical approval was obtained from the Ethics Committee of the Department of Education and Psychology, Freie Universität Berlin. All methods were carried out in accordance with relevant guidelines and regulations. The participants provided their written informed consent to participate in this study.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Selective dropout analysis and correlation matrix of the manifest variables

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Jochmann, A., Gusy, B., Lesener, T. et al. Procrastination, depression and anxiety symptoms in university students: a three-wave longitudinal study on the mediating role of perceived stress. BMC Psychol 12, 276 (2024). https://doi.org/10.1186/s40359-024-01761-2

Received: 25 May 2023

Accepted: 02 May 2024

Published: 16 May 2024

DOI: https://doi.org/10.1186/s40359-024-01761-2


  • Student health
  • Longitudinal study

BMC Psychology

ISSN: 2050-7283
