
Nuffield Department of Primary Care Health Sciences, University of Oxford

Tips for a qualitative dissertation

Veronika Williams

17 October 2017

Tips for students

This blog is part of a series for Evidence-Based Health Care MSc students undertaking their dissertations.


Undertaking an MSc dissertation in Evidence-Based Health Care (EBHC) may be your first hands-on experience of doing qualitative research. I chatted to Dr. Veronika Williams, an experienced qualitative researcher, and tutor on the EBHC programme, to find out her top tips for producing a high-quality qualitative EBHC thesis.

1) Make the switch from a quantitative to a qualitative mindset

It’s not just about replacing numbers with words. Doing qualitative research requires you to adopt a different way of seeing and interpreting the world around you. Veronika asks her students to reflect on positivist and interpretivist approaches: if you come from a scientific or medical background, positivism is often the unacknowledged status quo. Be open to considering that there are alternative ways to generate and understand knowledge.

2) Reflect on your role

Quantitative research strives to produce “clean” data, unbiased by the context in which it was generated. With qualitative methods, this is neither possible nor desirable. You should reflect on how your background and personal views shape the way you collect and analyse your data. This will not only add to the transparency of your work but will also help you interpret your findings.

3)  Don’t forget the theory

Qualitative researchers use theories as a lens through which they understand the world around them. Veronika suggests that students consider the theoretical underpinning of their own research at the earliest stages. You can read an article about why theories are useful in qualitative research here.

4) Think about depth rather than breadth

Qualitative research is all about developing a deep and insightful understanding of the phenomenon or concept you are studying. Be realistic about what you can achieve given the time constraints of an MSc. Veronika suggests that collecting and analysing a smaller dataset well is preferable to producing a superficial, rushed analysis of a larger dataset.

5) Blur the boundaries between data collection, analysis and writing up

Veronika strongly recommends keeping a research diary or using memos to jot down your ideas as your research progresses. Not only do these entries add to your audit trail, they will also contribute to your first draft and to the process of moving towards theoretical thinking. Qualitative researchers move back and forth between their dataset and manuscript as their ideas develop. This enriches their understanding and allows emerging theories to be explored.

6) Move beyond the descriptive

When analysing interviews, for example, it can be tempting to think that having coded your transcripts you are nearly there. This is not the case! You need to move beyond descriptive codes to conceptual themes and theoretical thinking in order to produce a high-quality thesis. Veronika warns against falling into the pitfall of thinking that writing up amounts to “two interviewees said this whilst three interviewees said that”.

7) It’s not just about the average experience

When analysing your data, consider the outliers or negative cases, for example, those that found the intervention unacceptable.  Although in the minority, these respondents will often provide more meaningful insight into the phenomenon or concept you are trying to study.

8) Bounce ideas

Veronika recommends sharing your emerging ideas and findings with someone else, perhaps someone with a different background or perspective. This isn’t about getting to the “right answer”; rather, it offers you the chance to refine your thinking. Be sure, though, to fully acknowledge their contribution in your thesis.

9) Be selective

It can be a challenge to meet the dissertation word limit. It won’t be possible to present all the themes generated by your dataset, so focus! Use quotes from across your dataset that best encapsulate the themes you are presenting. Display additional data in the appendix. For example, Veronika suggests illustrating how you moved from your coding framework to your themes.

10) Don’t panic!

There will be a stage during analysis and write-up when it seems undoable. Unlike quantitative analysis, which begins with a clear plan, qualitative research is more of a journey. Everything will fall into place by the end. Be sure, though, to allow yourself enough time to make sense of the rich data qualitative research generates.

Related course:

Qualitative research methods.

Short Course


Dissertations and research projects


Developing a theoretical framework

Reflecting on your position, extended literature reviews, presenting qualitative data.


What is a theoretical framework?

Developing a theoretical framework for your dissertation is one of the key elements of a qualitative research project. Through writing your literature review, you are likely to have identified either a problem that needs ‘fixing’ or a gap that your research may begin to fill.

The theoretical framework is your toolbox. In the toolbox are your handy tools: a set of theories, concepts, ideas and hypotheses that you will use to build a solution to the research problem or gap you have identified.

The methodology is the instruction manual: the procedure and steps you have taken, using your chosen tools, to tackle the research problem.

Why do I need a theoretical framework?

Developing a theoretical framework shows that you have thought critically about the different ways to approach your topic, and that you have made a well-reasoned and evidenced decision about which approach will work best. Theoretical frameworks are also necessary for solving complex problems or issues from the literature, showing that you have the skills to think creatively and improvise to answer your research questions. They also allow researchers to establish new theories and approaches that future research may go on to develop.

How do I create a theoretical framework for my dissertation?

First, select your tools. You are likely to need a variety of tools in qualitative research – different theories, models or concepts – to help you tackle different parts of your research question.  

An overview of what to include in a theoretical framework: theories, models, ideologies, concepts, assumptions and perspectives.

When deciding what tools would be best for the job of answering your research questions or problem, explore what existing research in your area has used. You may find that there is a ‘standard toolbox’ for qualitative research in your field that you can borrow from or apply to your own research.

You will need to justify why your chosen tools are best for the job of answering your research questions, at what stage they are most relevant, and how they relate to each other. Some theories or models will neatly fit together and appear in the toolboxes of other researchers. However, you may wish to incorporate a model or idea that is not typical for your research area – the ‘odd one out’ in your toolbox. If this is the case, make sure you justify and account for why it is useful to you, and look for ways that it can be used in partnership with the other tools you are using.

You should also be honest about limitations, or where you need to improvise (for example, if the ‘right’ tool or approach doesn’t exist in your area).

This video from the Skills Centre includes an overview and example of how you might create a theoretical framework for your dissertation:

How do I choose the 'right' approach?

When designing your framework and choosing what to include, it can often be difficult to know if you’ve chosen the ‘right’ approach for your research questions. One way to check this is to look for consistency between your objectives, the literature in your framework, and your overall ethos for the research. This means ensuring that the literature you have used not only contributes to answering your research objectives, but that you also use theories and models that are true to your beliefs as a researcher.

Reflecting on your values and your overall ambition for the project can be a helpful step in making these decisions, as it can help you to fully connect your methodology and methods to your research aims.

Should I reflect on my position as a researcher?

If you feel your position as a researcher has influenced your choice of methods or procedure in any way, the methodology is a good place to reflect on this. Positionality acknowledges that no researcher is entirely objective: we are all, to some extent, influenced by prior learning, experiences, knowledge, and personal biases. This is particularly true in qualitative research or practice-based research, where the student is acting as a researcher in their own workplace, where they are otherwise considered a practitioner/professional. It's also important to reflect on your positionality if you belong to the same community as your participants, where this is the grounds for their involvement in the research (e.g. you are a mature student interviewing other mature learners about their experiences in higher education).

The following questions can help you to reflect on your positionality and gauge whether this is an important section to include in your dissertation (for some people, this section isn’t necessary or relevant):

  • How might my personal history influence how I approach the topic?
  • How am I positioned in relation to this knowledge? Am I being influenced by prior learning or knowledge from outside of this course?
  • How does my gender/social class/ ethnicity/ culture influence my positioning in relation to this topic?
  • Do I share any attributes with my participants? Are we part of a shared community? How might this have influenced our relationship and my role in interviews/observations?
  • Am I invested in the outcomes on a personal level? Who is this research for and who will feel the benefits?
What is an extended literature review?

One option for qualitative projects is to write an extended literature review. This type of project does not require you to collect any new data. Instead, you should focus on synthesising a broad range of literature to offer a new perspective on a research problem or question.

The main difference between an extended literature review and a dissertation where primary data is collected is in the presentation of the methodology, results and discussion sections. This is because extended literature reviews do not actively involve participants or primary data collection, so there is no need to outline a procedure for data collection (the methodology) or to present and interpret ‘data’ (in the form of interview transcripts, numerical data, observations etc.). You will have much more freedom to decide which sections of the dissertation should be combined, and whether new chapters or sections should be added.

Here is an overview of a common structure for an extended literature review:

A structure for the extended literature review, showing the results divided into multiple themed chapters.

Introduction

  • Provide background information and context to set the ‘backdrop’ for your project.
  • Explain the value and relevance of your research in this context. Outline what you hope to contribute with your dissertation.
  • Clarify a specific area of focus.
  • Introduce your research aims (or problem) and objectives.

Literature review

You will need to write a short, overview literature review to introduce the main theories, concepts and key research areas that you will explore in your dissertation. This set of texts – which may be theoretical, research-based, practice-based or policies – form your theoretical framework. In other words, by bringing these texts together in the literature review, you are creating a lens that you can then apply to more focused examples or scenarios in your discussion chapters.

Methodology

As you will not be collecting primary data, your methodology will be quite different from a typical dissertation. You will need to set out the process and procedure you used to find and narrow down your literature. This is also known as a search strategy.

Including your search strategy

A search strategy explains how you have narrowed down your literature to identify key studies and areas of focus. This often takes the form of a search strategy table, included as an appendix at the end of the dissertation. If included, this section takes the place of the traditional 'methodology' section.

If you choose to include a search strategy table, you should also give an overview of your reading process in the main body of the dissertation.  Think of this as a chronology of the practical steps you took and your justification for doing so at each stage, such as:

  • Your key terms, alternatives and synonyms, and any terms that you chose to exclude;
  • Your choice and combination of databases;
  • Your inclusion/exclusion criteria, when they were applied and why, including filters such as language of publication, date, and country of origin;
  • The terms you combined to form search phrases and your use of Boolean searching (AND, OR, NOT);
  • Your use of citation searching (selecting articles from the bibliography of a chosen journal article to further your search);
  • Your use of any search models, such as PICO and SPIDER, to help shape your approach.

Search strategy template: a simple template for recording your literature searching. This can be included as an appendix to show your search strategy.
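As an illustration, a search phrase typically combines synonyms with OR, links separate concepts with AND, and excludes unwanted topics with NOT. The topic terms below are purely hypothetical, echoing the mature-student example earlier in this guide:

```
("mature student*" OR "adult learner*")
  AND ("higher education" OR universit*)
  AND (experience* OR perception*)
  NOT "secondary school"
```

Note that truncation symbols (the asterisk) and exact-phrase quotation marks behave differently across databases, so check the search help pages for each database you use and record any variations in your search strategy table.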

Discussion

The discussion section of an extended literature review is the most flexible in terms of structure. Think of this section as a series of short case studies or ‘windows’ on your research. In this section you will apply the theoretical framework you formed in the literature review – a combination of theories, models and ideas that explain your approach to the topic – to a series of different examples and scenarios. These are usually presented as separate discussion ‘chapters’ in the dissertation, in an order that you feel best fits your argument.

Think about an order for these discussion sections or chapters that helps to tell the story of your research. One common approach is to structure these sections by common themes or concepts that help to draw your sources together. You might also opt for a chronological structure if your dissertation aims to show change or development over time. Another option is to deliberately show where there is a lack of chronology or narrative across your case studies, by ordering them in a fragmentary order! You will be able to reflect upon the structure of these chapters elsewhere in the dissertation, explaining and defending your decision in the methodology and conclusion.

Conclusion

  • A summary of your key findings – what you have concluded from your research, and how far you have been able to successfully answer your research questions.

  • Recommendations – for improvements to your own study, for future research in the area, and for your field more widely.
  • Emphasise your contributions to knowledge and what you have achieved.

Alternative structure

Depending on your research aims, and whether you are working with a case-study type approach (where each section of the dissertation considers a different example or concept through the lens established in your literature review), you might opt for one of the following structures:

Splitting the literature review across different chapters:


This structure allows you to pull apart the traditional literature review, introducing it little by little with each of your themed chapters. This approach works well for dissertations that attempt to show change or difference over time, as the relevant literature for that section or period can be introduced gradually to the reader.

Whichever structure you opt for, remember to explain and justify your approach. A marker will be interested in why you decided on your chosen structure, what it allows you to achieve or brings to the project, and what alternatives you considered and rejected in the planning process.

In qualitative studies, your results are often presented alongside the discussion, as it is difficult to include this data in a meaningful way without explanation and interpretation. In the discussion section, aim to structure your work thematically, moving through the key concepts or ideas that have emerged from your qualitative data. Use extracts from your data collection (interviews, focus groups, observations) to illustrate where these themes are most prominent, and refer back to the sources from your literature review to help draw conclusions.

Here's an example of how your data could be presented in paragraph format in this section:

Example from ‘Reporting and discussing your findings’, Monash University.

  • Last Updated: Apr 17, 2024 1:52 PM
  • URL: https://libguides.shu.ac.uk/researchprojects



Chapter 11. Interviewing

Introduction

Interviewing people is at the heart of qualitative research. It is not merely a way to collect data but an intrinsically rewarding activity—an interaction between two people that holds the potential for greater understanding and interpersonal development. Unlike many of our daily interactions with others that are fairly shallow and mundane, sitting down with a person for an hour or two and really listening to what they have to say is a profound and deep enterprise, one that can provide not only “data” for you, the interviewer, but also self-understanding and a feeling of being heard for the interviewee. I always approach interviewing with a deep appreciation for the opportunity it gives me to understand how other people experience the world. That said, there is not one kind of interview but many, and some of these are shallower than others. This chapter will provide you with an overview of interview techniques but with a special focus on the in-depth semistructured interview guide approach, which is the approach most widely used in social science research.

An interview can be variously defined as “a conversation with a purpose” (Lune and Berg 2018) and an attempt to understand the world from the point of view of the person being interviewed: “to unfold the meaning of peoples’ experiences, to uncover their lived world prior to scientific explanations” (Kvale 2007). It is a form of active listening in which the interviewer steers the conversation to subjects and topics of interest to their research but also manages to leave enough space for those interviewed to say surprising things. Achieving that balance is a tricky thing, which is why most practitioners believe interviewing is both an art and a science. In my experience as a teacher, there are some students who are “natural” interviewers (often they are introverts), but anyone can learn to conduct interviews, and everyone, even those of us who have been doing this for years, can improve their interviewing skills. This might be a good time to highlight the fact that the interview is a product between interviewer and interviewee and that this product is only as good as the rapport established between the two participants. Active listening is the key to establishing this necessary rapport.

Patton (2002) makes the argument that we use interviews because there are certain things that are not observable. In particular, “we cannot observe feelings, thoughts, and intentions. We cannot observe behaviors that took place at some previous point in time. We cannot observe situations that preclude the presence of an observer. We cannot observe how people have organized the world and the meanings they attach to what goes on in the world. We have to ask people questions about those things” (341).

Types of Interviews

There are several distinct types of interviews. Imagine a continuum (figure 11.1). On one side are unstructured conversations—the kind you have with your friends. No one is in control of those conversations, and what you talk about is often random—whatever pops into your head. There is no secret, underlying purpose to your talking—if anything, the purpose is to talk to and engage with each other, and the words you use and the things you talk about are a little beside the point. An unstructured interview is a little like this informal conversation, except that one of the parties to the conversation (you, the researcher) does have an underlying purpose, and that is to understand the other person. You are not friends speaking for no purpose, but it might feel just as unstructured to the “interviewee” in this scenario. That is one side of the continuum. On the other side are fully structured and standardized survey-type questions asked face-to-face. Here it is very clear who is asking the questions and who is answering them. This doesn’t feel like a conversation at all! A lot of people new to interviewing have this (erroneously!) in mind when they think about interviews as data collection. Somewhere in the middle of these two extreme cases is the “semistructured” interview, in which the researcher uses an “interview guide” to gently move the conversation to certain topics and issues. This is the primary form of interviewing for qualitative social scientists and will be what I refer to as interviewing for the rest of this chapter, unless otherwise specified.

Figure 11.1. Types of interviews: unstructured conversations, semistructured interview, structured interview, survey questions.

Informal (unstructured conversations). This is the most “open-ended” approach to interviewing. It is particularly useful in conjunction with observational methods (see chapters 13 and 14). There are no predetermined questions. Each interview will be different. Imagine you are researching the Oregon Country Fair, an annual event in Veneta, Oregon, that includes live music, artisan craft booths, face painting, and a lot of people walking through forest paths. It’s unlikely that you will be able to get a person to sit down with you and talk intensely about a set of questions for an hour and a half. But you might be able to sidle up to several people and engage with them about their experiences at the fair. You might have a general interest in what attracts people to these events, so you could start a conversation by asking strangers why they are here or why they come back every year. That’s it. Then you have a conversation that may lead you anywhere. Maybe one person tells a long story about how their parents brought them here when they were a kid. A second person talks about how this is better than Burning Man. A third person shares their favorite traveling band. And yet another enthuses about the public library in the woods. During your conversations, you also talk about a lot of other things—the weather, the utilikilts for sale, the fact that a favorite food booth has disappeared. It’s all good. You may not be able to record these conversations. Instead, you might jot down notes on the spot and then, when you have the time, write down as much as you can remember about the conversations in long fieldnotes. Later, you will have to sit down with these fieldnotes and try to make sense of all the information (see chapters 18 and 19).

Interview guide (semistructured interview). This is the primary type employed by social science qualitative researchers. The researcher creates an “interview guide” in advance, which she uses in every interview. In theory, every person interviewed is asked the same questions. In practice, every person interviewed is asked mostly the same topics but not always the same questions, as the whole point of a “guide” is that it guides the direction of the conversation but does not command it. The guide is typically between five and ten questions or question areas, sometimes with suggested follow-ups or prompts. For example, one question might be “What was it like growing up in Eastern Oregon?” with prompts such as “Did you live in a rural area? What kind of high school did you attend?” to help the conversation develop. These interviews generally take place in a quiet place (not a busy walkway during a festival) and are recorded. The recordings are transcribed, and those transcriptions then become the “data” that is analyzed (see chapters 18 and 19). The conventional length of one of these types of interviews is between one hour and two hours, optimally ninety minutes. Less than one hour doesn’t allow for much development of questions and thoughts, and two hours (or more) is a lot of time to ask someone to sit still and answer questions. If you have a lot of ground to cover, and the person is willing, I highly recommend two separate interview sessions, with the second session being slightly shorter than the first (e.g., ninety minutes the first day, sixty minutes the second). There are lots of good reasons for this, but the most compelling one is that this allows you to listen to the first day’s recording and catch anything interesting you might have missed in the moment and so develop follow-up questions that can probe further. This also allows the person being interviewed to have some time to think about the issues raised in the interview and go a little deeper with their answers.

Standardized questionnaire with open responses (structured interview). This is the type of interview a lot of people have in mind when they hear “interview”: a researcher comes to your door with a clipboard and proceeds to ask you a series of questions. These questions are all the same whoever answers the door; they are “standardized.” Both the wording and the exact order are important, as people’s responses may vary depending on how and when a question is asked. These are qualitative only in that the questions allow for “open-ended responses”: people can say whatever they want rather than select from a predetermined menu of responses. For example, a survey I collaborated on included this open-ended response question: “How does class affect one’s career success in sociology?” Some of the answers were simply one word long (e.g., “debt”), and others were long statements with stories and personal anecdotes. It is possible to be surprised by the responses. Although it’s a stretch to call this kind of questioning a conversation, it does allow the person answering the question some degree of freedom in how they answer.

Survey questionnaire with closed responses (not an interview!). Standardized survey questions with specific answer options (e.g., closed responses) are not really interviews at all, and they do not generate qualitative data. For example, if we included five options for the question “How does class affect one’s career success in sociology?”—(1) debt, (2) social networks, (3) alienation, (4) family doesn’t understand, (5) type of grad program—we leave no room for surprises at all. Instead, we would most likely look at patterns around these responses, thinking quantitatively rather than qualitatively (e.g., using regression analysis techniques, we might find that working-class sociologists were twice as likely to bring up alienation). It can sometimes be confusing for new students because the very same survey can include both closed-ended and open-ended questions. The key is to think about how these will be analyzed and to what level surprises are possible. If your plan is to turn all responses into a number and make predictions about correlations and relationships, you are no longer conducting qualitative research. This is true even if you are conducting this survey face-to-face with a real live human. Closed-response questions are not conversations of any kind, purposeful or not.

In summary, the semistructured interview guide approach is the predominant form of interviewing for social science qualitative researchers because it allows a high degree of freedom of responses from those interviewed (thus allowing for novel discoveries) while still maintaining some connection to a research question area or topic of interest. The rest of the chapter assumes the employment of this form.

Creating an Interview Guide

Your interview guide is the instrument used to bridge your research question(s) and what the people you are interviewing want to tell you. Unlike a standardized questionnaire, the questions actually asked do not need to be exactly what you have written down in your guide. The guide is meant to create space for those you are interviewing to talk about the phenomenon of interest, but sometimes you are not even sure what that phenomenon is until you start asking questions. A priority in creating an interview guide is to ensure it offers space. One of the worst mistakes is to create questions that are so specific that the person answering them will not stray. Relatedly, questions that sound “academic” will shut down a lot of respondents. A good interview guide invites respondents to talk about what is important to them, not feel like they are performing or being evaluated by you.

Good interview questions should not sound like your “research question” at all. For example, let’s say your research question is “How do patriarchal assumptions influence men’s understanding of climate change and responses to climate change?” It would be worse than unhelpful to ask a respondent, “How do your assumptions about the role of men affect your understanding of climate change?” You need to unpack this into manageable nuggets that pull your respondent into the area of interest without leading him anywhere. You could start by asking him what he thinks about climate change in general. Or, even better, whether he has any concerns about heatwaves or increased tornadoes or polar icecaps melting. Once he starts talking about that, you can ask follow-up questions that bring in issues around gendered roles, perhaps asking if he is married (to a woman) and whether his wife shares his thoughts and, if not, how they negotiate that difference. The fact is, you won’t really know the right questions to ask until he starts talking.

There are several distinct types of questions that can be used in your interview guide, either as main questions or as follow-up probes. If you remember that the point is to leave space for the respondent, you will craft a much more effective interview guide! You will also want to think about the place of time in both the questions themselves (past, present, future orientations) and the sequencing of the questions.

Researcher Note

Suggestion: As you read the next three sections (types of questions, temporality, question sequence), have in mind a particular research question, and try to draft questions and sequence them in a way that opens space for a discussion that helps you answer your research question.

Type of Questions

Experience and behavior questions ask about what a respondent does regularly (their behavior) or has done (their experience). These are relatively easy questions for people to answer because they appear more “factual” and less subjective. This makes them good opening questions. For the study on climate change above, you might ask, “Have you ever experienced an unusual weather event? What happened?” Or “You said you work outside? What is a typical summer workday like for you? How do you protect yourself from the heat?”

Opinion and values questions , in contrast, get inside the minds of those you are interviewing. “Do you think climate change is real? Who or what is responsible for it?” are two such questions. Note that you don’t have to literally ask, “What is your opinion of X?” but you can find a way to ask the specific question relevant to the conversation you are having. These questions are a bit trickier to ask because the answers you get may depend in part on how your respondent perceives you and whether they want to please you or not. We’ve talked a fair amount about being reflective. Here is another place where this comes into play. You need to be aware of the effect your presence might have on the answers you are receiving and adjust accordingly. If you are a woman who is perceived as liberal asking a man who identifies as conservative about climate change, there is a lot of subtext that can be going on in the interview. There is no one right way to resolve this, but you must at least be aware of it.

Feeling questions are questions that ask respondents to draw on their emotional responses. It’s pretty common for academic researchers to forget that we have bodies and emotions, but people’s understandings of the world often operate at this affective level, sometimes unconsciously or barely consciously. It is a good idea to include questions that leave space for respondents to remember, imagine, or relive emotional responses to particular phenomena. “What was it like when you heard your cousin’s house burned down in that wildfire?” doesn’t explicitly use any emotion words, but it allows your respondent to remember what was probably a pretty emotional day. And if they respond in an emotionally neutral way, that is pretty interesting data too. Note that asking someone “How do you feel about X” is not always going to evoke an emotional response, as they might simply turn around and respond with “I think that…” It is better to craft a question that actually pushes the respondent into the affective category. This might be a specific follow-up to an experience and behavior question—for example, “You just told me about your daily routine during the summer heat. Do you worry it is going to get worse?” or “Have you ever been afraid it will be too hot to get your work accomplished?”

Knowledge questions ask respondents what they actually know about something factual. We have to be careful when we ask these types of questions so that respondents do not feel like we are evaluating them (which would shut them down), but, for example, it is helpful to know when you are having a conversation about climate change that your respondent does in fact know that unusual weather events have increased and that these have been attributed to climate change! Asking these questions can set the stage for deeper questions and can ensure that the conversation makes the same kind of sense to both participants. For example, a conversation about political polarization can be put back on track once you realize that the respondent doesn’t really have a clear understanding that there are two parties in the US. Instead of asking a series of questions about Republicans and Democrats, you might shift your questions to talk more generally about political disagreements (e.g., “people against abortion”). And sometimes what you do want to know is the level of knowledge about a particular program or event (e.g., “Are you aware you can discharge your student loans through the Public Service Loan Forgiveness program?”).

Sensory questions call on all senses of the respondent to capture deeper responses. These are particularly helpful in sparking memory. “Think back to your childhood in Eastern Oregon. Describe the smells, the sounds…” Or you could use these questions to help a person access the full experience of a setting they customarily inhabit: “When you walk through the doors to your office building, what do you see? Hear? Smell?” As with feeling questions, these questions often supplement experience and behavior questions. They are another way of allowing your respondent to report fully and deeply rather than remain on the surface.

Creative questions employ illustrative examples, suggested scenarios, or simulations to get respondents to think more deeply about an issue, topic, or experience. There are many options here. In The Trouble with Passion , Erin Cech (2021) provides a scenario in which “Joe” is trying to decide whether to stay at his decent but boring computer job or follow his passion by opening a restaurant. She asks respondents, “What should Joe do?” Their answers illuminate the attraction of “passion” in job selection. In my own work, I have used a news story about an upwardly mobile young man who no longer has time to see his mother and sisters to probe respondents’ feelings about the costs of social mobility. Jessi Streib and Betsy Leondar-Wright have used single-page cartoon “scenes” to elicit evaluations of potential racial discrimination, sexual harassment, and classism. Barbara Sutton (2010) has employed lists of words (“strong,” “mother,” “victim”) on notecards she fans out and asks her female respondents to select and discuss.

Background/Demographic Questions

You most definitely will want to know more about the person you are interviewing in terms of conventional demographic information, such as age, race, gender identity, occupation, and educational attainment. These are not questions that normally open up inquiry. [1] For this reason, my practice has been to include a separate “demographic questionnaire” sheet that I ask each respondent to fill out at the conclusion of the interview. Only include those aspects that are relevant to your study. For example, if you are not exploring religion or religious affiliation, do not include questions about a person’s religion on the demographic sheet. See the example provided at the end of this chapter.

Temporality

Any type of question can have a past, present, or future orientation. For example, if you are asking a behavior question about workplace routine, you might ask the respondent to talk about past work, present work, and ideal (future) work. Similarly, if you want to understand how people cope with natural disasters, you might ask your respondent how they felt then during the wildfire and now in retrospect and whether and to what extent they have concerns for future wildfire disasters. It’s a relatively simple suggestion—don’t forget to ask about past, present, and future—but it can have a big impact on the quality of the responses you receive.

Question Sequence

Having a list of good questions or good question areas is not enough to make a good interview guide. You will want to pay attention to the order in which you ask your questions. Even though any one respondent can derail this order (perhaps by jumping to answer a question you haven’t yet asked), a good advance plan is always helpful. When thinking about sequence, remember that your goal is to get your respondent to open up to you and to say things that might surprise you. To establish rapport, it is best to start with nonthreatening questions. Asking about the present is often the safest place to begin, followed by the past (they have to know you a little bit to get there), and lastly, the future (talking about hopes and fears requires the most rapport). To allow for surprises, it is best to move from very general questions to more particular questions only later in the interview. This ensures that respondents have the freedom to bring up the topics that are relevant to them rather than feel like they are constrained to answer you narrowly. For example, refrain from asking about particular emotions until these have come up previously—don’t lead with them. Often, your more particular questions will emerge only during the course of the interview, tailored to what is emerging in conversation.

Once you have a set of questions, read through them aloud and imagine you are being asked the same questions. Does the set of questions have a natural flow? Would you be willing to answer the very first question to a total stranger? Does your sequence establish facts and experiences before moving on to opinions and values? Did you include prefatory statements, where necessary; transitions; and other announcements? These can be as simple as “Hey, we talked a lot about your experiences as a barista while in college.… Now I am turning to something completely different: how you managed friendships in college.” That is an abrupt transition, but it has been softened by your acknowledgment of that.

Probes and Flexibility

Once you have the interview guide, you will also want to leave room for probes and follow-up questions. As in the sample probe included here, you can write out the obvious probes and follow-up questions in advance. You might not need them, as your respondent might anticipate them and include full responses to the original question. Or you might need to tailor them to how your respondent answered the question. Some common probes and follow-up questions include asking for more details (When did that happen? Who else was there?), asking for elaboration (Could you say more about that?), asking for clarification (Does that mean what I think it means or something else? I understand what you mean, but someone else reading the transcript might not), and asking for contrast or comparison (How did this experience compare with last year’s event?). “Probing is a skill that comes from knowing what to look for in the interview, listening carefully to what is being said and what is not said, and being sensitive to the feedback needs of the person being interviewed” ( Patton 2002:374 ). It takes work! And energy. I and many other interviewers I know report feeling emotionally and even physically drained after conducting an interview. You are tasked with active listening and rearranging your interview guide as needed on the fly. If you only ask the questions written down in your interview guide with no deviations, you are doing it wrong. [2]

The Final Question

Every interview guide should include a very open-ended final question that allows for the respondent to say whatever it is they have been dying to tell you but you’ve forgotten to ask. About half the time they are tired too and will tell you they have nothing else to say. But incredibly, some of the most honest and complete responses take place here, at the end of a long interview. You have to realize that the person being interviewed is often discovering things about themselves as they talk to you and that this process of discovery can lead to new insights for them. Making space at the end is therefore crucial. Be sure you convey that you actually do want them to tell you more, that the offer of “anything else?” is not read as an empty convention where the polite response is no. Here is where you can pull from that active listening and tailor the final question to the particular person. For example, “I’ve asked you a lot of questions about what it was like to live through that wildfire. I’m wondering if there is anything I’ve forgotten to ask, especially because I haven’t had that experience myself” is a much more inviting final question than “Great. Anything you want to add?” It’s also helpful to convey to the person that you have the time to listen to their full answer, even if the allotted time is at an end. After all, there are no more questions to ask, so the respondent knows exactly how much time is left. Do them the courtesy of listening to them!

Conducting the Interview

Once you have your interview guide, you are on your way to conducting your first interview. I always practice my interview guide with a friend or family member. I do this even when the questions don’t make perfect sense for them, as it still helps me realize which questions make no sense, are poorly worded (too academic), or don’t follow sequentially. I also practice the routine I will use for interviewing, which goes something like this:

  • Introduce myself and reintroduce the study
  • Provide consent form and ask them to sign and retain/return copy
  • Ask if they have any questions about the study before we begin
  • Ask if I can begin recording
  • Ask questions (from interview guide)
  • Turn off the recording device
  • Ask if they are willing to fill out my demographic questionnaire
  • Collect questionnaire and, without looking at the answers, place in same folder as signed consent form
  • Thank them and depart

A note on remote interviewing: Interviews have traditionally been conducted face-to-face in a private or quiet public setting. You don’t want a lot of background noise, as this will make transcriptions difficult. During the recent global pandemic, many interviewers, myself included, learned the benefits of interviewing remotely. Although face-to-face is still preferable for many reasons, Zoom interviewing is not a bad alternative, and it does allow more interviews across great distances. Zoom also includes automatic transcription, which significantly cuts down on the time it normally takes to convert our conversations into “data” to be analyzed. These automatic transcriptions are not perfect, however, and you will still need to listen to the recording and clarify and clean up the transcription. Nor do automatic transcriptions include notations of body language or change of tone, which you may want to include. When interviewing remotely, you will want to collect the consent form before you meet: ask them to read, sign, and return it as an email attachment. I think it is better to ask for the demographic questionnaire after the interview, but because some respondents may never return it then, it is probably best to ask for this at the same time as the consent form, in advance of the interview.
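Some of this transcript cleanup can be partly mechanized. The sketch below is only an illustration in Python: the line format it assumes ("HH:MM:SS Speaker: text") is invented for the example, and real exports (such as Zoom's .vtt files) look different, so you would adapt the pattern to whatever your tool actually produces.

```python
import re

# Hypothetical raw export format: "HH:MM:SS Speaker: text" per line.
# Real transcription exports differ; adjust this pattern to your tool's format.
LINE = re.compile(r"^\d{2}:\d{2}:\d{2}\s+(?P<speaker>[^:]+):\s*(?P<text>.+)$")

def merge_turns(raw_lines):
    """Strip timestamps and merge consecutive lines by the same speaker into one turn."""
    turns = []
    for line in raw_lines:
        m = LINE.match(line.strip())
        if not m:
            continue  # skip headers, blank lines, and other non-speech cues
        speaker, text = m.group("speaker"), m.group("text")
        if turns and turns[-1][0] == speaker:
            # Same speaker as the previous line: append to the current turn.
            turns[-1] = (speaker, turns[-1][1] + " " + text)
        else:
            turns.append((speaker, text))
    return turns

raw = [
    "00:00:01 Interviewer: Thanks for making time today.",
    "00:00:04 Respondent: Of course.",
    "00:00:05 Respondent: Happy to help.",
]
print(merge_turns(raw))
```

Merging consecutive lines from the same speaker into single turns makes the transcript easier to read and code; notations of tone and body language still have to be added by hand while listening to the recording.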

What should you bring to the interview? I would recommend bringing two copies of the consent form (one for you and one for the respondent), a demographic questionnaire, a manila folder in which to place the signed consent form and filled-out demographic questionnaire, a printed copy of your interview guide (I print with three-inch right margins so I can jot down notes on the page next to relevant questions), a pen, a recording device, and water.

After the interview, you will want to secure the signed consent form in a locked filing cabinet (if in print) or a password-protected folder on your computer. Using Excel or a similar program that allows tables/spreadsheets, create an identifying number for your interview that links to the consent form without using the name of your respondent. For example, let’s say that I conduct interviews with US politicians, and the first person I meet with is George W. Bush. I will assign the transcription the number “INT#001” and add it to the signed consent form. [3] The signed consent form goes into a locked filing cabinet, and I never use the name “George W. Bush” again. I take the information from the demographic sheet, open my Excel spreadsheet, and add the relevant information in separate columns for the row INT#001: White, male, Republican. When I interview Bill Clinton as my second interview, I include a second row: INT#002: White, male, Democrat. And so on. The only link to the actual name of the respondent and this information is the fact that the consent form (unavailable to anyone but me) has stamped on it the interview number.
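The numbering-and-logging routine just described can be sketched in a few lines of code. This is only an illustration: the chapter's own workflow uses Excel, and the file name and demographic columns below are invented for the example. The point is simply that the de-identified log never contains a name.

```python
import csv

def interview_id(n, prefix="INT"):
    """Build an interview ID like INT#001 (three digits, per the numbering scheme above)."""
    return f"{prefix}#{n:03d}"

def log_interview(path, n, demographics):
    """Append one de-identified row (ID plus demographic answers, never the name) to a CSV log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([interview_id(n)] + list(demographics))

# Example rows mirroring the two interviews described in the text.
# The respondent's name appears only on the paper consent form,
# which is stored separately under lock and key.
log_interview("interview_log.csv", 1, ["White", "male", "Republican"])
log_interview("interview_log.csv", 2, ["White", "male", "Democrat"])
```

Because the spreadsheet never contains names, the only link between a transcript and a person remains the interview number stamped on the locked-away consent form.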

Many students get very nervous before their first interview. Actually, many of us are always nervous before the interview! But do not worry—this is normal, and it does pass. Chances are, you will be pleasantly surprised at how comfortable it begins to feel. These “purposeful conversations” are often a delight for both participants. This is not to say that things never go wrong. I often have my students practice several “bad scenarios” (e.g., a respondent that you cannot get to open up; a respondent who is too talkative and dominates the conversation, steering it away from the topics you are interested in; emotions that completely take over; or shocking disclosures you are ill-prepared to handle), but most of the time, things go quite well. Be prepared for the unexpected, but know that the reason interviews are so popular as a technique of data collection is that they are usually richly rewarding for both participants.

One thing that I stress to my methods students and remind myself about is that interviews are still conversations between people. If there’s something you might feel uncomfortable asking someone about in a “normal” conversation, you will likely also feel a bit of discomfort asking it in an interview. Maybe more importantly, your respondent may feel uncomfortable. Social research—especially about inequality—can be uncomfortable. And it’s easy to slip into an abstract, intellectualized, or removed perspective as an interviewer. This is one reason trying out interview questions is important. Another is that sometimes the question sounds good in your head but doesn’t work as well out loud in practice. I learned this the hard way when a respondent asked me how I would answer the question I had just posed, and I realized that not only did I not really know how I would answer it, but I also wasn’t quite as sure I knew what I was asking as I had thought.

—Elizabeth M. Lee, Associate Professor of Sociology at Saint Joseph’s University, author of Class and Campus Life , and co-author of Geographies of Campus Inequality

How Many Interviews?

Your research design has included a targeted number of interviews and a recruitment plan (see chapter 5). Follow your plan, but remember that “saturation” is your goal. You interview as many people as you can until you reach a point at which you are no longer surprised by what they tell you. This means not that no one after your first twenty interviews will have surprising, interesting stories to tell you but rather that the picture you are forming about the phenomenon of interest to you from a research perspective has come into focus, and none of the interviews are substantially refocusing that picture. That is when you should stop collecting interviews. Note that to know when you have reached this, you will need to read your transcripts as you go. More about this in chapters 18 and 19.

Your Final Product: The Ideal Interview Transcript

A good interview transcript will demonstrate a subtly controlled conversation by the skillful interviewer. In general, you want to see replies that are about one paragraph long, not short sentences and not running on for several pages. Although it is sometimes necessary to follow respondents down tangents, it is also often necessary to pull them back to the questions that form the basis of your research study. This is not really a free conversation, although it may feel like that to the person you are interviewing.

Final Tips from an Interview Master

Annette Lareau is arguably one of the masters of the trade. In Listening to People, she provides several guidelines for good interviews and then offers a detailed example of an interview gone wrong and how it could be addressed (please see the “Further Readings” at the end of this chapter). Here is an abbreviated version of her set of guidelines (Lareau 2021:93–103):

  • Interview respondents who are experts on the subjects of most interest to you (as a corollary, don’t ask people about things they don’t know).
  • Listen carefully and talk as little as possible.
  • Keep in mind what you want to know and why you want to know it.
  • Be a proactive interviewer (subtly guide the conversation).
  • Assure respondents that there aren’t any right or wrong answers.
  • Use the respondent’s own words to probe further (this both allows you to accurately identify what you heard and pushes the respondent to explain further).
  • Reuse effective probes (don’t reinvent the wheel as you go—if repeating the words back works, do it again and again).
  • Focus on learning the subjective meanings that events or experiences have for a respondent.
  • Don’t be afraid to ask a question that draws on your own knowledge (unlike trial lawyers, who are trained never to ask a question for which they don’t already know the answer, sometimes it’s worth it to ask risky questions based on your hypotheses or just plain hunches).
  • Keep thinking while you are listening (so difficult…and important).
  • Return to a theme raised by a respondent if you want further information.
  • Be mindful of power inequalities (and never ever coerce a respondent to continue the interview if they want out).
  • Take control with overly talkative respondents.
  • Expect overly succinct responses, and develop strategies for probing further.
  • Balance digging deep and moving on.
  • Develop a plan to deflect questions (e.g., let them know you are happy to answer any questions at the end of the interview, but you don’t want to take time away from them now).
  • At the end, check to see whether you have asked all your questions. You don’t always have to ask everyone the same set of questions, but if there is a big area you have forgotten to cover, now is the time to recover.

Sample: Demographic Questionnaire

ASA Taskforce on First-Generation and Working-Class Persons in Sociology – Class Effects on Career Success

Supplementary Demographic Questionnaire

Thank you for your participation in this interview project. We would like to collect a few pieces of key demographic information from you to supplement our analyses. Your answers to these questions will be kept confidential and stored by ID number. All of your responses here are entirely voluntary!

What best captures your race/ethnicity? (please check any/all that apply)

  • White (non-Hispanic/Latina/o/x)
  • Black or African American
  • Hispanic, Latino/a/x, or Spanish origin
  • Asian or Asian American
  • American Indian or Alaska Native
  • Middle Eastern or North African
  • Native Hawaiian or Pacific Islander
  • Other: (please write in: ________________)

What is your current position?

  • Grad Student
  • Full Professor

Please check any and all of the following that apply to you:

  • I identify as a working-class academic
  • I was the first in my family to graduate from college
  • I grew up poor

What best reflects your gender?

  • Transgender female/Transgender woman
  • Transgender male/Transgender man
  • Genderqueer/Gender nonconforming

Anything else you would like us to know about you?

Example: Interview Guide

In this example, follow-up prompts are italicized. Note the sequence of questions. That second question often elicits an entire life history, answering several later questions in advance.

Introduction Script/Question

Thank you for participating in our survey of ASA members who identify as first-generation or working-class.  As you may have heard, ASA has sponsored a taskforce on first-generation and working-class persons in sociology and we are interested in hearing from those who so identify.  Your participation in this interview will help advance our knowledge in this area.

  • The first thing we would like to ask you is why you have volunteered to be part of this study. What does it mean to you to be first-gen or working class?  Why were you willing to be interviewed?
  • How did you decide to become a sociologist?
  • Can you tell me a little bit about where you grew up? ( prompts: What did your parent(s) do for a living?  What kind of high school did you attend?)
  • Has this identity been salient to your experience? (How? How much?)
  • How welcoming was your grad program? Your first academic employer?
  • Why did you decide to pursue sociology at the graduate level?
  • Did you experience culture shock in college? In graduate school?
  • Has your FGWC status shaped how you’ve thought about where you went to school? debt? etc?
  • Were you mentored? How did this work (not work)?  How might it?
  • What did you consider when deciding where to go to grad school? Where to apply for your first position?
  • What, to you, is a mark of career success? Have you achieved that success?  What has helped or hindered your pursuit of success?
  • Do you think sociology, as a field, cares about prestige?
  • Let’s talk a little bit about intersectionality. How does being first-gen/working class work alongside other identities that are important to you?
  • What do your friends and family think about your career? Have you had any difficulty relating to family members or past friends since becoming highly educated?
  • Do you have any debt from college/grad school? Are you concerned about this?  Could you explain more about how you paid for college/grad school?  (here, include assistance from family, fellowships, scholarships, etc.)
  • (You’ve mentioned issues or obstacles you had because of your background.) What could have helped?  Or, who or what did? Can you think of fortuitous moments in your career?
  • Do you have any regrets about the path you took?
  • Is there anything else you would like to add? Anything that the Taskforce should take note of, that we did not ask you about here?

Further Readings

Britten, Nicky. 1995. “Qualitative Interviews in Medical Research.” BMJ: British Medical Journal 311(6999):251–253. A good basic overview of interviewing, particularly useful for students of public health and medical research generally.

Corbin, Juliet, and Janice M. Morse. 2003. “The Unstructured Interactive Interview: Issues of Reciprocity and Risks When Dealing with Sensitive Topics.” Qualitative Inquiry 9(3):335–354. Weighs the potential benefits and harms of conducting interviews on topics that may cause emotional distress. Argues that the researcher’s skills and code of ethics should ensure that the interviewing process provides more of a benefit to both participant and researcher than a harm to the former.

Gerson, Kathleen, and Sarah Damaske. 2020. The Science and Art of Interviewing. New York: Oxford University Press. A useful guidebook/textbook for both undergraduates and graduate students, written by sociologists.

Kvale, Steinar. 2007. Doing Interviews. London: SAGE. An easy-to-follow guide to conducting and analyzing interviews by a psychologist.

Lamont, Michèle, and Ann Swidler. 2014. “Methodological Pluralism and the Possibilities and Limits of Interviewing.” Qualitative Sociology 37(2):153–171. Written as a response to various debates surrounding the relative value of interview-based studies and ethnographic studies defending the particular strengths of interviewing. This is a must-read article for anyone seriously engaging in qualitative research!

Pugh, Allison J. 2013. “What Good Are Interviews for Thinking about Culture? Demystifying Interpretive Analysis.” American Journal of Cultural Sociology 1(1):42–68. Another defense of interviewing written against those who champion ethnographic methods as superior, particularly in the area of studying culture. A classic.

Rapley, Timothy John. 2001. “The ‘Artfulness’ of Open-Ended Interviewing: Some Considerations in Analyzing Interviews.” Qualitative Research 1(3):303–323. Argues for the importance of “local context” of data production (the relationship built between interviewer and interviewee, for example) in properly analyzing interview data.

Weiss, Robert S. 1995. Learning from Strangers: The Art and Method of Qualitative Interview Studies. New York: Simon and Schuster. A classic and well-regarded textbook on interviewing. Because Weiss has extensive experience conducting surveys, he contrasts the qualitative interview with the survey questionnaire well; particularly useful for those trained in the latter.

  • I say “normally” because how people understand their various identities can itself be an expansive topic of inquiry. Here, I am merely talking about collecting otherwise unexamined demographic data, similar to how we ask people to check boxes on surveys. ↵
  • Again, this applies to “semistructured in-depth interviewing.” When conducting standardized questionnaires, you will want to ask each question exactly as written, without deviations! ↵
  • I always include “INT” in the number because I sometimes have other kinds of data with their own numbering: FG#001 would mean the first focus group, for example. I also always include three-digit spaces, as this allows for up to 999 interviews (or, more realistically, allows for me to interview up to one hundred persons without having to reset my numbering system). ↵

A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews - see also semistructured interview , structured interview , and unstructured interview .

A document listing key questions and question areas for use during an interview.  It is used most often for semi-structured interviews.  A good interview guide may have no more than ten primary questions for two hours of interviewing, but these ten questions will be supplemented by probes and relevant follow-ups throughout the interview.  Most IRBs require the inclusion of the interview guide in applications for review.  See also interview and  semi-structured interview .

A data-collection method that relies on casual, conversational, and informal interviewing.  Despite its apparent conversational nature, the researcher usually has a set of particular questions or question areas in mind but allows the interview to unfold spontaneously.  This is a common data-collection technique among ethnographers.  Compare to the semi-structured or in-depth interview .

A form of interview that follows a standard guide of questions asked, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview.  The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences.  It is sometimes referred to as an “in-depth” interview.  See also interview and  interview guide .

The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context.  Observational methods are the key tools employed by ethnographers and Grounded Theory researchers.

Follow-up questions used in a semi-structured interview  to elicit further elaboration.  Suggested prompts can be included in the interview guide  to be used/deployed depending on how the initial question was answered or if the topic of the prompt does not emerge spontaneously.

A form of interview that follows a strict set of questions, asked in a particular order, for all interview subjects.  The questions also tend to elicit short answers, and the data are more “informative” than probing.  This is often used in mixed-methods studies, accompanying a survey instrument.  Because there is no room for nuance or the exploration of meaning in structured interviews, qualitative researchers tend to employ semi-structured interviews instead.  See also interview.

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.

An interview variant in which a person’s life story is elicited in a narrative form.  Turning points and key themes are established by the researcher and used as data points for further analysis.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Dissertations 4: Methodology: Methods


Primary & Secondary Sources, Primary & Secondary Data

When describing your research methods, you can start by stating what kind of secondary and, if applicable, primary sources you used in your research. Explain why you chose such sources, how well they served your research, and identify possible issues encountered using these sources.  

Definitions  

There is some confusion on the use of the terms primary and secondary sources, and primary and secondary data. The confusion is also due to disciplinary differences (Lombard 2010). Whilst you are advised to consult the research methods literature in your field, we can generalise as follows:  

Secondary sources 

Secondary sources normally include the literature (books and articles) with the experts' findings, analysis and discussions on a certain topic (Cottrell, 2014, p123). Secondary sources often interpret primary sources.  

Primary sources 

Primary sources are "first-hand" information such as raw data, statistics, interviews, surveys, law statutes and law cases. Even literary texts, pictures and films can be primary sources if they are the object of research (rather than, for example, documentaries reporting on something else, in which case they would be secondary sources). The distinction between primary and secondary sources sometimes lies in the use you make of them (Cottrell, 2014, p123). 

Primary data 

Primary data are data (primary sources) you directly obtained through your empirical work (Saunders, Lewis and Thornhill 2015, p316). 

Secondary data 

Secondary data are data (primary sources) that were originally collected by someone else (Saunders, Lewis and Thornhill 2015, p316).   


Use  

Virtually all research will use secondary sources, at least as background information. 

Often, especially at the postgraduate level, research will also use primary sources, in the form of secondary and/or primary data. Engagement with primary sources is generally appreciated, as it is less reliant on others' interpretations and closer to 'facts'. 

The use of primary data, as opposed to secondary data, demonstrates the researcher's effort to do empirical work and find evidence to answer her specific research question and fulfil her specific research objectives. Thus, primary data contribute to the originality of the research.    

Ultimately, you should state in this section of the methodology: 

What sources and data you are using and why (how they are going to help you answer the research question and/or test the hypothesis). 

If using primary data, why you employed certain strategies to collect them. 

What the advantages and disadvantages of your strategies to collect the data were (also refer to research in your field and the research methods literature). 

Quantitative, Qualitative & Mixed Methods

The methodology chapter should reference your use of quantitative research, qualitative research and/or mixed methods. The following is a description of each along with their advantages and disadvantages. 

Quantitative research 

Quantitative research uses numerical data (quantities) deriving, for example, from experiments, closed questions in surveys, questionnaires, structured interviews or published data sets (Cottrell, 2014, p93). It normally processes and analyses this data using quantitative analysis techniques like tables, graphs and statistics to explore, present and examine relationships and trends within the data (Saunders, Lewis and Thornhill, 2015, p496). 

Qualitative research  

Qualitative research is generally undertaken to study human behaviour and psyche. It uses methods like in-depth case studies, open-ended survey questions, unstructured interviews, focus groups, or unstructured observations (Cottrell, 2014, p93). The data are subjective in nature, and the researcher's analysis also involves a degree of subjective interpretation. Subjectivity can be controlled for in the research design, or it has to be acknowledged as a feature of the research. Subject-specific books on (qualitative) research methods offer guidance on such research designs.  

Mixed methods 

Mixed-method approaches combine qualitative and quantitative methods, and therefore draw on the strengths of both types of research. Mixed methods have gained popularity in recent years.  

When undertaking mixed-methods research you can collect the qualitative and quantitative data either concurrently or sequentially. If sequentially, you can, for example, start with a few semi-structured interviews, providing qualitative insights, and then design a questionnaire to obtain quantitative evidence that your qualitative findings can also apply to a wider population (Specht, 2019, p138). 

Ultimately, your methodology chapter should state: 

Whether you used quantitative research, qualitative research or mixed methods. 

Why you chose such methods (and refer to research method sources). 

Why you rejected other methods. 

How well the method served your research. 

The problems or limitations you encountered. 

Doug Specht, Senior Lecturer at the Westminster School of Media and Communication, explains mixed methods research in the following video:

LinkedIn Learning Video on Academic Research Foundations: Quantitative

The video covers the characteristics of quantitative research and explains how to approach different parts of the research process, such as creating a solid research question and developing a literature review. The instructor goes over the elements of a study, explains how to collect and analyse data, and shows how to present your data in written and numeric form.



Some Types of Methods

There are several methods you can use to get primary data. To reiterate, the choice of the methods should depend on your research question/hypothesis. 

Whatever methods you will use, you will need to consider: 

why you chose one technique over another, and the advantages and disadvantages of the technique you chose 

the size and composition of your sample, how you selected your sample population, and why you chose that particular sampling strategy 

ethical considerations (see also tab...)  

safety considerations  

validity  

feasibility  

recording  

procedure of the research (see box procedural method...).  

Check Stella Cottrell's book Dissertations and Project Reports: A Step by Step Guide for some succinct yet comprehensive information on most methods (the following account draws mostly on her work). Check a research methods book in your discipline for more specific guidance.  

Experiments 

Experiments are useful to investigate cause and effect when the variables can be tightly controlled. They can test a theory or hypothesis in controlled conditions. Experiments do not prove or disprove a hypothesis; rather, they support or fail to support it. When using the empirical and inductive method it is not possible to achieve conclusive results: the results remain valid only until falsified by other experiments and observations. 


Observations 

Observational methods are useful for in-depth analyses of behaviours in people, animals, organisations, events or phenomena. They can test a theory or products in real life or simulated settings. They are generally a qualitative research method.  

Questionnaires and surveys 

Questionnaires and surveys are useful to gain opinions, attitudes, preferences, understandings on certain matters. They can provide quantitative data that can be collated systematically; qualitative data, if they include opportunities for open-ended responses; or both qualitative and quantitative elements. 

Interviews  

Interviews are useful to gain rich, qualitative information about individuals' experiences, attitudes or perspectives. With interviews you can follow up immediately on responses for clarification or further details. There are three main types of interviews: structured (following a strict pattern of questions, which expect short answers), semi-structured (following a list of questions, with the opportunity to follow up the answers with improvised questions), and unstructured (following a short list of broad questions, where the respondent can take more of a lead in the conversation) (Specht, 2019, p142). 

This short video on qualitative interviews discusses best practices and covers qualitative interview design, preparation and data collection methods. 

Focus groups   

In this case, a group of people (normally 4-12) is gathered for an interview in which the interviewer asks questions to the group of participants. Group interactions and discussions can be highly productive, but the researcher has to beware of the group effect, whereby certain participants and views dominate the interview (Saunders, Lewis and Thornhill 2015, p419). The researcher can try to minimise this by encouraging the involvement of all participants and promoting a multiplicity of views. 

This video focuses on strategies for conducting research using focus groups.  

Check out the guidance on online focus groups by Aliaksandr Herasimenka, which is attached at the bottom of this text box. 

Case study 

Case studies are often a convenient way to narrow the focus of your research by studying how a theory or literature fares with regard to a specific person, group, organisation, event or other type of entity or phenomenon you identify. Case studies can be researched using other methods, including those described in this section. Case studies give in-depth insights into the particular reality that has been examined, but they may not be representative of what happens in general, may not be generalisable, and may not be relevant to other contexts. These limitations have to be acknowledged by the researcher.     

Content analysis 

Content analysis consists of the study of words or images within a text. In its broad definition, texts include books, articles, essays, historical documents, speeches, conversations, advertising, interviews, social media posts, films, theatre, paintings or other visuals. Content analysis can be quantitative (e.g. word frequency) or qualitative (e.g. analysing the intention and implications of the communication). It can detect propaganda, identify the intentions of writers, and reveal differences in types of communication (Specht, 2019, p146). Check this page on collecting, cleaning and visualising Twitter data.
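
To illustrate the quantitative side (word frequency), the short sketch below counts word occurrences in a text. It is only an illustration: the sample sentence, function name, and stop-word list are assumptions for the example, and real studies would use a proper corpus and tokeniser.

```python
# Minimal word-frequency count: the simplest form of quantitative content analysis.
from collections import Counter
import re

def word_frequencies(text, stopwords=frozenset()):
    """Lower-case the text, extract words (letters and apostrophes),
    and count the words that are not in the stop-word list."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords)

sample = "Data access committees review data access requests."
freq = word_frequencies(sample, stopwords={"the", "a"})
print(freq.most_common(2))  # [('data', 2), ('access', 2)]
```

Counts like these can then be compared across documents or time periods, while the qualitative strand of the analysis interprets what the patterns mean.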

Extra links and resources:  

Research Methods  

A clear and comprehensive overview of research methods by Emerald Publishing. It includes: crowdsourcing as a research tool; mixed methods research; case study; discourse analysis; grounded theory; repertory grid; ethnographic method and participant observation; interviews; focus group; action research; analysis of qualitative data; survey design; questionnaires; statistics; experiments; empirical research; literature review; secondary data and archival materials; data collection. 

Doing your dissertation during the COVID-19 pandemic  

Resources providing guidance on doing dissertation research during the pandemic: online research methods; secondary data sources; webinars, conferences and podcasts. 

  • Virtual Focus Groups Guidance on managing virtual focus groups

5 Minute Methods Videos

The following are a series of useful videos that introduce research methods in five minutes. These resources have been produced by lecturers and students with the University of Westminster's School of Media and Communication. 


Case Study Research

Research Ethics

Quantitative Content Analysis 

Sequential Analysis 

Qualitative Content Analysis 

Thematic Analysis 

Social Media Research 

Mixed Method Research 

Procedural Method

In this part, provide an accurate, detailed account of the methods and procedures that were used in the study or the experiment (if applicable!). 

Include specifics about participants, sample, materials, design and methods. 

If the research involves human subjects, then include a detailed description of who and how many participated along with how the participants were selected.  

Describe all materials used for the study, including equipment, written materials and testing instruments. 

Identify the study's design and any variables or controls employed. 

Write out the steps in the order that they were completed. 

Indicate what participants were asked to do, how measurements were taken and any calculations made to raw data collected. 

Specify statistical techniques applied to the data to reach your conclusions. 

Provide evidence that you incorporated rigour into your research. Rigour is the quality of being thorough and accurate, and it concerns the logic behind your research design. 

Highlight any drawbacks that may have limited your ability to conduct your research thoroughly. 

You have to provide details to allow others to replicate the experiment and/or verify the data, to test the validity of the research. 

Bibliography

Cottrell, S. (2014). Dissertations and project reports: a step by step guide. Hampshire, England: Palgrave Macmillan.

Lombard, E. (2010). Primary and secondary sources. The Journal of Academic Librarianship, 36(3), 250-253.

Saunders, M.N.K., Lewis, P. and Thornhill, A. (2015).  Research Methods for Business Students.  New York: Pearson Education. 

Specht, D. (2019).  The Media And Communications Study Skills Student Guide . London: University of Westminster Press.  

  • Last Updated: Sep 14, 2022 12:58 PM
  • URL: https://libguides.westminster.ac.uk/methodology-for-dissertations


Grad Coach

How To Write The Results/Findings Chapter

For qualitative studies (dissertations & theses).

By: Jenna Crossley (PhD). Expert Reviewed By: Dr. Eunice Rautenbach | August 2021

So, you’ve collected and analysed your qualitative data, and it’s time to write up your results chapter. But where do you start? In this post, we’ll guide you through the qualitative results chapter (also called the findings chapter), step by step. 

Overview: Qualitative Results Chapter

  • What (exactly) the qualitative results chapter is
  • What to include in your results chapter
  • How to write up your results chapter
  • A few tips and tricks to help you along the way
  • Free results chapter template

What exactly is the results chapter?

The results chapter in a dissertation or thesis (or any formal academic research piece) is where you objectively and neutrally present the findings of your qualitative analysis (or analyses if you used multiple qualitative analysis methods ). This chapter can sometimes be combined with the discussion chapter (where you interpret the data and discuss its meaning), depending on your university’s preference.  We’ll treat the two chapters as separate, as that’s the most common approach.

In contrast to a quantitative results chapter that presents numbers and statistics, a qualitative results chapter presents data primarily in the form of words. But this doesn’t mean that a qualitative study can’t have quantitative elements – you could, for example, present the number of times a theme or topic pops up in your data, depending on the analysis method(s) you adopt.

Adding a quantitative element to your study can add some rigour, which strengthens your results by providing more evidence for your claims. This is particularly common when using qualitative content analysis. Keep in mind though that qualitative research aims to achieve depth and richness and to identify nuances, so don’t get tunnel vision by focusing on the numbers. They’re just the cream on top in a qualitative analysis.
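
If your transcripts have already been coded, a tally of how many participants mention each theme takes only a few lines. The sketch below is illustrative; the participant labels and theme names are invented for the example, not drawn from any real dataset.

```python
# Tally how many participants mention each theme across coded transcripts.
from collections import Counter

# Hypothetical coding output: participant -> set of themes found in their transcript.
coded_transcripts = {
    "P1": {"access barriers", "trust"},
    "P2": {"trust", "workload"},
    "P3": {"access barriers", "trust"},
}

theme_counts = Counter()
for themes in coded_transcripts.values():
    theme_counts.update(themes)   # each participant counts once per theme

for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}/{len(coded_transcripts)} participants")
```

A table like this can support a claim such as "trust was raised by all three participants", while the surrounding prose and quotes carry the qualitative depth.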

So, to recap, the results chapter is where you objectively present the findings of your analysis, without interpreting them (you’ll save that for the discussion chapter). With that out of the way, let’s take a look at what you should include in your results chapter.


What should you include in the results chapter?

As we’ve mentioned, your qualitative results chapter should purely present and describe your results, not interpret them in relation to the existing literature or your research questions. Any speculations or discussion about the implications of your findings should be reserved for your discussion chapter.

In your results chapter, you’ll want to talk about your analysis findings and whether or not they support your hypotheses (if you have any). Naturally, the exact contents of your results chapter will depend on which qualitative analysis method (or methods) you use. For example, if you were to use thematic analysis, you’d detail the themes identified in your analysis, using extracts from the transcripts or text to support your claims.

While you do need to present your analysis findings in some detail, you should avoid dumping large amounts of raw data in this chapter. Instead, focus on presenting the key findings and using a handful of select quotes or text extracts to support each finding. The reams of data and analysis can be relegated to your appendices.

While it’s tempting to include every last detail you found in your qualitative analysis, it is important to make sure that you report only that which is relevant to your research aims, objectives and research questions. Always keep these three components, as well as your hypotheses (if you have any), front of mind when writing the chapter and use them as a filter to decide what’s relevant and what’s not.


How do I write the results chapter?

Now that we’ve covered the basics, it’s time to look at how to structure your chapter. Broadly speaking, the results chapter needs to contain three core components – the introduction, the body and the concluding summary. Let’s take a look at each of these.

Section 1: Introduction

The first step is to craft a brief introduction to the chapter. This intro is vital as it provides some context for your findings. In your introduction, you should begin by reiterating your problem statement and research questions and highlight the purpose of your research . Make sure that you spell this out for the reader so that the rest of your chapter is well contextualised.

The next step is to briefly outline the structure of your results chapter. In other words, explain what’s included in the chapter and what the reader can expect. In the results chapter, you want to tell a story that is coherent, flows logically, and is easy to follow, so make sure that you plan your structure out well and convey that structure (at a high level), so that your reader is well oriented.

The introduction section shouldn’t be lengthy. Two or three short paragraphs should be more than adequate. It is merely an introduction and overview, not a summary of the chapter.

Pro Tip – To help you structure your chapter, it can be useful to set up an initial draft with (sub)section headings so that you’re able to easily (re)arrange parts of your chapter. This will also help your reader to follow your results and give your chapter some coherence. Be sure to use level-based heading styles (e.g. Heading 1, 2, 3 styles) to help the reader differentiate between levels visually. You can find these options in Word.


Section 2: Body

Before we get started on what to include in the body of your chapter, it’s vital to remember that a results section should be completely objective and descriptive, not interpretive. So, be careful not to use words such as “suggests” or “implies”, as these usually accompany some form of interpretation – that’s reserved for your discussion chapter.

The structure of your body section is very important , so make sure that you plan it out well. When planning out your qualitative results chapter, create sections and subsections so that you can maintain the flow of the story you’re trying to tell. Be sure to systematically and consistently describe each portion of results. Try to adopt a standardised structure for each portion so that you achieve a high level of consistency throughout the chapter.

For qualitative studies, results chapters tend to be structured according to themes, which makes it easier for readers to follow. However, keep in mind that not all results chapters have to be structured in this manner. For example, if you’re conducting a longitudinal study, you may want to structure your chapter chronologically. Similarly, you might structure this chapter based on your theoretical framework. The exact structure of your chapter will depend on the nature of your study, especially your research questions.

As you work through the body of your chapter, make sure that you use quotes to substantiate every one of your claims. You can present these quotes in italics to differentiate them from your own words. A general rule of thumb is to use at least two pieces of evidence per claim, and these should be linked directly to your data. Also, remember that you need to include all relevant results, not just the ones that support your assumptions or initial leanings.

In addition to including quotes, you can also link your claims to the data by using appendices, which you should reference throughout your text. When you reference, make sure that you include both the name/number of the appendix, as well as the line(s) from which you drew your data.

As referencing styles can vary greatly, be sure to look up the appendix referencing conventions of your university’s prescribed style (e.g. APA , Harvard, etc) and keep this consistent throughout your chapter.

Section 3: Concluding summary

The concluding summary is very important because it summarises your key findings and lays the foundation for the discussion chapter. Keep in mind that some readers may skip directly to this section (from the introduction section), so make sure that it can be read and understood well in isolation.

In this section, you need to remind the reader of the key findings. That is, the results that directly relate to your research questions and that you will build upon in your discussion chapter. Remember, your reader has digested a lot of information in this chapter, so you need to use this section to remind them of the most important takeaways.

Importantly, the concluding summary should not present any new information and should only describe what you’ve already presented in your chapter. Keep it concise – you’re not summarising the whole chapter, just the essentials.

Tips for writing an A-grade results chapter

Now that you’ve got a clear picture of what the qualitative results chapter is all about, here are some quick tips and reminders to help you craft a high-quality chapter:

  • Your results chapter should be written in the past tense. You’ve done the work already, so you want to tell the reader what you found, not what you are currently finding.
  • Make sure that you review your work multiple times and check that every claim is adequately backed up by evidence. Aim for at least two examples per claim, and make use of an appendix to reference these.
  • When writing up your results, make sure that you stick to only what is relevant. Don’t waste time on data that are not relevant to your research objectives and research questions.
  • Use headings and subheadings to create an intuitive, easy to follow piece of writing. Make use of Microsoft Word’s “heading styles” and be sure to use them consistently.
  • When referring to numerical data, tables and figures can provide a useful visual aid. When using these, make sure that they can be read and understood independent of your body text (i.e. that they can stand alone). To this end, use clear, concise labels for each of your tables or figures and make use of colours to indicate differences or hierarchy.
  • Similarly, when you’re writing up your chapter, it can be useful to highlight topics and themes in different colours. This can help you to differentiate between your data if you get a bit overwhelmed and will also help you to ensure that your results flow logically and coherently.

If you have any questions, leave a comment below and we’ll do our best to help. If you’d like 1-on-1 help with your results chapter (or any chapter of your dissertation or thesis), check out our private dissertation coaching service here or book a free initial consultation to discuss how we can help you.



20 Comments

David Person

This was extremely helpful. Thanks a lot guys

Aditi

Hi, thanks for the great research support platform created by the gradcoach team!

I wanted to ask- While “suggests” or “implies” are interpretive terms, what terms could we use for the results chapter? Could you share some examples of descriptive terms?

TcherEva

I think that instead of saying, ‘The data suggested, or The data implied,’ you can say, ‘The Data showed or revealed, or illustrated or outlined’…If interview data, you may say Jane Doe illuminated or elaborated, or Jane Doe described… or Jane Doe expressed or stated.

Llala Phoshoko

I found this article very useful. Thank you very much for the outstanding work you are doing.

Oliwia

What if i have 3 different interviewees answering the same interview questions? Should i then present the results in form of the table with the division on the 3 perspectives or rather give a results in form of the text and highlight who said what?

Rea

I think this tabular representation of results is a great idea. I am doing it too along with the text. Thanks

Nomonde Mteto

That was helpful, I was struggling to separate the discussion from the findings

Esther Peter.

this was very useful, Thank you.

tendayi

Very helpful, I am confident to write my results chapter now.

Sha

It is so helpful! It is a good job. Thank you very much!

Nabil

Very useful, well explained. Many thanks.

Agnes Ngatuni

Hello, I appreciate the way you provided a supportive comments about qualitative results presenting tips

Carol Ch

I loved this! It explains everything needed, and it has helped me better organize my thoughts. What words should I not use while writing my results section, other than subjective ones.

Hend

Thanks a lot, it is really helpful

Anna milanga

Thank you so much dear, i really appropriate your nice explanations about this.

Wid

Thank you so much for this! I was wondering if anyone could help with how to properly integrate quotations (excerpts) from interviews in the findings chapter in qualitative research. Please GradCoach, address this issue and provide examples.

nk

what if I’m not doing any interviews myself and all the information is coming from case studies that have already done the research.

FAITH NHARARA

Very helpful thank you.

Philip

This was very helpful as I was wondering how to structure this part of my dissertation, to include the quotes… Thanks for this explanation

Aleks

This is very helpful, thanks! I am required to write up my results chapters with the discussion in each of them – any tips and tricks for this strategy?





Dissertation examples

Listed below are some of the best examples of research projects and dissertations from undergraduate and taught postgraduate students at the University of Leeds. We have not been able to gather examples from all schools. The module requirements for research projects may have changed since these examples were written. Refer to your module guidelines to make sure that you address all of the current assessment criteria. Some of the examples below are only available to access on campus.

  • Undergraduate examples
  • Taught Masters examples
  • Open access
  • Published: 05 May 2024

A qualitative interview study to determine barriers and facilitators of implementing automated decision support tools for genomic data access

  • Vasiliki Rahimzadeh 1 ,
  • Jinyoung Baek 2 ,
  • Jonathan Lawson 2 &
  • Edward S. Dove 3  

BMC Medical Ethics, volume 25, Article number 51 (2024)


Data access committees (DACs) gatekeep access to secured genomic and related health datasets yet are challenged to keep pace with the rising volume and complexity of data generation. Automated decision support (ADS) systems have been shown to support consistency, compliance, and coordination of data access review decisions. However, we lack understanding of how DAC members perceive the value add of ADS, if any, on the quality and effectiveness of their reviews. In this qualitative study, we report findings from 13 semi-structured interviews with DAC members from around the world to identify relevant barriers and facilitators to implementing ADS for genomic data access management. Participants generally supported pilot studies that test ADS performance, for example in cataloging data types, verifying user credentials and tagging datasets for use terms. Concerns related to over-automation, lack of human oversight, low prioritization, and misalignment with institutional missions tempered enthusiasm for ADS among the DAC members we engaged. Tensions for change in the institutional settings within which DACs operated were a powerful motivator for why DAC members considered implementing ADS in their access workflows, as were perceptions of the relative advantage of ADS over the status quo. Future research is needed to build the evidence base around the comparative effectiveness and decisional outcomes of institutions that do and do not incorporate ADS into their workflows.


Introduction

Genomics is among the most data-prolific scientific fields and is expected to surpass the storage needs and analytic capacities of Twitter, YouTube, and astronomy combined by as soon as 2025 [ 1 ]. To meet rising demands for genomic data and their efficient collection and use, national genomics initiatives [ 2 ] rely on large-scale repositories to pool data resources and incentivize data sharing [ 3 , 4 , 5 ]. The “data commons” model has since become the flagship approach for many of these initiatives [ 6 ], and prioritizes research collaboration and data access over proprietary exclusion in the data [ 3 ]. Data access committees (DACs) are principally charged with ensuring only bona fide researchers conducting research permitted by participants’ informed consent are approved to access the data [ 7 ]. DACs are typically staffed by research compliance officers, researchers, and sometimes data security professionals. DAC members can be paid or serve as volunteers and, at a basic level, arbitrate access to data provided that requests meet minimum requirements for data protection and compliance. Critiques of compliance-only responsibilities and the growing appreciation of data privacy risks among the general public have raised questions about whether DACs ought to weigh in on issues of social and scientific value of the data projects [ 8 ]. Our prior empirical work [ 9 ] suggests there is debate around this scope of DAC oversight, particularly as it relates to considerations of data ethics that are traditionally the domain of institutional ethics committees.

Cheah and Piasecki, for example, propose that DACs have responsibilities to both promote data sharing and protect the interests of individuals and communities about whom the shared data relate: “data access should be granted as long as the data reuse fulfils the criterion of having even a minimal social value, and minimal risk to data subjects and their communities” [ 7 ]. In this way, DACs anchor responsible data sharing ecosystems since they govern access to and compliant use of genomic and, increasingly, other health data [ 10 , 11 , 12 ].

However, DACs may not contribute to efficient data access provisions as effectively as other review models may allow [ 13 ]. In the standard model of data access review, DACs manually review a data requester’s application and assess it against pre-defined criteria. Criteria may include appropriateness of the data requested, data use terms set by data providers, and data privacy and security requirements set by the institution and by law [ 7 ]. As with most, if not all, human-mediated activities, manual review of these criteria can be a laborious and error-prone process. For example, DACs may interpret language describing permitted data uses differently, and the terms themselves can sometimes be ambiguous [ 14 ]. Faced with this ambiguity, DACs are forced to make subjective judgments about whether requests for data access truly align with permitted data uses, if these permissions have been preserved at all. Inconsistencies in how data use terms are articulated in consent forms and subsequently interpreted and executed by DACs across the biomedical ecosystem [ 14 ] can lead to delayed and inconsistent data access decisions, and risk violating the terms by which patients or participants contributed their data in the first place.

Other steps in the data access pipeline can also contribute to research delays. Emerging research suggests there is growing inefficiency, inconsistency, and error in the manual, entirely human-mediated review of data access agreements [ 13 , 15 ], which are executed in finalizing approved data access requests. Furthermore, many researchers still rely on the traditional method of copying-and-downloading data once approved. The copy-download approach multiplies security risks [ 11 ] and is quickly becoming unreasonable given the expanding size and complexity of genomic datasets [ 16 , 17 ].

Standards developers and software engineers have therefore sought to semi-automate three axes of data access control within cloud environments – user authentication, review of access requests, and concordance of the proposed research with the data use terms of the data requested [ 14 ]. An automated decision support (ADS) system is a coordinated set of algorithms, software, and ontologies [ 18 ] that together aid in categorizing, archiving, and/or acting on decision tasks for data access review. The Data Use Oversight System (DUOS) typifies one such ADS [ 19 ]. In recent beta tests, DUOS concurred with human-decided access requests 100% of the time [ 15 ], and it codifies 93% of genomic datasets in NIH’s dbGaP [ 20 ].
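As a loose illustration of the third axis of control, the concordance check: systems like DUOS compare machine-readable data use terms against a request's declared purpose. The sketch below is hypothetical (the term vocabulary, dataset IDs, and decision labels are invented for illustration, and real systems resolve terms against the GA4GH Data Use Ontology rather than plain strings), but it captures the core triage logic of auto-approving clear matches and escalating everything else to a human reviewer.

```python
# Hypothetical sketch of automated data-use concordance checking.
# Term strings and dataset IDs are illustrative, not real DUO codes.

# Consent-based use limitations attached to each dataset at deposit.
DATASET_USE_TERMS = {
    "dataset-001": {"permitted": {"disease-specific:cancer"}},
    "dataset-002": {"permitted": {"general-research-use"}},
}

def review_request(dataset_id: str, proposed_use: str) -> str:
    """Return an automated triage decision for one access request."""
    terms = DATASET_USE_TERMS.get(dataset_id)
    if terms is None:
        return "flag-for-human-review"  # unknown dataset: never auto-approve
    if "general-research-use" in terms["permitted"]:
        return "auto-approve"
    if proposed_use in terms["permitted"]:
        return "auto-approve"
    # Ambiguous or non-matching uses are escalated, not rejected outright.
    return "flag-for-human-review"

print(review_request("dataset-002", "disease-specific:diabetes"))  # auto-approve
print(review_request("dataset-001", "methods-development"))        # flag-for-human-review
```

Note the asymmetry in the design: the automated path can only approve or escalate, never deny, which mirrors the supplement-not-replace role participants envisioned for ADS.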

While ADS can supplement human DACs with semi-automated technical solutions, no systematic investigation has sought to characterize relevant barriers and facilitators to ADS in practice [ 21 ]. Moreover, we lack understanding of how DAC members perceive the value added by ADS, if any, on the quality and effectiveness of data access review decisions, as well as what challenges they anticipate in adopting ADS considering the myriad organizational structures within which DACs operate.

Now is an opportune time to study the implementation barriers and facilitators to using ADS solutions for data access as their development converges with large-scale data migration to the cloud that can result in near-instant data access decisions. The genomics community can learn important lessons from previous attempts at (premature) ADS implementation without purposeful stakeholder engagement in public health [ 22 ], law enforcement [ 23 ] and in clinical care [ 24 ]. In this article, we report empirical findings on the “constellation of processes” relevant for implementing ADS for genomic data access management and provide practical recommendations for institutional data stewards that are considering or have already implemented ADS in this context.

We conducted a qualitative description study that engaged prospective end users of ADS for genomic data governance to explore: What are the barriers and opportunities of implementing automated workflows to manage access requests to genomic data collections, and what effect do ADS have on DAC review quality and effectiveness? We adopted Damschroder and colleagues’ definition of implementation as the “critical gateway between an organizational decision to adopt an intervention and the routine use of that intervention” [ 25 ] in order to “study the constellation of processes intended to get an intervention into use within an organization” [ 25 ]. We applied the Consolidated Framework for Implementation Research (CFIR) to compare genomic data access processes and procedures to better understand implementation processes for automated workflows to manage genomic data access across international, publicly funded genomic data repositories. The CFIR provides a “menu of constructs” associated with five domains of effective implementation which have been rigorously meta-theorized—that is, synthesized from many implementation theories (Fig.  1 ). In addition, the CFIR provides a practical guide to systematically assess potential barriers and facilitators ahead of an innovation’s implementation (L. Damschroder et al. 2015). The CFIR is also readily adaptable to surfacing bioethical issues during implementation in genomics and has been applied in prior work (Burke and Korngiebel, 2015; Smit et al., 2020).

Figure 1

Adapted Consolidated Framework for Implementation Research (CFIR) and associated domains (Intervention Characteristics, Individuals, Process, Inner Setting, Outer Setting) used to structure 13 qualitative interviews on the relevant factors mediating implementation of automated decision support tools for genomic data access management and sharing among publicly funded genomic data repositories worldwide


Data collection

We conducted a total of 13 semi-structured interviews with 17 DAC members between 27 April and 24 August 2022. Interviewees were recruited from prospective participants who indicated interest in a follow-up interview after completing a previous survey published elsewhere [ 9 ], from the Data Access Committee Review Standards Working Group (DACReS WG) chaired by authors VR, JL, and ESD, and from an internet search of publicly funded genomic data repositories worldwide. All interviews were conducted virtually and audio/video recorded on Zoom. We used validated interview guides from the official CFIR instrument repository ( https://cfirguide.org/evaluation-design/qualitative-data/ ) to probe the barriers and opportunities of implementing ADS solutions for DAC review of data access requests. Interviews lasted between 45 and 60 min and included 29 questions adapted from the CFIR instrument to fit the ADS context (e.g., Inner Setting, Outer Setting, Intervention Characteristics). The specific interview guide used is available in Supplementary Materials 2 .

Data analysis

We first applied a deductive coding frame to the interview transcripts based on a framework analysis approach (Pope, Ziebland, and Mays 2000) and the publicly accessible CFIR codebook available in Supplementary Materials 1 . To ensure the reliability of conclusions drawn, two independent reviewers (VR and JB) tested the coding schema on three transcripts until reaching a recommended interrater reliability score of 0.83 before analyzing the remaining qualitative dataset. All coding discrepancies during the coding pilot were resolved by consensus discussion.
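As an illustration of how such an interrater reliability score can be computed, the following is a minimal sketch of Cohen's kappa, one common choice for two coders (the text does not specify which statistic was used), applied to hypothetical CFIR domain codes assigned to eight transcript excerpts.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' categorical labels on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed proportion of items on which the two coders agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the coders labeled items independently.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical CFIR domain codes from two coders (illustrative only).
coder1 = ["inner", "outer", "inner", "process", "inner", "outer", "process", "inner"]
coder2 = ["inner", "outer", "inner", "process", "outer", "outer", "process", "inner"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.81
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why it is preferred over simple agreement rates when piloting a coding schema.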

Geographical, institutional, and demographic background of participants

Forty-one percent of interviewees worked within U.S.-based DACs, while the remaining 59% represented DACs at institutions in Canada, the U.K., Spain, Tunisia, Australia, and Japan (Table  1 ). Nearly 60% of interviewees worked at a non-profit research institute, 24% represented an academic-affiliated research institution, 12% represented a government research agency, and 6% were affiliated with a research consortium. Of the interview participants, 76% identified as female and 24% as male.

Opportunities for ADS

We categorized the frequency of CFIR implementation factors referenced in our interviews in Table  2 . Our findings suggest that there are three major facilitators to implementing ADS for genomic data governance: (1) external policy and need for efficient workflows, (2) institutional ability to scale the ADS, and (3) interoperability.

External policy and need for efficient workflows

Participants considered adopting ADS to comply with new data sharing mandates from research funders (e.g., the National Institutes of Health) and those imposed by peer-reviewed journals. The demand for and scope of compliant data access review has had a ripple effect on ethics oversight bodies [ 26 ], including DACs, as a result of these new requirements [ 9 ]. Most DAC members we engaged with currently perform their reviews manually. Members review all data access requests individually or as a committee and make decisions on each request in the order received. Given the anticipated increase in the number of data access requests [ 27 ], our participants noted the reduced workload and costs associated with ADS could contribute to better review efficiencies, without a concomitant loss in review quality or added risk of noncompliance with data use conditions.

We found that participants perceived that ADS could reduce DAC member workload by streamlining the intake process for data access requests and verifying that the request matched the terms of use in the original consent obtained at data collection. Indeed, participants noted the initial screening of Data Access Requests (DARs) was a common rate-limiting step in the submission to decision process. DACs often begin the review process by verifying that all necessary information is documented in the request (e.g. study purpose, datasets requested, ethics review). This step can be time-consuming because the requirements can vary depending on the researcher’s institution and the datasets they request. We requested that participants share a copy of their DAR form before, during, or after the interview to compare what information DACs typically required to process a DAR. We found the form fields as well as length of the DAR (from 3 to 18 pages) differed considerably. Our participants believed that this is where ADS could be useful by automatically flagging missing information and documents, verifying the authenticity of a requester’s identity and the submitted documents, and then sending notifications to requesters if more information is needed. As one interviewee put it:

Because one of the biggest concerns in our DAC is that sometimes it takes too much time to be read by all the nine members. … They’re institutional directors or university professors. So I think it will help. Maybe if you have 50% of the work done by an automated system, so you just have to do the 50%. I think … this will be a good motivation for them saying ‘OK’ [to implement ADS].  ‑ Participant M.
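The intake screening participants envisioned, flagging missing fields and documents before any human review begins, can be sketched as follows. The required field names and form structure are hypothetical, since the DAR forms shared with us varied considerably (from 3 to 18 pages).

```python
# Hypothetical sketch of automated DAR intake screening: flag missing
# required fields so requesters can be notified before DAC review begins.
# Field names are invented for illustration; real DAR forms vary widely.

REQUIRED_FIELDS = ["study_purpose", "datasets_requested",
                   "ethics_approval", "requester_identity"]

def screen_dar(dar: dict) -> list:
    """Return the required fields that are missing or empty."""
    return [field for field in REQUIRED_FIELDS if not dar.get(field)]

dar = {
    "study_purpose": "Replication of GWAS findings in type 2 diabetes",
    "datasets_requested": ["dataset-002"],
    "ethics_approval": "",  # missing document: should be flagged
    "requester_identity": "verified-orcid",
}

missing = screen_dar(dar)
if missing:
    print("Request incomplete; notify requester about:", ", ".join(missing))
```

A completeness check like this addresses only the rate-limiting first step; whether the request is substantively appropriate remains a human judgment in the workflow participants described.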

Scalability and cost effectiveness

Participants also believed ADS-enabled workflows could be scalable, cost-effective solutions for managing not just newly generated data but also legacy data after grant funding ends, because ADS can easily store and quickly present data use conditions and audit past DAC reviews. Two interviewees discussed the challenges of finding cost-effective solutions for managing legacy datasets:

Actually there are lots of costs related to data sharing, particularly if I’m sharing data from the 1990s, for example. I don’t have any money or budget anymore to prepare the data [for secondary uses]. … And similarly, when it comes to these reports [on data sharing activities], there’s no extra money for doing the work to create those reports. But we’re having to report back over assets from years, decades in fact. And there was always just a little bit of a hint ‘oh well, maybe we’ll find some money’. No, no, you have to find it out on your own.  ‑ Participant F.

I mean potentially as we grow over the years, you know what’s going to happen. … we’ve also discussed some scenarios, where, for example, we find ourselves with a larger amount of requests coming in, [and] we only accept applications up to certain days and then, we open this next quarter, close it again. But there potentially could be room for automation depending on the increase in request in the coming years.  ‑ Participant A.

Retention and sustainability of human resources

Participants also discussed retention of repository staff and DAC membership as an evolving human resource factor that would motivate ADS adoption. For example, some participants shared that ADS could be helpful when DAC members or data generators leave the institution, disrupting review continuity and consistency. Unlike large, well-funded government repositories, many DACs at smaller institutions lack the human resources to ensure long-term data preservation and access management for data of increasing complexity and volume:

As the program scales, the participant diversity scales, the data diversity scales. I think it is almost impossible to see a scenario where we do not rely on some level of automation to support human decision making about what is responsible use.  ‑ Participant J.

Interoperability

According to the DAC members we interviewed, ADS tools could provide centralized, interoperable solutions to facilitate inter-organizational and international data sharing. Participants perceived that ADS could motivate use of standardized request forms, access agreements, dataset identifiers, and methods for verifying researcher identities. For example, one participant commented:

But this [ADS] will free up a lot of time in the process is it also potentially means that it will become easier for, if you’re working in a team to hand off tasks as well because you will have a single system. … Also, consistency between organizations. If we have multiple organizations take this up, it’s going to mean less lead time. [Let’s] say people take a new job in a new place. We’ll actually have some software that people will recognize and be able to use and uptake, which we’ve been trying to go towards without ethics approval processes within the hospital and health services… [standardized] systems makes it easier for actual communication between organizations on processes, because everyone kind of begins to know what’s happening.  ‑ Participant E.

Barriers to implementing ADS

Despite clear advantages of ADS for genomic data access management, our interviewees identified significant barriers to implementation within DAC workflows, including: (1) lower priority compared to more immediate governance challenges, (2) ill-equipped personnel and structures within the institution, (3) costs, and (4) degree of human oversight.

Prioritization

Many participants reported that institutional leadership prioritized other competing research data needs over investing in new data governance structures (e.g. generating quality data, increasing diversity in datasets, collaborating with underrepresented groups of researchers and participants, and releasing datasets). Participants believed researchers in general understand why quality and effective review of data access is important for responsible genomic data sharing but are primarily concerned with data quality. Another suspected reason that ADS implementation ranked lower among institutional priorities was that there had not yet been a significant data incident. As one participant put it:

I don’t think that the program thinks it is a very high priority to streamline any of the [data access oversight] process. I think that it will either take something bad happening and then realizing that we need additional capacities on [DAC], or some other hiccup to really promote that need.  ‑ Participant O.

Because budgets for data governance are not always included in grants, researchers may be less motivated to invest in the additional, largely unpaid work related to data governance. Insufficient resourcing for data sharing and governance mechanisms prospectively in research study design inevitably challenges the downstream execution of data governance upon deposit of the research data once generated, according to at least one DAC member we interviewed:

We found that some people don’t prioritize [data governance] because it’s not helpful to them, because it’s not our primary function as a department. You know, we’re producing new data. That’s usually what people, researchers are doing. They’re not thinking about what happens to their old data. So, it’s not much of a priority. Having said that, research funders are getting very keen for us to use their data. So, there is that sort of tug [of war]. … If I go into a senior team meeting, you know, something else will be the priority.  ‑ Participant F.

Structural characteristics of an organization

We also found a close relationship between several structural characteristics of an institution (e.g. years in operation, number of personnel, and database size) and participants’ perceived barriers to ADS implementation. For instance, many participants served on DACs that were established within the last 1–3 years, coinciding with the creation of the institution’s database. As datasets grow and more researchers are attracted to the resource, there is greater potential to overwhelm existing management processes. It is precisely at this early juncture that DACs would benefit from weighing their ADS options and proactively addressing relevant barriers ahead of any plans for implementation. Some DAC members preferred to gain more experience with existing data access management in these early years of data release before integrating ADS “because we’re not sure how [name of participant’s country] citizens feel or consider about the automatic decision on data sharing.” Participant K.

Costs

While cost was not a primary concern for ADS implementation at well-funded big data repositories, it was a significant barrier for DAC members working at smaller repositories, individual research departments, or research programs associated with a genomics consortium, which were more often supported by research grants or contracts rather than an independent funding source.

“We [data governance office] are supported through project-specific funding. … Governance ends up being a little bit of this indirectly supported component of our work and services. That has limited the ways in which we can innovate around governance. … We don’t have a huge budget.”  ‑ Participant N.

Without a dedicated budget for human and material resources, some DAC members were concerned that the initial investment in ADS and the significant changes to current workflows would pose key challenges, to say nothing of new education and training materials, updates to internal policies, and other ancillary revisions.

Lack of human oversight

While some DAC members were enthusiastic about improvements in efficiency and consistency of ADS, participants unanimously rejected the idea of fully automating access management: “no matter what we do with automation that I feel there always needs to be that human element who’s coming in and checking. So, there will always be that barrier to upscaling” Participant E. Other participants emphasized that prior to implementation, they would need to gauge how research participants at their own institution as well as the general public would react to ADS for data access review.

Participants were also skeptical that ADS could adequately assess complex, sensitive data reuse issues which they felt required a deep understanding of ethical, legal, and sociocultural contexts within which data were collected, used, and shared. Some DAC members reported asking data requesters to clarify their study purpose and justify their need for specific datasets in recognition of these sociocultural dimensions.

I’m also someone who thinks that it’s important to be very critical about what’s the nature of the work being done. Maybe it’s solid from a scientific point of view. But are there other concerns from other perspectives that need to be taken into account? That is partly why we have community members on the [committee], and that’s something I’m not sure can be simplified or automated.

However, when it comes to automating anything that requires reviewing information where there might be a lot of nuances, where there might be a lot of interpretation that’s required, I’m a little bit more hesitant simply because I think to some extent you do need some room for a little bit of mulling over the information, … and I think there are some information that come through with requests, that don’t neatly fit into check boxes.  ‑ Participant B.

Overall, participants perceived that ADS tools could be well positioned to help DACs streamline data access compliance. While believed to be beneficial, ADS solutions were unlikely to immediately or directly advance the research organization’s core mission (e.g. collecting quality data and driving scientific discoveries and innovations). One of the most challenging barriers to implementation is the relatively low priority of, and lack of institutional investment in, data infrastructures that could adapt as the dynamics of genomic data generation and storage change over time. Participants tended to regard ADS implementation, as well as data governance workflow solutions, as a lower priority compared to regulatory compliance, investigator support, and database curation, among other competing demands on DAC member time.

Most research grants allow investigators to apply for support for data collection and analysis, but rarely for establishing the actual governance structures needed to stand up access management services. We found that executive buy-in was a major driver of ADS support at some repositories, and a lack of administrative or leadership buy-in a major detractor at others, namely repositories at smaller research institutions or laboratories. Therefore, part of the challenge of making ADS adoption a higher institutional priority is convincing institutional leadership of its added value and of the net benefit of investing in data governance solutions and infrastructures generally.

Delaying infrastructure upgrades has consequences for the future utility of the repository in the longer term. Some of our study participants, for example, believed researchers were drawn to their databases not because of their data access policies and practices, but because of the quality and diversity of their datasets. However, this quality-driven perspective contrasts with findings from a study of genetic researchers suggesting that ease of access is at least marginally important when choosing a database for their research [ 28 ]. We reason that repositories which invest in efficient, scalable, and compliant access decision processes are likely to attract more users to their resources than repositories which do not evolve such processes to meet the pace of data generation and higher data demand. It is also worth noting that funders have a direct role to play in accelerating the pace of data science as researchers are expected to do more with fewer resources and in less time.

Developing more streamlined workflows emerged as a primary benefit that many participants anticipated from adopting ADS. Participants were most enthusiastic about applying ADS for time consuming and tedious tasks, such as preliminary review and quality control checks for data access request forms that are needed to initiate the data access decision process. Applying ADS to facilitate these workflows could free DAC members to dedicate more time to deliberate on more substantive ethics issues raised by data access requests.

While data governance has often been considered auxiliary work, new research findings and new U.S. federal government policies, such as the National Institutes of Health Data Management and Sharing (DMS) Policy, have elevated its importance by placing additional requirements for data sharing [ 29 ]. The new DMS Policy was but one example of distinct legislative reforms that have influenced cultures of data sharing shaping DAC work, as well as the institutional practices and governance tools developed to complement this culture. To be sure, such legislative and institutional context influenced participant responses and particular implementation preparedness factors for ADS such as “structural characteristics of the organization.”

The DMS Policy will accelerate the accumulation of an enormous number of datasets. In the absence of interventions, including but not limited to ADS, the DMS Policy will significantly raise costs associated with data storage and management. We concluded from our participants that databases/repositories are frequently developed specifically to share research data generated from federal funds without considering existing databases and other resources within which to deposit those data. “Blind” database creation is often done with good intentions; however, it can inadvertently introduce myriad access pathways that make the data effectively “shared” but undiscoverable, and it is another issue where ADS tools could intervene. One participant’s narrative about their need to transfer legacy data from a repository facing permanent closure puts the problem of unsustainable databases in sharp relief. The participant’s example suggested that there is need for more efficient and sustainable solutions for data access management and sharing that can endure even when repositories themselves do not. Moreover, there is reasonable cause to have a contingency plan for publicly funded data shared via non-publicly supported repositories in the event the repository closes or its policy or personnel change. Standardized ADS solutions could easily interoperate between the two types of repositories and facilitate legacy data transfer, if and when required.

Limitations

Our results should be considered in light of several methodological limitations. While geographically diverse, many of our interview participants were affiliated with DACs based at large, well-resourced research institutions. It is likely that responses and perceptions of implementation factors related to ADS would differ substantially if more DACs from low- or under-resourced institutions were represented in our sample. Our data collection design relied on self-reports of institutional data access policies and procedures. Many interview participants were aware of the Global Alliance for Genomics and Health, and the data access committee review standards we were principally involved in developing [ 30 ]. Thus, while we endeavored to create a safe, open environment for participants to share their honest views, social desirability bias related to our prior work may have influenced how participants responded. Lastly, CFIR predefines sociological constructs relevant to implementation. Our analysis was therefore limited only to those constructs covered in the framework, whereas others might have emerged inductively if we had adopted an alternative analytic frame.

In this article, we reported findings from semi-structured qualitative interviews with DAC members from around the world on the relevant barriers and facilitators of implementing ADS for genomic data access management. Our findings suggest there is general support for pilot studies that test ADS performance for certain tasks in data access management workflows, such as cataloging data types, verifying user credentials, and tagging datasets for use terms. Participants indicated that ADS should supplement, but not replace, DAC member work. This sentiment was especially strong with respect to tasks that were perceived to require sensitivity and human value-judgments such as privacy protections, group harms, and study purpose. Nonetheless, our findings offer cautious optimism regarding the ways in which algorithms, software, and other machine-readable ontologies could streamline aspects of DAC decision-making while also enabling new opportunities for improving consistency and fairness in DAC decisions.

To that end, we conclude with practical recommendations for institutional data stewards that are considering or have already implemented ADS for data access management. First, repositories and institutions that support databases and other resources should prioritize infrastructural upgrades and factor them into associated budgets. Proper investment in, and human/material resource support for, these upgrades helps preserve the repository’s utility even as the complexity and volume of genomic and associated health datasets grow. Second, DACs should prepare to put in place today the data access management and sharing processes they foresee the repository needing tomorrow. For DAC members looking to integrate ADS or other semi-automated tools into their workflows, buy-in from executive leadership should be obtained at the earliest stages of this transition. DAC members should consider substantiating the need for semi/automated solutions with concrete trend data about the frequency of data access requests relative to the time from request to decision, and extrapolate these numbers to judge the anticipated demand for the repository in 1, 5, and 10 years. Tracking and transparently reporting data access request volume, access decisions, and other committee operations is likewise important not just for internal purposes, but also to demonstrate responsible data stewardship in action to prospective data contributors.
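A back-of-the-envelope sketch of the recommended extrapolation, with all figures purely illustrative (a real analysis would fit the growth rate to the committee's own request logs):

```python
# Hypothetical demand projection for data access requests, assuming
# compound annual growth. Figures are invented for illustration only.

def project_requests(current_per_year: int, annual_growth: float, years: int) -> int:
    """Compound-growth projection of yearly access request volume."""
    return round(current_per_year * (1 + annual_growth) ** years)

# Assumed baseline: 120 requests/year growing 25% per year.
for horizon in (1, 5, 10):
    print(horizon, "years:", project_requests(120, 0.25, horizon), "requests/year")
```

Even modest growth rates compound quickly over a decade, which is the kind of concrete figure that can make the case to institutional leadership for investing in semi-automated workflows.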

Third, DACs should refrain from implementing ADS wholesale without complementary human oversight of data access request intake and decisions. Pilot testing where ADS tools can be applied to the most time-consuming tasks will require taking inventory of the inputs required for each task along the data access decision workflow. Fourth, DACs should consider what human and material resources will be needed to integrate ADS effectively. These resources include DAC member expertise, computer equipment, and software development, not to mention member education and training resources. Finally, DACs should collaborate on setting standards for how data access requests should be adjudicated and tailor ADS tools in line with these consensus criteria. There is ongoing work to this effect as part of the Ethical Provenance Subgroup of the Global Alliance for Genomics and Health (including the development of an "Ethical Provenance Toolkit"); additional representation from repositories that steward other diverse health datasets would be ideal to coordinate access management strategies across the field.

The explosion in the volume and complexity of genomic and associated health data is converging with the need to manage access to these data more efficiently. Such trends point intuitively to solutions that can help alleviate, or at least prevent, bottlenecks in the access process to preserve the scientific and social value of data generated from public investments in research. To put ADS solutions to the test, future research should compare access decisions and their outcomes between institutions that do and do not use such tools for data access management, and examine whether ADS delivers on its efficiency promises and whether it liberates DAC member time previously spent addressing procedural matters – allowing more opportunities for committee deliberation on substantive ethics issues.

Data availability

Materials described in the manuscript and data supporting our findings can be made available upon request. All requests should be directed to Vasiliki Rahimzadeh, PhD at [email protected].


Acknowledgements

The authors wish to thank members of the Data Access Committee Review Standards Working Group, and the Regulatory and Ethics Work Stream of the Global Alliance for Genomics and Health for their contributions to the intellectual community that inspired this work.

This study was funded by the National Human Genome Research Institute as an Administrative Supplement grant to the AnVIL program for the Study of Bioethical Issues [U24HGO10262].

Author information

Authors and Affiliations

Center for Medical Ethics and Health Policy, Baylor College of Medicine, 1 Baylor Plaza, Suite 310DF, Houston, TX, 77098, USA

Vasiliki Rahimzadeh

Broad Institute of MIT and Harvard, Cambridge, MA, USA

Jinyoung Baek & Jonathan Lawson

School of Law, University of Edinburgh, Edinburgh, UK

Edward S. Dove


Contributions

Authors VR, JL and ED conceptualized, designed, and carried out the study. Author JB led data collection and analysis and drafted early versions of the manuscript. All authors (VR, JB, JL and ED) took part in writing and editing the manuscript, responded to peer reviewer comments, and approved the final version.

Corresponding author

Correspondence to Vasiliki Rahimzadeh .

Ethics declarations

Ethical approval.

This study was reviewed and approved by the Stanford University Institutional Review Board. All participants were informed of the purpose of the study, its funding, and its risks and benefits at the time of invitation. Informed consent was obtained from all participants prior to the interview, and participants were given opportunities to ask questions about the study procedures.

Consent for publication

Not applicable.

Competing interests

All authors are members of the Regulatory and Ethics Work Stream of the Global Alliance for Genomics and Health. JL is co-lead of the Data Use Ontology, leads the Data Use Oversight System and is a member of the Broad Institute Data Access Committee.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Rahimzadeh, V., Baek, J., Lawson, J. et al. A qualitative interview study to determine barriers and facilitators of implementing automated decision support tools for genomic data access. BMC Med Ethics 25 , 51 (2024). https://doi.org/10.1186/s12910-024-01050-y


Received : 09 January 2024

Accepted : 26 April 2024

Published : 05 May 2024

DOI : https://doi.org/10.1186/s12910-024-01050-y


Keywords

  • Data access committee
  • Implementation science
  • Decision support
  • Genomic data

BMC Medical Ethics

ISSN: 1472-6939
