Who says you can't have validity and reliability in longitudinal case studies? Not me! A trope about qualitative work is that validity and reliability are not possible. That's simply untrue. Despite publications to the contrary, I still hear the trope repeated again and again by quants.

So, as a reminder: Christopher Street and Kerry Ward wrote a nice paper more than a decade ago on evaluating (and ensuring) validity and reliability in longitudinal case studies. They point out that authors can rely on the attributes of temporality, i.e., the longitudinal form of the data, to establish validity. By considering (1) how to segment data into time chunks, (2) the length of the timeline, and (3) which time periods should be in the data, authors can make a convincing case for the validity of their analysis. As a bonus, they include some thoughts on timeline reliability, e.g., whether another coder would have coded the data the same way.

If you are doing qualitative, longitudinal work, this is a good paper to have in your back pocket when questioned about validity and reliability! Give it a look!

The citation: Street, C. T., & Ward, K. W. (2012). Improving validity and reliability in longitudinal case study timelines. European Journal of Information Systems, 21(2), 160-175.

The link: https://lnkd.in/e_ZVYtdw

The abstract: Management Information Systems researchers rely on longitudinal case studies to investigate a variety of phenomena such as systems development, system implementation, and information systems-related organizational change. However, insufficient attention has been spent on understanding the unique validity and reliability issues related to the timeline that is either explicitly or implicitly required in a longitudinal case study.
In this paper, we address three forms of longitudinal timeline validity: time unit validity (which deals with the question of how to segment the timeline – weeks, months, years, etc.), time boundaries validity (which deals with the question of how long the timeline should be), and time period validity (which deals with the issue of which periods should be in the timeline). We also examine timeline reliability, which deals with the question of whether another judge would have assigned the same events to the same sequence, categories, and periods. Techniques to address these forms of longitudinal timeline validity include: matching the unit of time to the pace of change to address time unit validity, use of member checks and formal case study protocol to address time boundaries validity, analysis of archival data to address both time unit and time boundary validity, and the use of triangulation to address timeline reliability. The techniques should be used to design, conduct, and report longitudinal case studies that contain valid and reliable conclusions.
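The time unit validity question above can be made concrete. Here is a minimal sketch (in Python, using a made-up event log; the events and the `segment` helper are illustrative, not from the paper) showing how the choice of time unit (year, quarter, or month) changes how many chunks the same timeline yields, which is exactly the segmentation decision the authors ask researchers to justify:

```python
from datetime import date
from collections import defaultdict

# Hypothetical event log from a longitudinal case study.
events = [
    (date(2021, 1, 5), "kickoff meeting"),
    (date(2021, 2, 28), "vendor selected"),
    (date(2021, 6, 14), "pilot rollout"),
    (date(2022, 3, 2), "full deployment"),
]

def segment(events, unit):
    """Group events into time chunks: 'year', 'quarter', or 'month'."""
    buckets = defaultdict(list)
    for d, label in events:
        if unit == "year":
            key = (d.year,)
        elif unit == "quarter":
            key = (d.year, (d.month - 1) // 3 + 1)
        else:  # month
            key = (d.year, d.month)
        buckets[key].append(label)
    return dict(buckets)

# A coarse unit merges distinct phases; a fine unit may fragment them.
print(len(segment(events, "year")))     # 2 chunks
print(len(segment(events, "quarter")))  # 3 chunks
print(len(segment(events, "month")))    # 4 chunks
```

The point of the sketch: the same four events tell a two-phase, three-phase, or four-phase story depending on the unit chosen, so the unit must be matched to the pace of change rather than picked by convenience.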
Validity and Reliability in Research
Explore top LinkedIn content from expert professionals.
Summary
Validity and reliability in research are core concepts that help ensure findings are truthful and consistent. Validity means the research accurately reflects what it intends to study, while reliability is about getting the same results if the study were repeated under similar conditions.
- Clarify your methods: Document every step and decision in your research process to make it easy for others to follow or replicate your work.
- Check your findings: Use techniques like member feedback or triangulation to verify that your conclusions truly represent participants’ experiences.
- Minimize bias: Keep an audit trail or engage in reflexive journaling to show that your results are shaped by the data, not by personal opinions.
-
🔍 Reliability Vs Validity In Qualitative Research

In qualitative research, reliability and validity are both essential to ensuring the trustworthiness and rigor of a study, but they refer to different aspects of research quality:

🔹 Reliability – Consistency and Dependability
Definition: Reliability refers to the consistency of the research process and findings. It asks whether the study would produce similar results if repeated in the same context with similar participants.
Key Question: Would another researcher, using the same methods in the same context, arrive at similar findings?
Example: If a researcher uses a thematic analysis approach and another researcher, following the same coding steps, identifies the same themes from the interview transcripts, the process is considered reliable.
Strategies to Enhance Reliability:
- Clear documentation of methods and decisions
- Inter-coder agreement
- Audit trails
- Reflexive journaling

🔹 Validity – Accuracy and Credibility
Definition: Validity is about the truthfulness or credibility of the findings. It addresses whether the research accurately captures participants’ meanings, experiences, and the phenomena being studied.
Key Question: Do the findings truly represent the participants’ perspectives?
Example: If interviews with rural tourism stakeholders lead to themes about sustainability that align with their lived experiences, and these interpretations are verified through participant feedback, the study demonstrates high validity.
Strategies to Enhance Validity:
- Triangulation (data sources, methods, researchers)
- Member checking
- Thick description
- Prolonged engagement with participants
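Inter-coder agreement, mentioned above as a reliability strategy, is often quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (the two coders and their labels are invented for illustration):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' labels."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: fraction of items coded identically.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement: chance overlap given each coder's label frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two coders labelling the same 8 interview excerpts.
coder1 = ["cost", "cost", "trust", "trust", "cost", "trust", "cost", "cost"]
coder2 = ["cost", "cost", "trust", "cost",  "cost", "trust", "cost", "trust"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.47
```

Raw agreement here is 6/8 = 0.75, but kappa is only about 0.47 once chance agreement is subtracted, which is why kappa (not raw percent agreement) is the usual reporting convention.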
-
Epidemiology and statistics are often seen as sister quantitative disciplines. 👯♀️ Historically, however, epidemiology evolved to tackle challenges in observational data 📊, addressing the issues of truth, chance, and bias that threaten validity. Statisticians, especially those working with observational data, need to grasp key epidemiological principles to ensure valid analyses. 🔍

1️⃣ Understanding Confounding 🧩 Including measured and residual confounding, and the trade-offs between precision and validity.

2️⃣ Decisions at the design vs analysis stage ⚖️ Know which bias controls fit observational data: design-stage (like exclusion) or analysis-stage (like propensity scores). Analysis fixes often signal limitations in study design.

3️⃣ Inclusion and exclusion criteria are weighty decisions 🚪 These aren’t just checklists. When setting eligibility rules, watch for selection bias ⚠️, e.g., conditioning on a collider when an inclusion criterion is related to both the exposure and a confounder, as in studying smoking cessation and weight loss but limiting the sample to gym members (gym membership is tied to health consciousness, a confounder).

4️⃣ Differential vs non-differential misclassification 📏 Understand measurement bias, whether differential or not, and how to test for and fix it. 🛠️

5️⃣ Missing values handling ❓ It’s not one-size-fits-all, and it isn’t always multiple imputation. Know the philosophy, the assumptions, and the implications for validity. 🤔

6️⃣ Drawing random samples and principles of weighting 🎲 Beyond simple random sampling, master cluster sampling, assess whether selection/non-response is differential, and correct for it.

7️⃣ Clinical study design in outbreaks 🏥 Understand how to design clinical studies that address the ethics of allocation, especially during outbreaks. For example, when a new vaccine is developed, we can’t withhold treatment or issue a placebo. Designs like the stepped-wedge trial become very important: everybody gets the treatment eventually, but in a staggered manner, so we observe treated versus untreated person-time. ⏳

8️⃣ Rates vs. proportions ⏱️ Understand the difference between rates and proportions, especially in a world where so many things are called “rates” that aren’t true rates: survival rates, response rates, prevalence rates, and many other “fake” rates. Understand person-time, e.g., that following 1 person for 10 years contributes the same person-time as following 10 people for 1 year. 📅

9️⃣ Robust analysis 💪 Robust isn’t complex code; it’s picking the right method to find the truth, context by context. 🎯

🔟 Communication 🗣️ It’s not just numbers. Master communicating data with incomplete information, balancing technical precision with clarity for the media and the public, and conveying uncertainty well. 📢

📌 Bottom line: These principles help cut through the messy tangle of truth, chance, and bias. The mission? Extract the raw truth, strip away chance and bias, and deliver it clear as day. 💬

#Chisquares #VillageSchool
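The person-time point in item 8️⃣ is easy to verify numerically. A small sketch (the event counts are invented) showing why an incidence rate, unlike a proportion, has time in its denominator:

```python
# Incidence rate = events / person-time, so the denominator is the same
# whether we follow 1 person for 10 years or 10 people for 1 year each.

def incidence_rate(events, person_years):
    """Events per person-year of follow-up."""
    return events / person_years

one_long   = incidence_rate(2, 1 * 10)  # 1 person followed 10 years, 2 events
many_short = incidence_rate(2, 10 * 1)  # 10 people followed 1 year, 2 events
print(one_long, many_short)  # 0.2 0.2 — identical person-time, identical rate

# Contrast with a proportion (often mislabelled a "rate"):
# 2 cases among 10 people = 0.2, dimensionless, no time in the denominator.
proportion = 2 / 10
```

The two cohorts produce the same rate (0.2 per person-year) because they accumulate the same 10 person-years, which is exactly the equivalence the post describes.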
-
🚨 𝗗𝗮𝗻𝗴𝗲𝗿𝗼𝘂𝘀 𝗔𝗜 𝗶𝗻 𝗛𝗲𝗮𝗹𝘁𝗵𝗰𝗮𝗿𝗲: 𝘈𝘤𝘤𝘶𝘳𝘢𝘵𝘦 enough to trust, 𝘗𝘳𝘦𝘤𝘪𝘴𝘦 enough to harm!

When people hear that an AI model is “good,” they often ask one question: Is it accurate? But accuracy is only half the story. The other half is precision. A simple way to think about this is:
👉 Accuracy = validity (are we right?)
👉 Precision = reliability (are we consistent?)

Using patient risk assessment as an example, here are the four possible combinations.

𝟭. 𝗔𝗰𝗰𝘂𝗿𝗮𝘁𝗲 𝗮𝗻𝗱 𝗽𝗿𝗲𝗰𝗶𝘀𝗲 (𝘃𝗮𝗹𝗶𝗱 𝗮𝗻𝗱 𝗿𝗲𝗹𝗶𝗮𝗯𝗹𝗲) The model consistently identifies the right patients as high risk for a particular outcome. Predictions are stable, trustworthy, and useful in practice. This is the goal.

𝟮. 𝗔𝗰𝗰𝘂𝗿𝗮𝘁𝗲 𝗯𝘂𝘁 𝗻𝗼𝘁 𝗽𝗿𝗲𝗰𝗶𝘀𝗲 (𝘃𝗮𝗹𝗶𝗱, 𝗯𝘂𝘁 𝘂𝗻𝗿𝗲𝗹𝗶𝗮𝗯𝗹𝗲) On average, the model gets the numbers right, but individual predictions jump around. One day a patient is flagged as high risk, the next day they aren’t. The model may look good in aggregate, but it’s hard to rely on clinically for an individual patient.

𝟯. 𝗣𝗿𝗲𝗰𝗶𝘀𝗲 𝗯𝘂𝘁 𝗻𝗼𝘁 𝗮𝗰𝗰𝘂𝗿𝗮𝘁𝗲 (𝗿𝗲𝗹𝗶𝗮𝗯𝗹𝗲, 𝗯𝘂𝘁 𝗶𝗻𝘃𝗮𝗹𝗶𝗱) The model is consistent, but consistently wrong. It repeatedly flags the same type of patient as high risk, even though they rarely develop complications. This often points to a systematic issue like biased data.

𝟰. 𝗡𝗲𝗶𝘁𝗵𝗲𝗿 𝗮𝗰𝗰𝘂𝗿𝗮𝘁𝗲 𝗻𝗼𝗿 𝗽𝗿𝗲𝗰𝗶𝘀𝗲 (𝗻𝗲𝗶𝘁𝗵𝗲𝗿 𝘃𝗮𝗹𝗶𝗱 𝗻𝗼𝗿 𝗿𝗲𝗹𝗶𝗮𝗯𝗹𝗲) Predictions are inconsistent and wrong. The model provides no real value.

🔎 𝗛𝗼𝘄 𝗧𝗲𝗮𝗺𝘀 𝗙𝗶𝗻𝗱 𝘁𝗵𝗲 𝗦𝘄𝗲𝗲𝘁 𝗦𝗽𝗼𝘁 Data scientists test models across different patient groups, adjust thresholds, improve data quality, and choose evaluation metrics that reflect real clinical goals. The aim is not just to be right on average, but to be right and dependable for every patient.

𝗪𝗵𝘆 𝗟𝗲𝗮𝗱𝗲𝗿𝘀 𝗦𝗵𝗼𝘂𝗹𝗱 𝗖𝗮𝗿𝗲 In healthcare, AI that isn’t both accurate (valid) and precise (reliable) creates risk, erodes trust, and can cause harm that often goes unnoticed.

𝗧𝗵𝗲 𝗕𝗼𝘁𝘁𝗼𝗺 𝗟𝗶𝗻𝗲 🎯 Models must be both accurate 𝘢𝘯𝘥 precise or not used at all!
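The accuracy-vs-precision distinction above maps onto two simple statistics: bias (distance of the average prediction from the truth) and spread (variability across repeated predictions). A toy sketch with invented numbers, covering three of the four quadrants:

```python
import statistics

# Hypothetical repeated risk predictions for one patient whose true
# 1-year complication risk is 0.30. All values are made up.
true_risk = 0.30
models = {
    "accurate_precise":   [0.29, 0.30, 0.31, 0.30, 0.30],  # quadrant 1
    "accurate_imprecise": [0.10, 0.55, 0.25, 0.45, 0.15],  # quadrant 2
    "precise_inaccurate": [0.70, 0.71, 0.70, 0.69, 0.70],  # quadrant 3
}

for name, preds in models.items():
    bias = abs(statistics.mean(preds) - true_risk)  # validity: right on average?
    spread = statistics.stdev(preds)                # reliability: consistent?
    print(f"{name}: bias={bias:.2f}, spread={spread:.2f}")
```

Note that "accurate_imprecise" has near-zero bias yet the largest spread: it looks fine in an aggregate evaluation while flagging the same patient differently from day to day, which is the clinical hazard the post highlights.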
-
Ensuring Quality in Qualitative Research – The Four Pillars of Rigor

In qualitative research, ensuring the quality of your findings isn’t just important—it’s essential. 🌟 Whether you’re exploring human experiences or uncovering complex patterns, demonstrating credibility, transferability, dependability, and confirmability sets the foundation for rigor. But how do you do it? 🤔 Here are four simple questions every qualitative researcher should answer:

1️⃣ Credibility How do you ensure your findings accurately represent participants’ realities? Methods like member checking, prolonged engagement, or triangulation help verify that your interpretations are true to the voices of your participants.

2️⃣ Transferability How can readers determine if your findings apply to their context? Providing rich, thick descriptions of your research setting and participants allows readers to make informed judgments about applicability.

3️⃣ Dependability What steps ensure that your study process is consistent and repeatable? An audit trail, peer reviews, and detailed documentation ensure your methodology is transparent and replicable.

4️⃣ Confirmability How do you show that your findings reflect participants' experiences and not researcher bias? Reflexive journaling, external audits, and a clear audit trail demonstrate that you’ve minimized the impact of personal bias on your findings.

🔍 These questions not only guide your research process but also communicate its rigor and transparency to your audience. Remember, your readers need to trust that your research is grounded, applicable, and methodologically sound. 💡

What strategies do you use to address these pillars in your research? Let’s exchange ideas in the comments!👇

#QualitativeResearch #ResearchMethodology #EnsuringQuality #DataAnalysis #Academia
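The audit trail mentioned under dependability and confirmability can be as simple as a timestamped log of analytic decisions. A minimal sketch (the entry fields and example decisions are invented, not a standard schema):

```python
import json
from datetime import datetime, timezone

# Minimal audit-trail sketch: each analytic decision is recorded with a
# timestamp and rationale so the process can be reviewed or replicated.
audit_trail = []

def log_decision(step, rationale):
    """Append one dated, justified decision to the trail."""
    audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "rationale": rationale,
    })

log_decision("sampling", "Added 3 participants to reach thematic saturation.")
log_decision("coding", "Merged 'distrust' and 'skepticism' after peer debrief.")

# Export so an external auditor can inspect the chain of decisions.
print(json.dumps([entry["step"] for entry in audit_trail]))
```

Even this level of documentation lets a reader (or auditor) reconstruct why the analysis took the path it did, which is the transparency the four pillars ask for.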
-
Criteria for Assessing and Ensuring Trustworthiness in Qualitative Research

Trustworthiness in qualitative research ensures that findings are credible, transferable, dependable, and confirmable. These criteria, introduced by Lincoln and Guba (1985), help researchers establish rigor in qualitative studies.

📌 Key Criteria for Trustworthiness

✅ Credibility (Truthfulness of Data) 🧐
Ensures that the research accurately reflects participants' experiences. Equivalent to internal validity in quantitative research.
How to enhance it?
- Prolonged engagement with participants
- Triangulation (using multiple data sources)
- Member checks (participants validate findings)

✅ Transferability (Applicability in Other Contexts) 🌍
Determines whether findings can apply to other settings. Similar to external validity in quantitative research.
How to enhance it?
- Thick description (detailed context & background information)
- Purposive sampling (selecting participants relevant to the study focus)

✅ Dependability (Consistency of Findings) 🔁
Ensures research findings are stable and repeatable. Comparable to reliability in quantitative research.
How to enhance it?
- Audit trail (documenting research processes)
- Stepwise replication (conducting repeated studies)
- Expert peer review

✅ Confirmability (Objectivity & Bias Control) 🎯
Ensures findings are neutral and unbiased. Comparable to objectivity in quantitative research.
How to enhance it?
- Reflexivity (acknowledging researcher bias)
- Triangulation (cross-verifying data sources)
- Clear documentation of the research process

🔍 Final Takeaway
By applying these four trustworthiness criteria, researchers can enhance qualitative findings' credibility, reliability, and applicability, ensuring greater confidence in their results.

💬 What strategies do you use to ensure trustworthiness in research? Let’s discuss below! 👇

#QualitativeResearch #Trustworthiness #ResearchMethods #DataValidity #AcademicWriting #ResearchEthics #Credibility #Transferability #Dependability #Confirmability #DataAnalysis #Triangulation #PeerReview #AuditTrail #ScientificResearch #Objectivity #Reliability #ThickDescription #Reflexivity #EvidenceBased