Are those bold sustainability claims real or just greenwashing? The truth isn’t always obvious. Here’s how to spot the difference and take action.

Sustainability is the buzzword of the decade, but not every “green” initiative is what it claims to be. For some companies, sustainability goals are less about driving real change and more about crafting a feel-good narrative to stay relevant. Here’s how to tell if a company’s sustainability goals are authentic or performative:

🚩 Red Flags of Performative Sustainability
1. No Clear Metrics: Vague promises like “net-zero by 2050” with no transparent roadmap or interim milestones. Example: companies announcing climate neutrality without detailing how they’ll achieve it. Often this means buying carbon offsets (sometimes dubious) instead of reducing actual emissions.
2. Cherry-Picked Wins: Highlighting small, flashy changes (e.g., eliminating plastic straws) while ignoring a much larger environmental footprint. Example: fast fashion brands touting “sustainable collections” while producing billions of garments annually with no commitment to reducing overall production.
3. ESG Reporting Gaps: Slick sustainability reports that focus on aesthetics but offer little substance on environmental or social impact.

✅ Signs of Genuine Sustainability Goals
1. Ambitious, Measurable Targets: Companies that set specific, science-based goals and regularly report progress. Example: Microsoft’s goal to become carbon negative by 2030, backed by aggressive investments in renewable energy and carbon capture technology.
2. Systemic Change: Organizations working to transform their entire supply chain or business model for sustainability. Example: Patagonia’s commitment to a circular economy by offering repair services and prioritizing recycled materials.

What Can You Do as a Professional Today?
1. Ask Hard Questions: Look beyond marketing jargon. If a company claims “we’re committed to sustainability,” ask: How do you measure progress? What specific actions have been taken? How does this align with your business model?
2. Challenge Your Own Workplace: Push for real accountability by advocating for transparency in ESG goals. Suggest frameworks like the Science Based Targets initiative (SBTi) or the Global Reporting Initiative (GRI) to keep your company honest.

What’s your take? How can we move from talk to impact?

With purpose and impact, Mario
Avoiding climate change data cherry-picking
Summary
Avoiding climate change data cherry-picking means using all relevant evidence instead of selectively highlighting information that supports a specific viewpoint while ignoring other data. This practice is key for honest reporting and helps ensure trustworthy decisions about climate action.
- Check full context: Always examine both the broader data set and any missing information before drawing conclusions or sharing climate facts.
- Ask for transparency: Request detailed explanations about how climate numbers were gathered, what was measured, and any limitations to help spot selective reporting.
- Question bold claims: Challenge statements that sound impressive but lack clear metrics or ignore larger environmental impacts, focusing on meaningful progress over flashy headlines.
-
Carbon accounting isn’t magic. It’s disciplined plumbing. I just finished a dense methodology report, and here’s the real takeaway on what good looks like:

1. Boundaries first, bragging later. List everything you influence: owned fuel, power, upstream stuff you buy, downstream stuff customers do, end-of-life. If you skip customer trips or device disposal, you’re painting abs with no legs.
2. Hybrid > purity wars. Start broad with spend-based/EIO factors to catch the long tail fast, then surgically swap in process LCAs and metered data for the big emitters. Coverage first, then precision.
3. Transport: go modal or go home. Distance + fuel if you have it; spend-based only as a fallback. Include last-mile, air hops and those “invisible” customer store runs. That category is the uninvited guest eating your Scope 3 snacks.
4. Energy = location + market lenses. Location-based tells reality. Market-based shows your procurement hustle. Report both. Stop cherry-picking.
5. Well-to-wheel, always. Tailpipe-only is vintage 2010. Upstream extraction and refining sits there waving hi.
6. Packaging and devices need digital twins. SKU or BOM parameter sets (mass, material, % recycled, power profile) feed automated manufacturing and use-phase emissions. No more “generic plastic part”.
7. Refrigerants are stealthy bullies. Tiny leaks × huge GWP = outsized hit. Track charge, leak %, gas type. Simple math. Big impact.
8. People emissions matter. Business travel = distance × cabin class. Commuting = survey + mode split. Don’t hand-wave “low materiality” without data proving it.
9. Rebuild history when methods improve. Consistent and comparable beats fake year-on-year “progress” caused by changing factors behind the curtain.
10. Assurance mindset, early. Pretend an auditor is sitting behind you while you structure data lineage. Future you will send past you a thank-you coffee.

Smell test for a credible footprint: multi-model, boundary-explicit, category-granular, evidence-logged, annually rebaselined.

If your master file is still a single Excel tab, you’re estimating, not accounting. My rule of thumb: if you can’t trace any final number back to a line of raw data or a published factor in ≤ 90 seconds, it’s not ready for the board (or for claims in a sustainability report). Let’s raise the bar. 🌍

#CarbonAccounting #GHGProtocol #LCA #Decarbonization #ClimateData #Sustainability #NetZero #DataEngineering #ESG #Scope3
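Almost every rule above reduces to one pattern: emissions = activity data × emission factor. A minimal sketch of rules 7 and 8; the leak rate, flight factor, and charge values are illustrative placeholders, not published figures (the R-410A GWP of 2088 is a commonly cited 100-year value, but it varies by assessment report):

```python
def emissions_kg_co2e(activity: float, factor: float) -> float:
    """The generic pattern: activity data x emission factor."""
    return activity * factor

# Rule 7: refrigerants -- tiny leak x huge GWP = outsized hit.
# kg of gas leaked = charge (kg) x annual leak rate; then multiply by GWP.
charge_kg = 10.0
leak_rate = 0.05      # assumed 5% annual leakage
gwp_r410a = 2088      # commonly cited 100-yr GWP for R-410A
refrigerant = emissions_kg_co2e(charge_kg * leak_rate, gwp_r410a)

# Rule 8: business travel = distance x cabin-class factor (placeholder factor).
flight = emissions_kg_co2e(activity=5500, factor=0.30)  # km x kgCO2e/km

print(f"refrigerant leak:    {refrigerant:.0f} kg CO2e")  # ~1 t from 0.5 kg of gas
print(f"one long-haul trip:  {flight:.0f} kg CO2e")
```

Half a kilogram of leaked refrigerant outweighs most of a long-haul business flight, which is exactly why "low materiality" needs data behind it.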
-
🧪 How I Avoid Misleading Results in Research Paper Submissions

Putting a research paper together? Avoid common pitfalls by following the 6 key steps I use to ensure integrity, transparency, and credibility in every study:

1️⃣ Use Reliable Data Sources
• Collect data from peer-reviewed, official, or validated databases such as Scopus, PubMed, Google Scholar, or Web of Science.
• Steer clear of unverified websites, biased samples, or incomplete datasets.
✅ This is the foundation of trust in research integrity.

2️⃣ Apply Correct Methodology
• Choose defensible statistics and research methods appropriate to your design.
• Collaborate with domain experts or statisticians to catch potential flaws before they derail the analysis.

3️⃣ Report ALL Results, Not Just the Positive Ones
• Don’t hide inconvenient data; share both significant and non-significant findings.
• Transparency builds scientific credibility and trust. Don’t cherry-pick.

4️⃣ Avoid Overfitting and Data Manipulation
• Don’t tweak data or model outputs to force-fit your hypothesis.
• Say no to practices like p-hacking, cherry-picking, or fabricating outliers.

5️⃣ Disclose Limitations Clearly
• Be upfront about limitations: small sample size, bias risks, methodological constraints.
• Explain how those limitations may affect your conclusions so readers can interpret your results honestly.

6️⃣ Follow Ethical Review and Peer Input
• Submit your study for peer review or advisor feedback.
• Use plagiarism-checking tools like Turnitin or Grammarly.
• Declare any conflicts of interest or funding sources.

⚠ Pitfalls to Avoid
• Selectively excluding data
• Ignoring failed or null experiments
• Misinterpreting statistical outputs
• Falsifying or fabricating results

✨ Why does this approach matter? These principles help ensure that research withstands scrutiny, promotes reproducibility, and fosters credibility in your academic community. Quality over hype, every time.

🧠 What’s your process for avoiding misleading results in research? Share a tip or a challenge you’ve faced below 👇
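The p-hacking warning in step 4 is easy to make concrete with a simulation. In this assumed setup, every "experiment" is 100 flips of a fair coin, so the true effect is zero, and the ~2-sigma cutoff approximates a two-sided p < 0.05:

```python
import random

# Why running many tests and reporting only the "significant" ones misleads:
# each experiment is 100 flips of a FAIR coin (the true effect is zero).
# Normal approximation: sd = 5 heads, so |heads - 50| >= 10 is roughly p < 0.05.
random.seed(42)

def heads_in_experiment(n_flips: int = 100) -> int:
    return sum(random.random() < 0.5 for _ in range(n_flips))

n_experiments = 1000
false_positives = sum(
    abs(heads_in_experiment() - 50) >= 10 for _ in range(n_experiments)
)

# Roughly 5% of null experiments look "significant" by chance alone.
# Cherry-picking just those would fabricate an effect out of pure noise.
print(f"'significant' null results: {false_positives} / {n_experiments}")
```

Run enough tests and some will always clear the threshold; that is why reporting all results, not just the flattering ones, is a hard requirement rather than a courtesy.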
-
🚩 How To Flag Misleading and Dishonest Charts (https://lnkd.in/e9cB8r4E) is a practical guide on spotting misleading charts so you can communicate insights more accurately and reliably, with plenty of examples and design guidelines for creating honest charts. Kindly put together by Nathan Yau.

🚫 Charts aren’t merely a visual representation of data.
✅ Charts are visuals that have a specific job to do.
✅ Don’t cut bar chart baselines: always start at 0.
✅ Don’t expand the y-axis far beyond the max value.
✅ Don’t choose narrow segments to highlight a point.
🤔 Beware of smoothing, as it often hides the real data.
🚫 Correlation doesn’t mean causation: validate and verify.
✅ Don’t add time gaps to the timeline: they hide what happened.
✅ Avoid leading titles, as people use them to interpret the data.

We often think of charts as visual representations of data. But as Nick Desbarats says, charts are visuals that have a job to do, e.g. make people aware, prompt an action, answer a question, or help filter and look up values. To do that job well, they need to be honest. If they aren’t, they spread skewed and biased messages, fast.

Charts combine visual encodings (e.g. color, area, position, direction, length, angle) with scales. Visual encodings fill the space based on the available data, measured against the scales we choose. If the scales are chosen unfairly, or the data is cherry-picked, charts tell a wrong story.

Here are some common attributes of dishonest charts:
🎢 Slopes → Artificial steepness of lines suggests notable changes.
🚢 Damper → Values appear smaller when the y-axis extends far beyond the max.
🍒 Cherrypicker → Choosing narrow segments to highlight a point.
🌊 Smooth operator → Averages show patterns but hide the bumps in reality.
🗑️ Overbinner → Clumping data into broad groups to hide diversity.
👀 Base Stealer → A shortened y-axis makes tiny differences seem large.
🦋 Probable Cause → Showing two things that follow similar or opposing patterns.
⏰ Time Gap → Points in time are purposely selected; others are left out.
🔥 Storyteller → Leads with a narrative, then squeezes the data to support it.
📇 Descriptor → Words chosen to deflect or invite misinterpretation.

Different design choices lead to different charts, with different interpretations attached to them. And that interpretation is often linked to what a reader already knows, what they expect, or what they choose to believe. The purpose of a good chart is to make wrong interpretations less likely. Unfortunately, plenty of charts intentionally invite wrong interpretations. So be careful in choosing the data set you rely on, check sources, and explore not only what is there but also what is missing.

As Nathan suggests, a single data set can represent infinite narratives, depending on the angle you look from. So be cautious about the story you are telling, and avoid the common but dishonest attributes that invite wrong conclusions. #ux #design
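The "Base Stealer" effect is pure arithmetic. A tiny sketch (the two values are made up) showing how a truncated baseline inflates the apparent difference between two bars:

```python
# How a truncated y-axis ("Base Stealer") inflates a difference.
# Two made-up values that genuinely differ by about 2%:
a, b = 98.0, 100.0

def apparent_ratio(baseline: float) -> float:
    """Ratio of drawn bar heights when the axis starts at `baseline`."""
    return (b - baseline) / (a - baseline)

honest = apparent_ratio(0.0)      # axis starts at 0  -> ~1.02x
truncated = apparent_ratio(95.0)  # axis starts at 95 -> ~1.67x

print(f"honest axis:    bar B looks {honest:.2f}x as tall as bar A")
print(f"truncated axis: bar B looks {truncated:.2f}x as tall as bar A")
```

Same data, same bars: starting the axis at 95 makes a 2% difference look like a two-thirds difference, which is exactly why the guide says bar charts must start at 0.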
-
This guide maps the minefield of science reporting, splitting problems into unintentional errors (e.g., confusing correlation with causation, oversimplifying complex findings, ignoring study limitations or statistical significance) and deliberate manipulations (sensational headlines, cherry-picked data, publication-bias exploitation, even fabrication). It offers a practical checklist for journalists and scientists alike: provide context (methods, sample size, limitations), report uncertainty (effect sizes, confidence intervals—not just p-values), avoid hype and absolutist claims, disclose conflicts, and link to sources and datasets. The aim is simple but urgent—prevent the public from being misled and rebuild trust through accuracy, transparency, and responsible communication. #learningdispatch #strategy #BoardOfDirectors #BusinessStrategy #LeadershipDevelopment #ExecutiveCoaching #BusinessLeaders #LeadershipMatters
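The "report uncertainty" item on that checklist can be shown in a few lines of stdlib-only code: a 95% confidence interval for a mean effect using the normal approximation. The sample values are invented for illustration:

```python
import math
import statistics

# Report an effect with its uncertainty, not just a significance verdict.
# Invented sample: change in some outcome measure for 12 subjects.
sample = [2.1, 1.8, 2.5, 0.9, 1.7, 2.2, 1.4, 2.8, 1.9, 2.0, 1.1, 2.3]

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

# 95% CI via the normal approximation; a t critical value would be slightly
# wider at n = 12, but this keeps the sketch dependency-free.
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem

print(f"mean effect = {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A sentence like "mean effect 1.89, 95% CI [1.58, 2.20]" tells a reader both the size of the effect and how precisely it was measured, which a bare "p < 0.05" never does.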
-
In climate science, precision and clarity are crucial. We know with high confidence that global temperatures are rising and that this change is closely linked to rising CO2 levels. But how do we quantify and prove these trends?

One key metric is the temperature anomaly: the difference between a location’s temperature at a given time and a reference value. This method smooths out local variations and gives a more consistent picture of temperature changes across large regions.

🔍 Why focus on anomalies? Direct temperature measurements can vary significantly over short distances due to local factors like urbanization. Anomalies, however, show consistent trends across much larger areas. For example, if central Madrid experiences a temperature increase of 0.5°C, the surrounding regions will likely experience a similar rise, even if their baseline temperatures differ. This allows scientists to construct a reliable global temperature record from fewer data points.

📈 The Big Picture: Over the past 130 years, temperature anomalies show a clear upward trend. This aligns with the increase in atmospheric CO2, and the data strongly suggest that the trend is driven by human activity. Consider this relationship:

Temperature change (ΔT) = Climate sensitivity (k) × ln(Current CO2 / Reference CO2)

This simple yet powerful model, first developed over a century ago, shows how CO2 concentration influences global temperatures and accurately anticipated the warming we are now observing.

🔄 What about skeptics? There are claims that global warming has slowed or even paused. This misconception arises from cherry-picking data over short periods. When we examine long-term trends, the evidence is clear: global temperatures are rising. Short-term fluctuations don’t alter the overall pattern, which shows a steady increase in temperature anomalies over time.
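The logarithmic relationship above is simple enough to evaluate directly. In this sketch the sensitivity k is an assumed illustrative value of ~3°C per doubling of CO2 (i.e. k = 3 / ln 2); real estimates carry substantial uncertainty, and this equilibrium figure is not directly comparable to observed transient warming:

```python
import math

# The post's model: delta_T = k * ln(C / C0).
# k below is an ASSUMED sensitivity of ~3 deg C per CO2 doubling (k = 3 / ln 2);
# it is illustrative only and not a precise published estimate.

def delta_t(c_now_ppm: float, c_ref_ppm: float,
            k: float = 3.0 / math.log(2)) -> float:
    """Temperature change (deg C) implied by a change in CO2 concentration."""
    return k * math.log(c_now_ppm / c_ref_ppm)

# Pre-industrial ~280 ppm vs a recent ~420 ppm:
print(f"implied warming: {delta_t(420, 280):.2f} deg C")  # -> 1.75 deg C
```

Note the logarithm: each successive doubling of CO2 adds the same increment of warming, which is why the model is usually quoted as degrees per doubling rather than degrees per ppm.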