𝐎𝐩𝐞𝐧 𝐃𝐚𝐭𝐚 𝐂𝐥𝐢𝐦𝐚𝐭𝐞 𝐑𝐢𝐬𝐤 𝐀𝐬𝐬𝐞𝐬𝐬𝐦𝐞𝐧𝐭 𝐓𝐨𝐨𝐥𝐬 – Deep Dive

Last week, I shared a post on open data tools for climate risk assessment and their role in climate adaptation. Since it sparked some interest, here's a follow-up: a closer look at some of the best tools out there.

🦍 UN Biodiversity Lab 🦍
Hosts an amazing 269 datasets on biodiversity, from habitat intactness and ecosystem resilience to socio-economic indicators.
– Great extra: national biodiversity statistics for 193 countries.
– One highlight (integrated into many other tools): the "GLC_FCS30" land-cover map with an incredible 30x30 m resolution.

⛈️ WESR Climate ⛈️
I like this UN Environment Programme tool because it offers a great framework for analyzing climate change variables: "Drivers" and "Pressures" (what drives climate change), "States" (how it alters Earth's systems), "Impacts" (the resulting societal risks), and even "Responses" (what we do to mitigate them).

🏭 Global Infrastructure Risk Model and Resilience Index (GIRI) 🏭
A collection by the Coalition for Disaster Resilient Infrastructure of an incredible 113 up-to-date, granular datasets on climate risks to buildings and infrastructure.
– Great extra: country-level statistics on average annual losses by climate hazard and infrastructure category.

🏚️ GIS-ImmoRisk 🏚️
Not flashy, but the only tool I know that lets you export building-specific climate risk PDF reports. It even factors in asset details (size, roof shape, windows, …) to assess likely damage from climate hazards. (Covers Germany only.)

❗ Where can you find these and other open climate and nature risk tools?
– Click "Resources" on the UN Environment Programme's World Environment Situation Room website.
– Have a look at the MapX tool examples by UNEP/GRID-Geneva.
– See the partially free KanataQ tool list. (Thank you, Nawar!)
– Check out NOAA's tools and resources list. (Thank you, Douglas!)
❗ I’d appreciate hearing your opinion on the tools in this post, which tools you'd recommend, and where to find more. Link to last week's post: https://lnkd.in/dv_GKW83
Leveraging Open Data
-
Every year, organizations convince themselves they're on the verge of a data-driven renaissance, only to find themselves facing familiar challenges when December rolls around. Let's make this year different!

Year after year, companies hire specialists, license analytics platforms, and launch transformation initiatives, yet remain entangled in cumbersome spreadsheets, conflicting definitions, and isolated information. Even companies with cutting-edge tech stacks continue to wrestle with fragmented databases and incompatible data models—the legacies of countless tactical compromises.

The key to finally tackling these issues is realizing that, at its core, their root cause isn't technological but human and organizational. Messy and siloed data stems from misaligned incentives, entrenched cultural patterns, and expedient solutions that calcified into permanent architecture. When performance metrics focus solely on operational targets, with no rewards for data quality or sharing, information remains locked in departmental strongholds, each with its own language, priorities, and interests.

Doing it differently starts with strategic planning, where business leaders tend to passionately debate product launches and expansion plans, only to later ask the data teams to provide the supporting data pipelines. Instead of being decision co-pilots, data teams become post-hoc service providers—a telltale sign of data's relegation to a support function. This year, give them their rightful place as a strategic driver.

The path forward requires elevating data to the same strategic level as people, capital, and core products. Data must finally become the connective tissue binding everything together, not a mere byproduct of operations. This means rewarding data sharing, dismantling organizational gridlock, and redesigning culture around data as a strategic asset—all while systematically addressing the technical debt that holds innovation hostage. The good news?
The path to meaningful change doesn't require another major technology investment. Start with decisive steps: tie executive compensation to data quality metrics, establish empowered cross-functional data councils with real decision-making authority, and create data ownership roles that transcend departmental boundaries. For early-stage companies, this means embedding data professionals in product teams; for enterprises, it means establishing federated data governance that balances central control with departmental autonomy. The question isn't whether you'll invest in new tools—it's whether you'll finally dare to reshape the human systems and organizational architectures that determine your data destiny.
-
𝗔𝗜 𝗳𝗼𝗿 𝗚𝗢𝗢𝗗: 𝗡𝗔𝗦𝗔 𝗮𝗻𝗱 𝗜𝗕𝗠 𝗹𝗮𝘂𝗻𝗰𝗵 𝗼𝗽𝗲𝗻-𝘀𝗼𝘂𝗿𝗰𝗲 𝗔𝗜 𝗳𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻 𝗺𝗼𝗱𝗲𝗹 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝗲𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝘁 𝘄𝗲𝗮𝘁𝗵𝗲𝗿 𝗮𝗻𝗱 𝗰𝗹𝗶𝗺𝗮𝘁𝗲 𝗳𝗼𝗿𝗲𝗰𝗮𝘀𝘁𝗶𝗻𝗴! 🌍 (𝗧𝗵𝗶𝘀 𝗶𝘀 𝘄𝗵𝗮𝘁 𝘀𝗵𝗼𝘂𝗹𝗱 𝗴𝗲𝘁 𝗺𝗼𝗿𝗲 𝘀𝗽𝗼𝘁𝗹𝗶𝗴𝗵𝘁 𝗽𝗹𝗲𝗮𝘀𝗲 𝗮𝗻𝗱 𝗡𝗢𝗧 𝘁𝗵𝗲 𝗻𝗲𝘅𝘁 𝗖𝗵𝗮𝘁𝗚𝗣𝗧 𝗪𝗿𝗮𝗽𝗽𝗲𝗿!)

In collaboration with NASA, IBM just launched Prithvi WxC, an open-source, general-purpose AI model for weather and climate applications. And the truly remarkable part: this model can run on a desktop computer.

𝗛𝗲𝗿𝗲'𝘀 𝘄𝗵𝗮𝘁 𝘆𝗼𝘂 𝗻𝗲𝗲𝗱 𝘁𝗼 𝗸𝗻𝗼𝘄: ⬇️

→ The 2.3-billion-parameter Prithvi WxC model can create six-hour-ahead forecasts "zero-shot" – meaning it requires no tuning and runs on readily available data.
→ The model is designed to be customized for a variety of weather applications, from predicting local rainfall to tracking hurricanes or improving global climate simulations.
→ It was trained on 40 years of NASA's MERRA-2 data and can be quickly tuned for specific use cases. Unlike traditional climate models that require massive supercomputers, it operates on a desktop. Its uniqueness lies in its ability to generalize from a small, high-quality sample of weather data to entire global forecasts.
→ It outperforms traditional numerical weather prediction methods in both accuracy and speed, producing global forecasts up to 10 days in advance within minutes instead of hours.
→ The model has immense potential across applications, from downscaling high-resolution climate data to improving hurricane forecasts and capturing gravity waves. It could also help estimate the extent of past floods and infer the intensity of past wildfires from burn scars.

It will be exciting to see what downstream apps and use cases emerge. What's clear is that this AI foundation model joins a growing family of open-source tools designed to make NASA's vast collection of satellite, geospatial, and Earth observation data faster and easier to analyze.
With decades of observations, NASA holds a wealth of data, but its accessibility has been limited — until recently. This model is a big step toward democratizing data and making it accessible to all.

𝗔𝗻𝗱 𝘁𝗵𝗶𝘀 𝗶𝘀 𝘆𝗲𝘁 𝗮𝗻𝗼𝘁𝗵𝗲𝗿 𝗽𝗿𝗼𝗼𝗳 𝘁𝗵𝗮𝘁 𝘁𝗵𝗲 𝗳𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗔𝗜 𝗶𝘀 𝗼𝗽𝗲𝗻, 𝗱𝗲𝗰𝗲𝗻𝘁𝗿𝗮𝗹𝗶𝘇𝗲𝗱, 𝗮𝗻𝗱 𝗿𝘂𝗻𝗻𝗶𝗻𝗴 𝗮𝘁 𝘁𝗵𝗲 𝗲𝗱𝗴𝗲. 🌍

🔗 Resources:
Download the models from the Hugging Face repository: https://lnkd.in/gp2zmkSq
Blog post: https://ibm.co/3TDul9a
Research paper: https://ibm.co/3TAILXG

#AI #ClimateScience #WeatherForecasting #OpenSource #NASA #IBMResearch
-
🔁 It’s that time of the year: Reimagining Data Governance in the Age of AI — Our 2025 Year in Review

🤔 In 2025, generative AI continued to accelerate, while open data initiatives and data access policies began to slow. This growing imbalance signals what we’ve started to call a data winter: rising demand for data, paired with governance models that are no longer keeping pace.

Against this backdrop, The GovLab’s Data Program spent the year reimagining data governance for an AI-driven world. Across humanitarian response, health and wellbeing, local decision-making, and more, our work increasingly focused on the institutional and social conditions needed to keep data ecosystems accessible, legitimate, and trustworthy in the age of AI.

Four areas shaped much of our work in 2025:

🔹 Improving access to non-traditional data for the public good
From social data for health to youth co-design labs, we explored how privately held and digitally mediated data can responsibly fill critical gaps, especially where official data fall short.

🔹 Advancing data commons infrastructure for responsible AI
Through the New Commons Challenge, we moved from theory to practice, supporting real-world data commons that enable public-interest AI while preserving accountability and shared governance.

🔹 Operationalizing digital self-determination
Beyond data sovereignty, we focused on agency: how individuals and communities can meaningfully shape data reuse and AI deployment through social licensing, participatory governance, and accountable intermediaries.

🔹 Developing a new science of questions
Data and AI remain too supply-driven. Through Q-Lab, the 100 Questions Initiative, and new work on women’s health innovation, we advanced inquiry as infrastructure, aligning data access with real societal priorities.
🧭 Together with the DataTank, we also continued to cultivate a global community of data stewards through intensive courses, bootcamps, and convenings, building the human capacity needed to steward data responsibly in complex ecosystems.

Our review takes stock of a busy and reflective year, and outlines the foundation we hope to build on in 2026.

📖 Read the full review here: https://lnkd.in/ebsGQiQh
➡️ In the next few days, I will share some of the key deliverables and products we produced in 2025.
📩 Interested in collaborating? Send me a message.
🙏 Thanks to the amazing team and partners for another impactful year!

#opendata #2025 #datagovernance #ai #artificialintelligence
-
#AI | #Blockchain: MahaAgri-AI Policy 2025-2029. The key objectives the Department of Agriculture seeks to achieve through this policy are:

1. Develop and deploy a statewide food traceability and quality certification platform as part of #DPI: Establish a digitally integrated platform that ensures end-to-end traceability of agricultural produce and enables verification of food quality through credible, government-backed, and internationally recognised certifications. Leveraging AI, blockchain, QR codes, and #IoT, the platform will enhance transparency, support compliance with national and international standards, and improve market access for farmers and producer collectives.

2. Promote Farmer-Centric Design and Adoption: Ensure farmers are co-creators in AI solution design by enabling participatory model development, multilingual advisory delivery, and community-based piloting mechanisms.

3. Deploy a Remote Sensing-Based Engine as a Shared Digital Public Good for the state: Deploy a unified, AI-enabled Remote Sensing Intelligence Engine to serve as a shared digital public good across multiple departments. This engine will process satellite imagery, drone feeds, and GIS datasets to generate high-resolution insights on land use, crop health, water availability, soil moisture, vegetation indices, and disaster risk.

4. Build Digital Public Infrastructure for Agriculture (DPI-A): Operationalize the Agriculture Data Exchange (ADeX), expand weather and soil sensor networks, and integrate with platforms such as Agristack and MahaAgriTech to support AI readiness.

5. Mainstream GenAI and emerging technologies across the #Agriculture value chain: Deploy context-specific GenAI and emerging-technology-enabled tools for crop planning, disease and pest prediction, irrigation management, supply chain optimization, post-harvest handling, and market access.
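The traceability objective above leans on blockchain-style record keeping. As a purely illustrative sketch (all names, fields, and lot numbers here are hypothetical, not part of the MahaAgri platform), hash-chained records are the core idea that makes such a produce ledger tamper-evident:

```python
import hashlib
import json


def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash,
    so each entry commits to the entire history before it."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class TraceChain:
    """A minimal hash chain of produce-traceability records."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self) -> None:
        self.entries: list[tuple[dict, str]] = []

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        h = record_hash(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute every hash; any edited record breaks the chain."""
        prev = self.GENESIS
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True


# Hypothetical journey of one produce lot through the supply chain.
chain = TraceChain()
chain.append({"stage": "farm", "lot": "MH-001", "crop": "onion"})
chain.append({"stage": "warehouse", "lot": "MH-001", "temp_c": 12})

ok_before = chain.verify()                  # untouched chain verifies
chain.entries[0][0]["crop"] = "tomato"      # tamper with the farm record
ok_after = chain.verify()                   # verification now fails
```

On a real platform the chain would live on a distributed ledger and a QR code on the produce would resolve to the lot's entries, but the tamper-evidence mechanism is the same.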
-
As climate change accelerates, policymakers and researchers need immediate access to accurate, science-based data to inform critical decisions about natural climate solutions and forest conservation efforts. That's why the nonprofit CTrees developed the first global system to monitor, report, and verify (MRV) carbon stocks and land-use activities for every ecosystem on land, meeting the critical data needs of policy and markets.

In this blog, Aleena Ashary and Jules Marenghi explain how CTrees used the cash funding and cloud credits from its 2024 Amazon Web Services (AWS) Imagine Grant to enhance the organization's flagship Jurisdictional MRV (JMRV) tool. This free, open data platform provides precise annual measurements of carbon stocks, forest area, emissions, and land-use activities, revolutionizing how governments and organizations track climate policy progress and develop jurisdictional carbon credit programs. https://lnkd.in/grmttxXD
-
🔍 I've been thinking deeply about what makes data-powered governance truly effective. After some observation and some experience, I've identified three critical ingredients – what I humbly call the "Three D's".

📊 Data Exchange Platforms: The foundation that enables innovation through open data sharing and collaborative models. Estonia's X-Road has revolutionized public services by creating a secure data exchange layer connecting government databases; citizens can access nearly all government services online, with 99% of public services available digitally. Singapore's Smart Nation Sensor Platform integrates data from sensors and IoT devices across the city to optimize everything from traffic flow to energy consumption.

📜 Data Policies: The essential guardrails that establish trust. The European Union's GDPR has set a global standard for data protection, enhancing citizen trust while creating a framework for responsible innovation. Closer to home, the DPDP will start to set benchmarks for data-centric guardrails for a massive, diverse, and data-rich country like India.

🧩 Decision-Support Systems: The mechanisms that transform data into action. South Korea's COVID-19 response leveraged its Epidemic Investigation Support System to enable rapid contact tracing while maintaining transparency with citizens. Likewise, New Zealand's Integrated Data Infrastructure connects data across government agencies to inform policy decisions with robust economic analysis, resulting in more targeted and effective social programs.

💡 When these Three D's are combined deftly by the public sector, citizen-centric governance becomes the cornerstone of government. At the scale India operates, this is a real opportunity to show the way for the Global South.
🤔 I think we're at that inflection point with the recent announcements of AI Kosha and the DPDP: together they can help safely incubate innovative solutions that optimize the delivery of government schemes, ensuring timely, targeted assistance for citizens. Thoughts? #DigitalTransformation #PublicSector #Innovation #DataStrategy
-
🌍 A new era of open data has arrived 🌍

On 1 October 2025, the European Centre for Medium-Range Weather Forecasts (ECMWF) made its entire Real-time Catalogue open to all under a CC-BY-4.0 licence. This is one of the largest meteorological datasets in the world, now freely accessible for science, innovation, and entrepreneurship.

This moment feels much like when Landsat data was opened years ago — a decision that unlocked billions in economic value, empowering entrepreneurs, local governments, and innovators to build solutions no one had imagined at the time.

Now, with open meteorological data:
🔹 Local businesses can create new weather-driven services — from agriculture optimisation and insurance models to logistics and retail planning.
🔹 Entrepreneurs and startups gain access to world-class data to train AI/ML models, develop predictive tools, and build new digital products without prohibitive licensing barriers.
🔹 Local governments can improve urban planning, resilience strategies, and climate adaptation measures by tapping into global-scale forecasts at local resolution.
🔹 Communities worldwide benefit from better preparedness, aligning with the UN's Early Warnings for All initiative — protecting lives and livelihoods.

Innovation often begins when barriers to data fall away. With ECMWF opening the gates, we can expect new industries, smarter decisions, and stronger climate resilience to emerge — just as we saw with the Landsat revolution.

💡 The question is: who will be the first to harness this opportunity and turn open forecasts into open futures?

https://lnkd.in/e5SEt-dP

#OpenData #ClimateResilience #Innovation #Entrepreneurship #WeatherData #ECMWF #AI #Geospatial
-
Developing and emerging nations in the Global South must enhance the transparency and accessibility of climate change and other interconnected data. This is crucial because it enables them to make informed decisions and take appropriate action to address the challenges these variables pose.

Currently, adaptation funding is granted at face value: it is awarded to those with the most appealing policies, the ones that resonate with the sponsor. Typically, vulnerability, risk, and cost-projection datasets are used to determine project feasibility; where such data are lacking, it is hard to fully understand cost-effectiveness or attribute the impacts of projects.

Developing nations should enhance data availability to improve the translation of funding into concrete action. This is possible by continuously gathering and storing datasets on climate scenarios, future predictions, and climate investments on an open-source platform, which will also increase accessibility.

Access to data is a significant issue in Africa, where obtaining free datasets can be time-consuming, even at climate centres. Data should be accessible not only to foreign investors but also to local people and private organisations. Giving local people access to data will encourage interventions informed by concrete, research-based datasets; locally led initiatives are often developed without such backing.

Improving the accessibility and transparency of data for the private sector will also increase its willingness to invest, as risks can be better assessed through conventional approaches. This will lead to better use of resources, as gaps in the data can be identified and addressed.
Additionally, states should encourage cross-sector and cross-organisation sharing of datasets: pooling resources to address common issues increases efficacy and avoids duplicating roles and responsibilities where a single shared dataset would suffice.
-
In my previous post, I explored the hidden costs of data silos. Today, I want to share practical steps that deliver value without requiring immediate organisational restructuring or technology overhauls. The journey from siloed to integrated data follows a maturity curve, beginning with quick wins and progressing toward more substantial transformation.

For immediate progress:

1) Identify your "golden datasets": Focus on the 20% of data driving 80% of decisions. Prioritise customer, product, and financial datasets that cross departmental boundaries.
2) Create a simple business glossary: Document how terms differ across departments. When Finance defines "revenue" differently than Sales, capturing both definitions creates transparency without forcing uniformity.
3) Implement read-only integration patterns: Establish one-way flows where analytics platforms access source data without disrupting existing systems. These connections create cross-silo visibility with minimal risk.
4) Build a culture of trust: Reward cross-departmental collaboration. Create incentives that make data sharing a path to recognition rather than a threat to influence or expertise.
5) Establish cross-functional data forums: Host regular meetings where data users share challenges and use cases, building relationships while identifying practical integration opportunities.

As these initiatives gain traction, organisations can advance to more substantial approaches:

6) Match your approach to complexity: Smaller organisations often succeed with centralised data management, while larger enterprises typically require domain-centric strategies.
7) Apply bounded contexts: Map where business domains have distinct needs and terminology, creating clear translation points between areas like Sales, Finance, and Operations.
8) Adopt a data product mindset: Designate product owners for critical datasets who treat data as a product with clear consumers and quality standards rather than simply an asset to be stored.
9) Develop a federated metadata approach: Catalogue not just what exists, but how data relates across domains, making relationships between siloed systems explicit.
10) Maintain disciplined data modelling: Well-structured data within domains makes integration between them far more manageable, regardless of your architectural approach.

This stepped approach delivers immediate value while building momentum for more sophisticated strategies. The most successful organisations pair technical solutions with cultural transformation, recognising that effective data integration is ultimately about people collaborating across boundaries.

In my next post, I'll explore how governance models evolve with data integration maturity. What approaches have you found most effective in addressing data silos?

#DataStrategy #DataCulture #DataGovernance #Innovation #Management
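The business glossary in step 2 can start far simpler than a dedicated tool. A minimal sketch (the data structure and the sample definitions are purely illustrative, not any particular product's API) of an entry that records departmental definitions side by side instead of forcing one:

```python
from dataclasses import dataclass, field


@dataclass
class GlossaryTerm:
    """One glossary entry: a term plus per-department definitions."""
    name: str
    definitions: dict[str, str] = field(default_factory=dict)

    def add_definition(self, department: str, definition: str) -> None:
        self.definitions[department] = definition

    def summary(self) -> str:
        """Render all definitions side by side for transparency."""
        lines = [f"Term: {self.name}"]
        for dept, text in sorted(self.definitions.items()):
            lines.append(f"  [{dept}] {text}")
        return "\n".join(lines)


# Hypothetical example: Finance and Sales define "revenue" differently,
# and the glossary captures both rather than picking a winner.
revenue = GlossaryTerm("revenue")
revenue.add_definition("Finance", "Recognised revenue, net of refunds.")
revenue.add_definition("Sales", "Total contract value at signing.")
print(revenue.summary())
```

Even a shared spreadsheet with the same columns (term, department, definition) achieves the goal; the point is that conflicting definitions become visible and discussable rather than hidden in each silo.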