Brilliant post by Jessica Talisman on the difference between semantic layers and ontologies. The frame couldn't be clearer:
- Semantic layers solved for metric governance in a world where users interacted with dashboards
- Ontologies provide a structured representation of knowledge where AI systems can access context and make inferences
Full article here: https://lnkd.in/eBDyMUfa
Semantic Layers vs Ontologies: Governance and AI Context
To move from generic AI to enterprise-grade results, we must bridge the gap between raw data and machine reasoning. Recent insights from metadata experts https://lnkd.in/g9XD7BV2 highlight that while LLMs are powerful, they remain context-blind. At Oceania AI, we believe strategic advantage comes from grounding technology in business reality.

The path to responsible progress involves three pillars of context:
1. Ontologies (The Logic): A formal map of business concepts and rules that prevents AI from hallucinating business logic.
2. Semantic Layers (The Translation): This layer ensures the AI understands that technical data columns refer to specific business concepts, like a "Customer".
3. Context Graphs (The Reality): A live version of your map that captures real-time relationships, such as policy applications and transaction approvals.

Organisations can move from probabilistic guessing to deterministic business rules by following these steps:
1. Define Domain Vocabulary: Identify your entities (nouns) and relationships (verbs) to create axioms the AI must follow.
2. Context Injection via RAG: Use your ontology to guide Retrieval-Augmented Generation (GraphRAG), ensuring the AI finds related concepts rather than just matching text.
3. Activate Metadata: Use protocols like the Model Context Protocol (MCP) to allow AI agents to verify facts against your ontology in real time.
4. Automated Mapping: Link your technical assets to ontological concepts so the AI knows exactly which asset is the source of truth.

By providing a structured "brain" for your AI to consult, you reduce risk and ensure outputs are grounded in your specific operational reality. #ResponsibleAI #AIGovernance #OceaniaAI
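A minimal sketch of the "Define Domain Vocabulary" and "Context Injection" steps above: a toy ontology of entity triples used to expand a retrieval query with related concepts, GraphRAG-style. The entity names, relationships, and helper functions here are illustrative assumptions, not a real enterprise schema or any specific GraphRAG library.

```python
# Ontology: entities (nouns) and relationships (verbs) as simple triples.
# Hypothetical insurance-flavoured vocabulary for illustration only.
ONTOLOGY = [
    ("Customer", "holds", "Policy"),
    ("Policy", "generates", "Claim"),
    ("Claim", "triggers", "Payment"),
    ("Customer", "initiates", "Transaction"),
]

def related_concepts(entity, depth=1):
    """Walk the triples (in both directions) to find concepts connected to `entity`."""
    frontier, seen = {entity}, {entity}
    for _ in range(depth):
        nxt = set()
        for subj, _verb, obj in ONTOLOGY:
            if subj in frontier and obj not in seen:
                nxt.add(obj)
            if obj in frontier and subj not in seen:
                nxt.add(subj)
        seen |= nxt
        frontier = nxt
    return seen - {entity}

def expand_query(query, entities):
    """Return the query plus ontology neighbours of any entity it mentions,
    so retrieval can match related concepts rather than just matching text."""
    mentioned = [e for e in entities if e.lower() in query.lower()]
    extra = set()
    for e in mentioned:
        extra |= related_concepts(e)
    return query, sorted(extra)

query, extra = expand_query("show recent customer activity", ["Customer", "Policy"])
print(extra)  # → ['Policy', 'Transaction']
```

The point of the sketch: the retrieval step now "knows" that a question about customers is also about policies and transactions, even though the query text never mentions them.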
AI-ready design best practices for Power BI semantic models make analytics more accessible: ask questions in plain language and get trusted, context-aware answers with data agents.
✨𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗯𝗲𝘁𝘁𝗲𝗿 𝗗𝗮𝘁𝗮 𝗔𝗴𝗲𝗻𝘁𝘀 𝘀𝘁𝗮𝗿𝘁𝘀 𝘄𝗶𝘁𝗵 𝘀𝘁𝗿𝗼𝗻𝗴 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗠𝗼𝗱𝗲𝗹𝘀✨ Many of you use 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗠𝗼𝗱𝗲𝗹𝘀 𝗮𝘀 𝗮 𝗱𝗮𝘁𝗮 𝘀𝗼𝘂𝗿𝗰𝗲 𝗶𝗻 𝗙𝗮𝗯𝗿𝗶𝗰 𝗗𝗮𝘁𝗮 𝗔𝗴𝗲𝗻𝘁𝘀. With the 𝗻𝗲𝘄 𝗯𝗲𝘀𝘁 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀 on how to prepare 𝗣𝗼𝘄𝗲𝗿 𝗕𝗜 𝘀𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗺𝗼𝗱𝗲𝗹𝘀, you can get 𝗯𝗲𝘁𝘁𝗲𝗿 𝗮𝗻𝗱 𝗳𝗮𝘀𝘁𝗲𝗿 𝗮𝗻𝘀𝘄𝗲𝗿𝘀 from Fabric Data Agents! Sandeep Pawar 👏 📘 𝗟𝗶𝗻𝗸: https://lnkd.in/gXDfHAai #MicrosoftFabric #DataAgents #SemanticModels #PowerBI #AI
Preparing your semantic models for data agents requires better quality and adherence to best practices. At Fabric February, I'll be presenting a practical session on how to validate and automatically fix best practice violations in your semantic models. You'll learn:
- How and when to apply best practice rules in Tabular Editor and Semantic Link Labs
- Key considerations when using both tools together
- How to incorporate scripts to automate validation in your development process for continuous quality assurance
You'll leave with ready-to-use scripts and actionable knowledge to automate your semantic model development with quality built in. #FabFeb
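As a rough illustration of what rule-based validation looks like, here is a sketch in the spirit of Tabular Editor's Best Practice Analyzer and Semantic Link Labs, but deliberately not using either tool's real API: the model dictionary and the two rules are simplified stand-ins.

```python
# Simplified stand-in for a semantic model definition (hypothetical shape).
MODEL = {
    "tables": [
        {"name": "Sales", "columns": [
            {"name": "Amount", "description": "Net sales amount", "hidden": False},
            {"name": "SalesKey", "description": "", "hidden": False},
        ]},
    ],
}

# Each rule pairs a human-readable name with a predicate that flags a violation.
RULES = [
    ("Columns should have descriptions",
     lambda table, col: not col["description"]),
    ("Key columns should be hidden",
     lambda table, col: col["name"].endswith("Key") and not col["hidden"]),
]

def run_bpa(model):
    """Return a (rule, table, column) tuple for every violation found."""
    violations = []
    for table in model["tables"]:
        for col in table["columns"]:
            for rule_name, broken in RULES:
                if broken(table, col):
                    violations.append((rule_name, table["name"], col["name"]))
    return violations

for rule, tbl, col in run_bpa(MODEL):
    print(f"[{rule}] {tbl}[{col}]")
```

In a real pipeline this check would run in CI against the deployed model, with the violation list either failing the build or feeding an automated fix-up script.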
Are you a #Fabric or #PowerBI developer building data agents on top of your existing semantic models? If so, you must check this one! 👇 https://lnkd.in/ddMUmtfx Let's face it: adding an agent is easy. Making it accurate and relevant is the hard part. Sandeep Pawar shares loads of tips and hints that can help you. Have a look and share your feedback with us! #AI #Agent #Microsoft #DataAIReady
Great article on how to optimize semantic models for AI and chat with your data across Power BI and Data Agents. Shoutout to Sandeep Pawar for capturing the key patterns so clearly 🚀 This reflects a lot of the foundational work my team has been building to make these experiences accurate, scalable, and reliable. https://lnkd.in/gJPH-AXW #PowerBI #DataAgents
The new fleet of analytics agents is reshaping data-driven decision-making with speed, automation, and real-time insights. #ExpertsCloud #AI #Analytics #AIDriven #BusinessIntelligence #DataDriven #FutureOfTech #Automation https://lnkd.in/drij5PkC
𝗥𝗲𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗗𝗮𝘁𝗮 𝗠𝗮𝘁𝘂𝗿𝗶𝘁𝘆: 𝗔𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀 𝗗𝗮𝘁𝗮 𝘃𝘀. 𝗔𝗜-𝗥𝗲𝗮𝗱𝘆 𝗗𝗮𝘁𝗮

Viewing "𝘥𝘢𝘵𝘢 𝘮𝘢𝘵𝘶𝘳𝘪𝘵𝘺" as a single ladder might be one of the most expensive mistakes we're making with AI. I now see it as two independent axes pulling data in opposite directions.

I've observed a hidden assumption: if dashboards are trusted and metrics governed, we must be close to AI-ready. But treating AI as just "the next rung" on the analytics ladder quietly blocks progress. I've stopped scoring data maturity as one number. Instead, I think in terms of two distinct paths:

𝗔𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀 𝗺𝗮𝘁𝘂𝗿𝗶𝘁𝘆 is about interpretability: the journey from fragmented reports to stable metrics, clear definitions, and governed dashboards. We compress reality into something leadership can scan, debate, and defend.

𝗔𝗜 𝗺𝗮𝘁𝘂𝗿𝗶𝘁𝘆 is about world-modeling. Here I need rich context, atomic events, explicit relationships, timeliness, and grounding so systems can decide what happens next. Instead of hiding complexity, I expose it: edge cases, anomalies, rare events, and messy intent signals that help models build robust internal pictures.

That's why I no longer try to "upgrade" analytics data into AI data. Analytics-ready data is shaped after the fact, a by-product of human decision systems. AI-ready data must be captured before we summarize: events, context, and semantics preserved at source. Retrofitting AI onto presentation layers asks models to reconstruct reality from dashboard residue.

Mapping this as an 𝗔𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀 × 𝗔𝗜 𝗠𝗮𝘁𝘂𝗿𝗶𝘁𝘆 𝗠𝗮𝘁𝗿𝗶𝘅 reveals clear patterns:
• 𝗛𝗶𝗴𝗵 𝗮𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀, 𝗹𝗼𝘄 𝗔𝗜 = The Dashboard Trap: beautiful, trusted charts on data aggregated so tightly that nuance is squeezed out.
• 𝗟𝗼𝘄 𝗮𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀, 𝗵𝗶𝗴𝗵 𝗔𝗜 = The Black Box Lab: impressive models nobody can explain when the board asks, "Why trust this?"

The goal is the 𝘁𝗼𝗽-𝗿𝗶𝗴𝗵𝘁 𝗾𝘂𝗮𝗱𝗿𝗮𝗻𝘁: analytics compresses reality into stable narratives humans trust; AI expands reality into possibilities machines reason over. They don't share pipelines, but they share underlying truth: accurate events, preserved meaning, clear connections to real-world behavior.

This reframe changes my questions from "How mature is our data?" to:
• "𝗙𝗼𝗿 𝘄𝗵𝗼𝗺 𝗮𝗿𝗲 𝘄𝗲 𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗶𝗻𝗴: 𝗵𝘂𝗺𝗮𝗻𝘀 𝗼𝗿 𝗺𝗮𝗰𝗵𝗶𝗻𝗲𝘀?"
• "𝗔𝗹𝗼𝗻𝗴 𝘄𝗵𝗶𝗰𝗵 𝗮𝘅𝗶𝘀 𝗮𝗿𝗲 𝘄𝗲 𝗺𝗼𝘃𝗶𝗻𝗴?"

Once those answers are explicit, architecture, investments, and trade-offs fall into place.
According to TechTarget, organizations need to reshape architectures and strategies to support scalable AI. Essential for real-time insights. Read more 👉
How to use AI to elevate the purpose and productivity of Data Science teams

A hard truth about data science work: a lot of expert time gets spent on non-expert tasks. Recent surveys quantify it:
- In CrowdFlower’s 2016 report, data scientists reported spending roughly 80% of their time building and refining datasets before they even get to pattern-finding or modeling.
- In Anaconda’s 2022 State of Data Science survey, respondents report spending less than 30% of their time on model building and deploying tasks.

That gap matters, not just for efficiency, but for purpose. Make no mistake, cultivating quality datasets is incredibly important to data science work. However, most data scientists didn’t choose this field to spend the majority of their week cleaning and refining data. They’re uniquely qualified to do higher-leverage work: framing the problem, selecting the right analytic approach, interpreting results, communicating tradeoffs, and driving decisions. This is where AI can be deployed “for good.”

1) Automate the drudgery (without losing rigor). AI copilots and agentic workflows can accelerate the grind associated with data discovery, schema mapping, quality checks, transformation code, documentation, and reusable pipelines. Data scientists can play a supervisory role in this process, verifying the outputs and refining when necessary.

2) Elevate the team’s work to judgment + strategy. When preparation becomes faster and more standardized, teams spend more time on what actually creates value:
- hypothesis generation
- experiment design
- causal reasoning / forecasting strategy
- stakeholder alignment
- shaping decision narratives that drive action

**Important**: We need to be careful not to automate away tasks that are essential to learning and skill building! AI can speed up modeling as well, but junior data scientists still need reps building and validating models end-to-end. That’s how they learn failure modes, leakage, evaluation pitfalls, and how to debug confidently. A strong pattern is:
- Junior DS: validate and refine data inputs after the AI takes a first pass. Then build task-specific models and be the go-to knowledge experts at this level for model performance and vulnerabilities.
- Senior DS: use AI to orchestrate broader solutions, enforce standards, and translate results into decisions.

If we implement AI this way, we get the best outcome: more productive teams and more meaningful work. AI that amplifies human capability rather than hollowing it out. AI deployed for good = less wrangling, more insight, more purpose. #AIForGood #DataScience #ProfessionalPurpose #ProfessionalDevelopment
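To make the "automate the drudgery, supervise the output" pattern concrete, here is a small sketch of an automated data-quality check that produces a report for a data scientist to review rather than act on blindly. The column names, threshold, and record shape are illustrative assumptions, not any particular team's pipeline.

```python
def quality_report(rows, required, max_null_rate=0.1):
    """Flag missing columns and columns whose null rate exceeds a threshold.

    Returns human-readable issue strings; a reviewer decides what to do
    with each flag instead of the pipeline auto-fixing silently.
    """
    issues = []
    cols = set().union(*(r.keys() for r in rows)) if rows else set()
    for col in required:
        if col not in cols:
            issues.append(f"missing column: {col}")
            continue
        nulls = sum(1 for r in rows if r.get(col) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            issues.append(f"high null rate in {col}: {rate:.0%}")
    return issues

# Toy records standing in for a real extract.
rows = [
    {"user_id": 1, "amount": 9.5},
    {"user_id": 2, "amount": None},
    {"user_id": 3, "amount": None},
]
print(quality_report(rows, required=["user_id", "amount", "ts"]))
```

The supervisory loop is the point: the automated check surfaces issues cheaply, and the human spends their time on the judgment call (is a 67% null rate in `amount` a pipeline bug or a legitimate sparse field?) rather than on writing the boilerplate check itself.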
Russ's take on leveraging AI to increase productivity while also enhancing data science professionals' sense of purpose. This is a key imperative for professional development in 2026.