Man, there is so much I could write about this post. I am glad that Semantic Layers are really being surfaced these days as an integral part of the data stack. My biggest issue with all the Semantic Layers out there: DAX is superior to MDX. It's not even close. I can create magic with my DAX code. And when I have so-called experts on MDX in the room and I ask them to do what I consider foundational functions in MDX, something is always slightly off. DAX is tricky, but simple. All that being said, I do wish Microsoft would look at what AtScale is doing with model inheritance, preventing model sprawl, and aggregations. There is so much more work they could be putting into Tabular models that would make every other semantic layer tech comparison not even come close. But we will get some more Copilot instead. Ooof. https://lnkd.in/g8t75dHu
Why DAX is better than MDX for Semantic Layers
More Relevant Posts
Most data platforms still store everything in Parquet — and that’s fine for analytics. But our workloads are changing. We’re no longer just running SQL dashboards. We’re serving embeddings, vector search, RAG pipelines, and ML feature retrieval — and those workloads need fast selective access, not full table scans. Parquet wasn’t built for that.

Instead of “replacing Parquet,” the smarter move is to layer new, retrieval-optimized formats alongside it — e.g. Lance, Nimble, or Vortex for AI workloads:
• Keep Parquet for BI.
• Use Lance/Nimble where retrieval speed actually matters.
• Evolve the storage layer gradually — not by rewriting the lake.

I wrote a breakdown of why this shift is happening, what each format is good at, and how to adopt them without re-architecting everything.
👉 Full blog here: https://lnkd.in/e9wgkZFC
#DataEngineering #AIInfrastructure #VectorSearch #Lakehouse #MachineLearning #DataArchitecture
COE LinkedIn - https://lnkd.in/ehyTcEyJ
🔥 Real-time just met simplicity in Microsoft Fabric! The new Eventhouse Endpoint for Lakehouse is here and it’s a real game-changer. 💪

Imagine being able to query your Lakehouse tables instantly, with high performance even on massive datasets, and do it all using KQL or SQL. Now, that’s real-time intelligence made practical.

Here’s what makes it so powerful:
⚡ Instant schema sync → your Lakehouse tables are reflected within seconds, no manual setup needed.
🚀 Optimized analytics → fast, scalable queries with full Query Acceleration Policy (QAP) support.
🧠 Advanced insights → time-series, anomaly detection, even Python-based analysis right inside Fabric.
🔗 Unified experience → access both current and future data through a mirrored KQL database view.
🎯 Rich consumption options → think Copilot, dashboards, and NL2KQL-driven exploration.

All this, enabled in just one click. Once activated, you’ll see your Eventhouse branch appear automatically → synced, optimized, and ready for deep analytics.

To me, this is the perfect bridge between real-time intelligence and AI-driven operations. Microsoft Fabric keeps setting the pace for the future of data platforms. 🚀

#MicrosoftFabric #RealTimeIntelligence #Eventhouse #Lakehouse #DataEngineering #AI
Out of all the Snowflake Cortex AI features, the one I find most exciting is Snowflake Intelligence. Why? It finally bridges the gap between business users and data in a slick, secure app built by Snowflake. No more waiting on analysts. No SQL required. Just ask "show me Q4 revenue by product category" in plain English and get instant answers with charts.

But costs can spiral fast if you're not monitoring the right metrics. You need to combine Cortex Analyst + Cortex Search + warehouse costs to get the full picture. Poor semantic view design can make your warehouse costs explode. The good news: with the right setup and monitoring, you can keep Intelligence accessible AND affordable. 👇
#Databend evolved into a 𝙪𝙣𝙞𝙛𝙞𝙚𝙙 𝙢𝙪𝙡𝙩𝙞𝙢𝙤𝙙𝙖𝙡 𝙙𝙖𝙩𝙖𝙗𝙖𝙨𝙚

Pure #Rust kernel = blazing performance & safety

One Snowflake-compatible SQL interface seamlessly handles:
• BI Analytics - Traditional SQL workloads
• AI/Vector Search - Embeddings & semantic search
• Full-text Search - JSON queries
• Geospatial Analytics - Location-based insights

All workloads share the same query optimizer & elastic runtime.

🔹 SQL Analytics: https://lnkd.in/gMwwxJjn
🔹 Vector Search: https://lnkd.in/gX6st8W3
🔹 JSON Search: https://lnkd.in/gaWeVqUd
🔹 Geo Analytics: https://lnkd.in/gxAacGbd
Microsoft Fabric has released major enhancements for Data Agent creators—making it easier to debug, refine, and iterate on intelligent agents that generate SQL from natural language.

What it enables:
- Run Steps view shows which example queries influenced the final output
- Diagnostic Summary provides downloadable traces of agent reasoning steps
- SDK now includes evaluatefewshot_examples() to validate NL/SQL pairs
- Success and failure cases easily converted to DataFrames for review
- Markdown editor for agent instructions improves clarity and structure
- Multi-tasking flow lets creators switch between chat and configuration without losing context

Why it matters:
- Accelerates iteration and improves SQL accuracy
- Helps creators diagnose unexpected results and tune examples
- Encourages better documentation and clearer agent behavior
- Reduces friction when switching between testing and editing

Details on the blog: https://lnkd.in/gSqgKECz
#MicrosoftFabric #DataAgent #MSFTAdvocate
What if I told you that you can 10X your Excel powers, WITHOUT AI! Yes, it's almost a sin as a data engineer, but for most of my quick data exploration and CSV wrangling I used to open up Excel a lot. These days, however, the DuckDB UI from MotherDuck is so much faster and more powerful that my Excel sits idle gathering dust. If you're interested in my workflow, I'll share how I fetch multiple CSV files in one go and even turn Reddit posts from JSON objects into tables. https://lnkd.in/e2Sq4WPy
Starburst Teams Up with Snowflake and Industry Leaders to Drive Open Data and AI Interoperability Through the Open Semantic Interchange https://lnkd.in/e9i8UezF Starburst Snowflake #OpenSemanticInterchange #DataInteroperability #SemanticDataModel #AIDataEcosystem #VendorNeutralStandards #Starburst #Snowflake #DataAnalytics #BusinessIntelligence #DataGovernance #DataEngineering #AIInnovation #OpenSourceStandards #EnterpriseDataStrategy #SemanticMetadata #UnifiedMetrics #FutureOfAI #DataPlatformIntegration #LakehouseArchitecture #DataDrivenInsights
Source: https://lnkd.in/datd3pR3

🚀 AI in Analytics Just Got Real

The shift from BI-first to AI-first isn’t just hype—it’s a game-changer. ClickHouse’s DWAINE uses Claude 4.0 + MCP servers to let non-technical users ask “What happened to service X?” and get instant insights 📊. But don’t ditch your BI tools yet—certified metrics still need traditional dashboards ⚖️.

💡 Key takeaways:
- MCP standardization avoids vendor lock-in.
- Real-time docs cut hallucinations.
- AI handles 70% of queries, but critical decisions require SQL validation.

#DataAnalytics #AIIntegration #BusinessIntelligence
🔥 𝐃𝐚𝐭𝐚 𝐀𝐠𝐞𝐧𝐭 𝐢𝐬 𝐛𝐞𝐜𝐨𝐦𝐢𝐧𝐠 𝐦𝐲 𝐠𝐨-𝐭𝐨 𝐝𝐞𝐛𝐮𝐠𝐠𝐢𝐧𝐠 𝐩𝐚𝐫𝐭𝐧𝐞𝐫 🔥

Okay, so I've spent the last few weeks messing around with Fabric's Data Agent. Didn't expect much at first, but it's actually fixing stuff that used to eat up my entire morning.

Here's the deal: you know that feeling when a query just won't run fast enough? You're sitting there, staring at execution plans, trying to figure out what's wrong. Data Agent takes one look and goes "you're scanning the entire table, maybe index this column?" Sounds simple, but it's caught things I completely missed.

Last Tuesday, I had a messy pipeline to build. Tons of transformations, lots of moving parts. Normally I'd be switching between Stack Overflow and documentation for hours. Instead, I just explained what I needed. The agent gave me a starting point. Not perfect, but good enough that I could shape it into what I actually wanted. Finished by lunch instead of staying late.

𝐖𝐡𝐚𝐭 𝐈 𝐝𝐢𝐝𝐧'𝐭 𝐬𝐞𝐞 𝐜𝐨𝐦𝐢𝐧𝐠: The Fabric-specific stuff has been surprisingly helpful. I've been using the platform for months, but there are always those little things - when should I actually use a lakehouse? How does partitioning work here versus other systems? The agent just explains it when questions come up. Feels less like reading a manual and more like asking someone who knows. Also, I'm not drowning in browser tabs anymore trying to find that one example from the docs.

𝐁𝐞𝐢𝐧𝐠 𝐡𝐨𝐧𝐞𝐬𝐭 𝐡𝐞𝐫𝐞: I don't just copy-paste what it gives me. That'd be a mistake. Everything gets reviewed, tested, tweaked. But having something handle the grunt work means I'm actually thinking about the bigger picture instead of fighting with commas and brackets.

It's not doing my job. It's just making the annoying parts less annoying.

Anyone else finding AI tools actually useful day-to-day? Or is it just me?
#DataEngineering #MicrosoftFabric #DataAgent #Azure #AI #PowerBI #DataPipelines #CloudEngineering #TechLife #DataWork #FabricPlatform #PowerBIDeveloper #BusinessIntelligence #DataAnalytics #PowerBICommunity
Completely agree. DAX is powerful and elegant once mastered.