The Unity Catalog REST API makes metadata management for your data lakehouse simple, scalable, and reproducible. Automate metadata tasks and integrate them with your existing workflows to streamline data management. Dive into our latest blog to learn how to take advantage of custom functionality and build production-grade integrations! 👇 🔗 https://lnkd.in/ewjiY6cs #unitycatalog #opensource #oss #restapi #metadata #lakehouse
How to use Unity Catalog REST API for metadata management
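As a rough sketch of what automating metadata over the REST API can look like: the snippet below builds requests against the open-source Unity Catalog server's HTTP interface. The base URL, port, and the exact `/api/2.1/unity-catalog/...` paths are assumptions based on the OSS server's defaults; check the linked blog and your deployment before sending anything.

```python
import json
import urllib.request

# Assumed default for the open-source Unity Catalog server; adjust
# this base URL for your own deployment.
BASE_URL = "http://localhost:8080/api/2.1/unity-catalog"

def build_create_schema_request(catalog: str, schema: str,
                                comment: str = "") -> urllib.request.Request:
    """Build (but do not send) a POST request that registers a new schema."""
    payload = json.dumps({
        "name": schema,
        "catalog_name": catalog,
        "comment": comment,
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/schemas",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def list_catalogs_url() -> str:
    """Endpoint for enumerating catalogs -- useful for metadata audits."""
    return f"{BASE_URL}/catalogs"

req = build_create_schema_request("unity", "sales", comment="Curated sales data")
print(req.method, req.full_url)
```

Sending the request is then just `urllib.request.urlopen(req)` (or the equivalent `requests.post` call) inside whatever scheduler already runs your workflows, which is what makes the task reproducible rather than click-driven.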
-
Check out the new blog post, "Announcing a new Fabric REST API for connection binding of semantic models." We are thrilled to announce the release of a new Fabric REST API for configuring connection bindings for semantic models. A connection binding defines which data connection a semantic model will use to connect to an underlying data source. #PowerPlatform #PowerBI
-
Understanding GraphQL Like a Pro — Your Ultimate API Guide!
Are you tired of juggling multiple REST endpoints and fetching more data than you need? Say hello to GraphQL, the modern API technology that lets you request exactly the data you want — no more, no less — all in a single request.
In my latest blog, I break down GraphQL's core concepts with easy-to-follow examples from real-world projects:
🔹 Schema Basics — the blueprint that defines your API's data and relationships
🔹 Efficient Queries — fetch nested and related data effortlessly
🔹 Data Mutations — update and modify data with precision
🔹 Live Subscriptions — take your apps real-time with powerful event streaming
🔹 API Introspection — discover how GraphQL knows itself and powers smart tools
Whether you're a beginner or looking to sharpen your GraphQL skills, this comprehensive guide equips you to build scalable, flexible, and efficient APIs.
👉 https://lnkd.in/g285N6gn
Let's build smarter APIs together. Feel free to share your thoughts or questions in the comments! #GraphQL #APIDevelopment #WebDevelopment #BackendTech #TechBlog
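The "exactly the data you want" idea can be illustrated without a GraphQL runtime at all. The toy `select` helper and sample data below are illustrative stand-ins (not real GraphQL), showing how a client-supplied field selection shapes the response:

```python
# Illustrative stand-in for GraphQL field selection: the client names
# exactly the fields it wants, and nothing else comes back.
USERS = {
    "u1": {"name": "Ada", "avatarUrl": "/img/ada.png",
           "email": "ada@example.com", "loginCount": 412},
}

def select(obj: dict, selection: dict) -> dict:
    """Return only the fields named in `selection`, recursing into nested dicts."""
    out = {}
    for field, sub in selection.items():
        value = obj[field]
        out[field] = select(value, sub) if sub else value
    return out

# Equivalent of the query: { user { name avatarUrl } }
query = {"name": None, "avatarUrl": None}
result = select(USERS["u1"], query)
print(result)  # {'name': 'Ada', 'avatarUrl': '/img/ada.png'}
```

A real GraphQL server does the same walk against a typed schema, with a resolver function behind each field instead of a dict lookup.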
-
In most large systems, background processes start life as tactical fixes - a quick script to sync data, clean records, or trigger downstream updates. Over time, they multiply into hundreds of independent jobs, each with its own repo, scheduler, and dependency graph.
We’ve been experimenting with collapsing that complexity into a rules engine - a single platform that evaluates conditional logic across data streams and schedules. At its core:
- React UI for managing jobs and rules, backed by a Node.js service using json-rules-engine.
- Rules are expressed as declarative JSON - each job defines a trigger (schedule or event), a scope (dataset or API feed), and an action (e.g. API update, notification, DB write).
- The backend abstracts data ingestion via connectors - REST, ODS feeds, message queues - and persists rules, execution logs, and metadata.
Architecturally, it’s fully stateless. Every execution cycle ingests data, evaluates rules, and emits events. Scaling horizontally is just a matter of spawning more evaluators off a queue.
The idea isn’t new - but applying it to enterprise operations at scale turns “scripting chaos” into declarative infrastructure. Once logic is externalised, you can version it, test it, and see it - something traditional background jobs rarely offer.
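The declarative-rule shape described above can be sketched in a few lines. The real stack is Node.js with json-rules-engine; this Python evaluator only mirrors the shape of such rules (all/any conditions over named facts, plus an event to emit), and the rule, fact names, and operators are invented for illustration:

```python
# Toy evaluator mirroring the json-rules-engine rule shape
# (conditions over facts, an event on match). Not the real library.
OPERATORS = {
    "equal": lambda a, b: a == b,
    "greaterThan": lambda a, b: a > b,
    "lessThan": lambda a, b: a < b,
}

def evaluate(rule: dict, facts: dict) -> bool:
    """Return True when the rule's conditions hold for the given facts."""
    conds = rule["conditions"]
    checks = (
        OPERATORS[c["operator"]](facts[c["fact"]], c["value"])
        for c in conds.get("all", conds.get("any", []))
    )
    return all(checks) if "all" in conds else any(checks)

# One "job": a trigger runs this on a schedule, the scope supplies the
# facts (a dataset row), and the event names the action to perform.
stale_record_rule = {
    "conditions": {"all": [
        {"fact": "days_since_sync", "operator": "greaterThan", "value": 7},
        {"fact": "status", "operator": "equal", "value": "active"},
    ]},
    "event": {"type": "resync", "params": {"target": "crm-api"}},
}

facts = {"days_since_sync": 12, "status": "active"}
if evaluate(stale_record_rule, facts):
    print("emit:", stale_record_rule["event"]["type"])  # emit: resync
```

Because the rule is plain data, it can be stored, versioned, diffed, and unit-tested exactly as the post argues - the evaluator stays generic while the logic lives outside the code.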
-
API versioning gone wild? Here’s a tip that might save you from big headaches later. 👇
🚧 API versioning will hurt you sooner than you think. It looks easy: just slap a /v2 in the URL and call it a day. But then reality hits:
- Multiple versions need to coexist
- Business rules must be duplicated
- Consumers need migration paths
It gets messy, fast.
👉 Instead of juggling multiple versions, design your API to be resilient to change. A simple tip for REST APIs: always wrap requests and responses in objects ({}); never return plain values or arrays ([]).
Why? Because objects can grow. You can add fields later without breaking existing consumers. #AllPhi
-
🧩 Everyone talks about the Modern Data Stack - but what if we stripped it down to the essentials?
I’ve been experimenting with a modern, open-source data architecture that focuses on speed, simplicity, and maintainability - without the orchestration bloat or tool sprawl. The idea: keep it clean and composable, while ensuring strong guarantees for data quality, observability, and analytical performance.
Here’s the current setup:
⚙️ Polars 🐻❄️ → high-performance ETL & transformation engine (Rust under the hood, Python-native feel)
🧪 #Dataframely → schema and data validation layer, with schemas exportable to SQLAlchemy models and Apache Arrow for seamless downstream integration
⚡ QuestDB → time-series layer for fast ingestion and sub-second queries
🧱 Delta Lake (via Polars + #delta-rs) → open table format for analytics & long-term storage
📈 Grafana Labs (with QuestDB plugin) → real-time visualization and monitoring
🔄 QuestDB → Delta exports every few minutes for analytical freshness, with regular compaction and vacuuming to avoid the small-file problem
This stack is intentionally lean - each component does one thing exceptionally well, and everything connects through open standards (Arrow, Parquet, Delta).
Some might notice there’s no dedicated ingestion layer - that’s by design. The focus here is on the post-ingest phase: once data is already in-house, how do we transform, validate, store, and visualize it in a way that’s both efficient and transparent?
👉 I’d love to get feedback from the community: what would you add (or remove) from this stack - keeping it open, clean, and lightweight? Always curious to learn from other OSS data practitioners. 💬
-
Ever get tired of the 'all-or-nothing' approach with traditional APIs? That's the core problem GraphQL solves.
For years, we were stuck with REST, which is great, but often meant over-fetching or under-fetching data. When you need just a username and an avatar, why does the API send back 50 fields of user history? It was inefficient, especially for mobile clients.
GraphQL is a paradigm shift. It's a query language for your API and a server-side runtime for executing those queries. The key idea is that the client dictates the data structure. Instead of multiple fixed endpoints, you expose a single, powerful endpoint, and the client sends a specific request detailing exactly what it needs.
How I saw the improvement in practice: a few months ago, we refactored a client-facing dashboard from a series of REST calls to a single GraphQL query. The old system required 3 separate HTTP requests to load the main page: /users/, /orders/, and /analytics/summary. The average time-to-load was ≈1.8 seconds. With GraphQL, the client sent one request and received a consolidated, tailored JSON object. We saw two major performance wins:
- Request count: dropped from 3 to 1. Less network overhead.
- Response size: decreased by an average of 65% because we eliminated all the unused fields.
This single refactor brought the dashboard's average load time down to ≈600 milliseconds. That's a huge difference in user experience.
Thinking in a strongly typed schema: the real intellectual benefit of GraphQL is the strongly typed schema. This is the API's contract. It forces both the frontend and backend teams to define the exact data model upfront. This clarity is invaluable for development speed and preventing runtime errors.
A quick mental checklist for adoption:
- Define your Type (e.g., User, Product).
- Map your Query fields to resolvers (how to fetch the data).
- Use Mutation for all write operations.
- Implement DataLoader to solve the N+1 problem (crucial for performance).
- Utilize Subscriptions for real-time data needs.
While the server setup is more complex than a basic REST endpoint, the development speed and performance gains on the client side — especially for complex, data-heavy applications — make it a powerful trade-off. It shifts the burden of efficiency from the backend to a collaborative schema design.
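The N+1 point from the checklist deserves a concrete picture. A real resolver would use DataLoader (or a port of it); the toy sketch below, with invented data and a fake batch-fetch function, only shows the core move: collect keys, fetch once, fan results back out.

```python
# Toy illustration of DataLoader-style batching (not the real library):
# instead of one user lookup per post (N+1 queries), dedupe the keys
# and load them in a single batch.
DB_CALLS = 0

def fetch_users_batch(user_ids: list) -> dict:
    """One query for many ids -- stand-in for SELECT ... WHERE id IN (...)."""
    global DB_CALLS
    DB_CALLS += 1
    return {uid: {"id": uid, "name": f"user-{uid}"} for uid in user_ids}

def resolve_authors(posts: list) -> list:
    """Attach each post's author using a single batched fetch."""
    ids = sorted({p["author_id"] for p in posts})
    users = fetch_users_batch(ids)
    return [{**p, "author": users[p["author_id"]]} for p in posts]

posts = [{"id": i, "author_id": i % 2} for i in range(6)]
resolved = resolve_authors(posts)
print(DB_CALLS)  # 1 -- six posts, a single user lookup
```

DataLoader does the same thing transparently per request tick, so individual field resolvers can stay naive while the data layer batches behind them.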
-
https://lnkd.in/e9QXD2xn
The timeline of how we got this dumb:
• 1970s: IBM’s System R gives relational databases self-describing system catalogs. DDL generation: solved. Forever. Done. Move on.
• 1992: The SQL standard adds information_schema; engines like MySQL add SHOW CREATE TABLE, which returns generated code. Every database throws away your SQL text and keeps metadata. This should have been a hint.
• 2006: AWS launches. “Infrastructure as Code” is brilliant for stateless servers. Some galaxy brain applies it to stateful databases that ALREADY GENERATE THEIR OWN CODE. Nobody stops them.
• 2013: GitHub wins. “If it’s not in git, it doesn’t exist.” Cargo cult begins. Databases politely continue not caring about your git repo.
• 2016: dbt launches with “Analytics as Code.” Finally, a way to store 1,000 files with 95% identical patterns! Your database extracts the 5% that matters and THROWS THE REST AWAY. You celebrate this as “modern data stack.”
• 2020: AI is trained on GitHub. It sees millions of handcrafted SQL files and learns: “humans write SQL files.” It doesn’t see every database engine generating SQL from metadata for decades. This is like training a chef exclusively on microwave dinners.
• 2023: ChatGPT writes SQL! Everyone loses their minds. “AI will replace data engineers!” By… generating more of the handcrafted garbage that databases discard? Sure. Great plan.
• 2025: You’re in a 2-hour PR review for a schema change that touched 50 files. Meanwhile, your database regenerates all its DDL from metadata in 0.3 seconds whenever you run SHOW CREATE. The irony is completely lost on you.
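The post's central claim - that the engine already regenerates DDL from its own catalog - is easy to verify with nothing but the standard library. SQLite keeps your CREATE statement as catalog metadata in sqlite_master and hands it back on demand (MySQL's equivalent is SHOW CREATE TABLE; the SQL standard's is information_schema):

```python
import sqlite3

# An in-memory database: create a table, then ask the engine's own
# catalog for the DDL it retained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        id      INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,
        ts      TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

# The DDL comes back out of metadata, not out of any file you wrote.
ddl = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'events'"
).fetchone()[0]
print(ddl)
```

The table name and columns are invented for the demo; the point is that `sqlite_master.sql` (like any engine's catalog) is the source of truth the post says your git repo is pretending to be.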