MotherDuck

Data Infrastructure and Analytics

Infrastructure for answers

About us

The data warehouse built for getting answers from your data. Works with AI agents and SQL alike. Built in collaboration with the folks at DuckDB Labs.

Website
https://motherduck.com
Industry
Data Infrastructure and Analytics
Company size
51–200 employees
Headquarters
Seattle
Type
Privately held
Founded
2022

Locations

Employees at MotherDuck

Updates

  • 𝗬𝗼𝘂: carefully crafting one dbt model, waiting for your AI assistant to respond. 𝗬𝗼𝘂𝗿 𝗰𝗼𝗺𝗽𝗲𝘁𝗶𝘁𝗼𝗿: running multiple AI agents in parallel across isolated environments, shipping features while you're still typing. That's the approach Jacob Matson is demonstrating live at DataTune 2026. What you'll get from this demo:
    How to spin up parallel environments instantly ↳ (so you can work on multiple features without conflicts)
    Managing multiple AI agents simultaneously ↳ (the setup you can actually replicate)
    Practical patterns for dbt and dlt workflows ↳ (that you can use Monday morning)
    Where this works and where it doesn't ↳ (so you don't waste time on the wrong approach)
    Join us at DataTune, March 6-7, where 500+ data practitioners come together for two days of technical talks, hands-on training, and networking with people who get your data jokes 👉 https://datatuneconf.com/

  • What if you could fix June without rebuilding January through December? Sounds funny? Give us a second to explain. We just contributed microbatch support to dbt-duckdb. So, instead of rebuilding an entire table when you find a bug, you reprocess just the time window that's broken. June had bad data? Fix June. The rest stays untouched. One of our best analytics engineers, Dumky de Wilde, wrote the deep dive. It's technical, but fun. Covering:
    1/ Why benchmarks lie (they measure single runs, not Friday night firefights)
    2/ How DuckDB's row groups work differently than partitions
    And the gotchas we hit along the way. Microbatch isn't always the fastest, but oftentimes "slow is smooth, smooth is fast." When you discover a column was calculated wrong three months ago, you'll thank us. It's live on dbt-duckdb master now. Read Dumky's full breakdown: https://lnkd.in/ghV8TSY3

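The core idea of microbatching can be sketched outside dbt: partition the table by a time window and rebuild only the batches that overlap the broken range, leaving every other batch untouched. A minimal Python sketch (the monthly batch layout and the `rebuild` transform are illustrative stand-ins, not dbt-duckdb's actual API):

```python
from datetime import date

# Toy "table": one batch of rows per month, keyed by the first day of that month.
batches = {
    date(2024, m, 1): [{"month": m, "value": m * 10}] for m in range(1, 13)
}

def rebuild(batch_key):
    # Stand-in for re-running the model's SQL over a single time window.
    return [{"month": batch_key.month, "value": batch_key.month * 10 + 1}]

def reprocess(batches, start, end):
    """Rebuild only batches whose key falls in [start, end); leave the rest as-is."""
    for key in list(batches):
        if start <= key < end:
            batches[key] = rebuild(key)
    return batches

# June had bad data? Fix June. January through December stay untouched.
reprocess(batches, date(2024, 6, 1), date(2024, 7, 1))
```

Only the June batch is recomputed; the other eleven keep their original rows, which is exactly why a mid-year fix doesn't cost a full-table rebuild.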
  • Stern Risk Partners, LLC was running dbt on a single Heroku Postgres database. Transformations locked their production app for 2 hours and threatened to grow as data sources scaled. A failed run meant starting over from scratch. They migrated to MotherDuck and cut that to 7 minutes. Pete Rafferty, the senior software engineer who led the migration, had years of Snowflake experience but chose MotherDuck for the simplicity. No role-based access management matrices to deal with. Just service account tokens, fast compute, and a support team worth talking about: "The support from the MotherDuck team has been really valuable. I definitely haven't gotten that level of support with cloud data warehouses I've used in the past." Now they've got Omni dashboards connected directly to MotherDuck, embedded analytics in their Rails app, and they're rolling out MCP-powered natural language querying so non-technical teammates can explore the data on their own. Read the full story →


    27,672 followers

    You know the pattern: someone asks a quick question, you write a query, share the results. A week later, new question. Are we really going to build a thousand dashboards this quarter?? We built something to fix that. Dives are interactive visualizations that your AI agent builds directly in MotherDuck - ask a question in natural language, get a live chart backed by your actual data. Filter, drill down, and share it. Dives are for the long-tail of data questions, the kind everyone wants answered but nobody wants to build a dashboard for. Join us to see how Dives work, how they fit into local-first workflows, and what it looks like to use Claude, ChatGPT, or Cursor as your agent through the MotherDuck MCP Server.

    MotherDuck Dives: From Ad Hoc Questions to Real-Time Answers


  • Have you ever wanted to explore your data in a Microsoft Access 2000 emulator? What if it had Clippy!? Join us tomorrow for a livestream on Dives, our new feature for building composable visualizations in MotherDuck. We'll explore very serious data visualizations, local development workflows with Claude Code, and deploying Dives with CI/CD pipelines. Join us Wednesday at 9a PST / 12p EST! Link in comments.

  • MotherDuck reposted this

    Git for data is still underexplored, and it is an area that is changing fast. That's why I continue in Part 2, where we look at the actual tools and features that show how to apply a Git-like workflow to data work. I compared the Git-like tools for data I could find, such as LakeFS, Dolt, Nessie, MotherDuck, Neon, Bauplan, and more. They all solve the same problem differently: how to version data without copying petabytes of it. To map each data feature back to its original Git command:
    1️⃣ CREATE SNAPSHOT → git tag: bookmark a known-good state
    2️⃣ CREATE DATABASE ... FROM → git checkout -b: isolated environment from a snapshot
    3️⃣ ALTER DATABASE SET SNAPSHOT TO → git reset --hard: roll back to a previous state
    4️⃣ UNDROP DATABASE → recovering a deleted branch
    These tools fall into three different categories. First, we have versioned #DataLake tools that sit between the compute engine and object storage (S3, GCS, Azure Blob), leaving you free to query with whatever engine you prefer: Trino, Spark, DuckDB, etc. These are tools like LakeFS, Nessie, or Bauplan. Then we have #transactional and #OLTP databases: row-oriented, ACID-compliant databases where Git-like versioning applies mostly to application data, such as user records, orders, and schemas. Examples are Supabase, Neon, and Dolt. Lastly, we have #analytical databases or #DataWarehouses: OLAP-style databases optimized for read-heavy analytical queries, such as DuckLake or MotherDuck. All of these have different purposes and implement or support the Git workflow a little differently. Check out the full article for more details at: https://lnkd.in/eyzvBHHT
    When integrating these practices into your data work, start small: look at your recent prod incidents and see if you can add a branch to the affected pipeline, or use another method shown. What do you think: are Git-like workflows for data becoming table stakes, especially in this day and age?

    • Git for Data Applied: Comparing Git-like Tools That Separate Metadata from Data
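The mapping above rests on one mechanism: snapshots are immutable, and branches and resets are just pointer moves, so no data is ever copied. A minimal Python sketch of that copy-on-write reference model (the function names mirror the SQL-like commands in the post but are illustrative, not any vendor's actual API):

```python
# Snapshots are frozen states; refs are cheap named pointers to them,
# so "branching" and "rolling back" never copy the underlying data.
snapshots = {}   # snapshot_id -> immutable state
refs = {}        # branch/tag name -> snapshot_id
counter = 0

def create_snapshot(state):
    """~ git tag: bookmark a known-good state."""
    global counter
    counter += 1
    snapshots[counter] = dict(state)  # copied once at creation, never mutated
    return counter

def create_database_from(name, snapshot_id):
    """~ git checkout -b: isolated environment pointing at an existing snapshot."""
    refs[name] = snapshot_id

def set_snapshot_to(name, snapshot_id):
    """~ git reset --hard: move the ref back to a previous state."""
    refs[name] = snapshot_id

good = create_snapshot({"orders": 100})
create_database_from("dev", good)        # a "branch" without copying any data
bad = create_snapshot({"orders": -1})    # a broken deploy lands on main
refs["main"] = bad
set_snapshot_to("main", good)            # roll back instantly: just a pointer move
```

The whole rollback is one dictionary write, which is why these systems can "undo" terabytes in constant time: the data never moved, only the reference did.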
  • We're going LIVE with DIVES. Join us this Wednesday for a launch livestream exploring Dives: composable visualizations inside MotherDuck. We'll build beautiful charts with natural language, show you how to use Claude Code for a speedy local development workflow, and dabble in building CI/CD pipelines to deploy Dives to your org. WHEN: Wednesday, Feb 25 at 12p EST / 9a PST WHERE: https://luma.com/3h0dprnh


Similar pages

Browse jobs

Funding

MotherDuck: 3 rounds in total

Last round

Series B

US$52,500,000

See more information on Crunchbase