“Michael has a great blend of data science skills, technology skills, and business sense. He is also very effective in working with business users to translate a loosely defined business problem into a quantitative solution that can be easily consumed and acted upon by key stakeholders. As an example, Michael single-handedly built a text analytics application with a web-based front end that was used to analyze social media data and customer feedback. Unlike many of the more general-purpose text analytics applications, Michael customized his tool to provide exactly what was needed to analyze automotive data, and it reduced the time typically spent on analysis from weeks to hours. In developing the application, Michael also discovered an issue in an existing data pipeline that was causing metrics (e.g. sentiment) to be reported incorrectly. Using his application, the team was able to quickly find the root cause of the issue and fix it. It was a pleasure to work with him.”
New York, New York, United States
1K followers
500+ connections
Activity
-
when Claude Code asks my permission to run an arcane 14-flag shell command
Liked by Michael Lombard
-
I feel like Dwight Schrute but instead of being the bobblehead, I am the Minifig. I shipped some production code a few weeks ago for the first time…
Shared by Michael Lombard
-
A thousand thanks to the students and teachers who spoke to me about their science projects at the ESB Science Blast this week. In a world where…
Liked by Michael Lombard
Experience & Education
-
LinkedIn
****** ******** **** *******
-
********** ** *******
****** ** **** ****** ****** ******** undefined
-
-
***** **** **********
******** ** **** ****** *******
-
Licenses & Certifications
Recommendations received
1 person has recommended Michael
Explore more posts
-
Brian Kohlmann
Bader Rutter • 3K followers
The Silent Killer in MarTech: Data Debt

Everyone talks about tech debt, but very few talk about data debt. Data debt happens when companies accumulate years of fragmented, duplicative, or poorly governed data across platforms. The result?
- CDPs and CRMs that don’t talk to each other.
- “Single customer view” dashboards that are anything but single.
- Teams spending more time cleaning and reconciling data than actually using it.

Why this matters: AI, personalization, and analytics are only as good as the foundation beneath them. If your data is riddled with gaps and contradictions, your MarTech stack can’t deliver on its promise.

A few ways to start addressing data debt:
1. Audit: Know where your data lives and who owns it.
2. Prioritize: Not all data is worth saving. Focus on what drives customer and business outcomes.
3. Govern: Define rules for collection, storage, and integration before adding another tool.

Data debt isn’t just a back-office problem; it’s a growth limiter. I’ve seen organizations pour millions into new MarTech tools, only to end up with fancy dashboards powered by broken data. The stack looks impressive, but the insights can’t be trusted.

Fixing data debt doesn’t get the spotlight, but it pays off in every area that matters: more accurate personalization, cleaner reporting, smoother integrations, and ultimately, better customer experiences. Leaders who treat data debt like a strategic priority, not a technical chore, will have a real advantage.

When was the last time you audited the data that fuels your MarTech stack, not just the tools themselves?

#MarTech #EmergingTech #DataStrategy #DigitalTransformation #MarketingTechnology
9
1 Comment -
Derya Isler
Glasswing Ventures • 9K followers
The best part of being at #Dreamforce is talking to many (MANY!) customers and executives about their biggest challenges in adopting AI agents. The same themes keep surfacing: figuring out where agents actually add value, integrating them into existing workflows, and making sure teams trust the output.

My take on it (which has been true for most AI applications for years):

1️⃣ Start with a contained workflow. Don’t “launch an AI agent.” Instead, automate one painful, repetitive process, like summarizing customer calls or drafting support emails.

2️⃣ Focus on clean, connected data before fancy models. Invest in data context. Give your agents access to the right internal docs, CRM notes, and policies, and make sure your data is clean, high-quality, and connected. That’s where the real power of agents comes from.

3️⃣ Build trust loops. Let humans review early outputs, collect feedback, and refine prompts and policies before scaling. Focus on the handoff between agents and humans, not replacement.

It’s been fascinating to see what’s working (and what’s not) for our customers across many industries. How are you approaching AI agent pilots in your organization? How can we/I help?

#salesforce #dreamforce #aileadership #agenticenterprise
112
3 Comments -
Laughlin Rigby
Wheresight • 7K followers
AI & Market Research: Can AI take surveys for us?

I haven't posted about too many uses of AI with Market Research to date, but I think this case study deserves a mention. A recent study by PyMC Labs and Colgate‑Palmolive finds that large language models (LLMs) can closely simulate human responses in consumer product surveys, hitting ~90% of human test–retest reliability across 57 surveys (9.3k human responses).

How it worked: Instead of asking a bot for a number, the team asked for a short written answer with a why. That text was then compared to five Likert “anchor” statements using embedding similarity (their Semantic Similarity Rating / SSR method).

Result: 90% of human test–retest reliability. The simulated responses mirrored the distributions of human responses across demographic segments such as age and income. The researchers emphasise that this doesn’t mean LLMs think like people, but their outputs can align with human behavioural patterns under defined conditions.

Want to try this?
1) Start with a low-risk, high-volume concept test you already run.
2) Write clear Likert anchors (1–5) as plain sentences.
3) Prompt an LLM for a short rationale per concept and then map that text to anchors using embedding similarity.
4) Create a small human holdout to benchmark; only proceed if your synthetic vs human gap is tight.
5) Be transparent: label synthetic vs human, document prompts, and keep humans in the loop for final calls.

Imagine being able to test new tourism or retail products or ad concepts instantly across synthetic “audiences”, before spending on full panels. Huge potential for insight and speed, as long as we keep the human layer where it matters most: judgement, context, and creativity. This isn’t about replacing people; it’s more about “simulate → iterate → validate” so teams can test more ideas, faster, and spend human budget where judgement matters most.

Full story below 👇
#AI #MarketResearch #Insights #DigitalTransformation #SyntheticData #Tourism #Innovation https://lnkd.in/eRBWtRcs
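The anchor-mapping step of SSR can be sketched in a few lines. This is a toy illustration, not the study's actual pipeline: the bag-of-words "embedding" is a stand-in for a real sentence-embedding model, and the anchor wording is invented for the example.

```python
# Toy Semantic Similarity Rating (SSR): map a free-text rationale onto a
# 1-5 Likert scale by comparing its embedding to five anchor sentences.
import math
from collections import Counter

# Likert anchors written as plain sentences (illustrative wording).
ANCHORS = {
    1: "i would definitely not buy this product",
    2: "i would probably not buy this product",
    3: "i am unsure whether i would buy this product",
    4: "i would probably buy this product",
    5: "i would definitely buy this product",
}

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag of words (swap in a real model here)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ssr_score(rationale: str) -> int:
    """Return the Likert anchor most similar to the LLM's rationale."""
    sims = {k: cosine(embed(rationale), embed(v)) for k, v in ANCHORS.items()}
    return max(sims, key=sims.get)
```

With a real sentence-embedding model in place of `embed`, the same argmax-over-anchors structure gives the SSR mapping the post describes; the human holdout in step 4 is then used to check how tightly these synthetic ratings track real panel distributions.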
6
1 Comment -
Justin Norris
360Learning • 10K followers
I don't think AI has killed SaaS (yet), but I do wonder if it's killed "value-based pricing."

I've been doing a lot of software shopping the last month, particularly in the conversational AI space, where there's a Cambrian explosion of vendors figuring out business models in real time. You can find it all: conversation-based, outcome-based, platform + usage-based, and tier-based tied to some specific platform feature.

Overall, "value-based" is the framing I'm most allergic to these days. Time was, a vendor could win with a high sticker price by anchoring to the outcome or business impact the solution would drive. This is generally a sales best practice. But at a time when the line between build and buy grows ever thinner, when customers have built their own agents, understand inference costs, and have seen how thin some product wrappers actually are, value-based pricing feels untethered.

The age of AI doesn't mean I'm going to build every tool in Claude Code. But higher buyer literacy DOES mean that the relationship between your sticker price and your underlying cost-of-goods-sold is going to get more scrutiny. I'm happy to rent your application layer (and the expertise it encodes) and pay a reasonable cost for maintenance and support. But charging a disproportionate premium that's not moored in variable costs like inference and compute seems increasingly hard to justify.

If your pricing can't survive a conversation with a buyer who's been tinkering in Claude Code for six months, you have a problem.
59
17 Comments -
Brian Bickell
TextQL • 3K followers
There’s really nothing like taking a major new feature release out to the market on the first day of two back-to-back conferences. When we planned to roll out Cube D3, our agentic analytics platform built on our market-leading universal semantic layer, I was excited to get the in-person feedback. I spent more time working the booth at both Snowflake Summit and Databricks Data + AI Summit than most partnerships guys would, watching to see what resonated and what we still needed to refine.

At first, most understood what we were doing, or at worst kind of disinterestedly said “oh, another chatbot.” D3 being able to build and expose visual assets, as well as answer questions and provide result sets, caught many folks' attention. That part clicked, because they could get from the demo we had on offer to D3 being able to rapidly prototype visualizations that could be kicked out into popular front-end frameworks and hosted however they liked.

What connected with everyone and pulled in even the most cynical was when we explained our semantic SQL. Semantic SQL is the rather simple-looking SQL that D3 (or any consumer via our SQL API) writes, and that Cube rewrites into the complex warehouse SQL that eventually hits your data source of choice. Complex business metrics are defined once upstream, providing trust, governance, and consistency.

Compared to traditional text-to-SQL approaches, we are breaking apart the place where things typically go wrong: generation of highly complex analytical SQL without context for *exactly* what a user means when they ask for a metric. The result is that the user can still ask for ad-hoc analysis built upon these metrics, but the queries are always compiled down to the approved metric definitions under the hood, without any LLM guesswork. We also expose a reasoning trace every step of the way so you can inspect why D3 did what it did. This is becoming standard for AI applications and we think it’s a great practice to incorporate.

Cube D3 is currently in preview, but if you’re interested, drop me a line and I’ll help you get access.
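The core idea of semantic SQL, where queries reference governed metric names and a compiler expands them into the approved warehouse SQL, can be illustrated with a toy rewriter. The `MEASURE()` syntax, metric names, and SQL definitions below are invented for illustration and are not Cube's actual implementation.

```python
# Toy semantic-SQL rewriter: metric names are defined once, centrally,
# and every query that references them gets the approved definition
# substituted in before execution, so an LLM never has to regenerate
# (and possibly get wrong) the underlying metric logic.
import re

# Governed metric definitions, maintained upstream by the data team
# (hypothetical names and SQL for illustration).
METRICS = {
    "arr": "SUM(subscription.amount)",
    "active_users": "COUNT(DISTINCT events.user_id)",
}

def compile_semantic_sql(query: str) -> str:
    """Expand MEASURE(name) references into their governed SQL definitions."""
    def expand(match: re.Match) -> str:
        name = match.group(1).lower()
        if name not in METRICS:
            raise KeyError(f"unknown metric: {name}")
        return METRICS[name]
    return re.sub(r"MEASURE\((\w+)\)", expand, query)
```

The design point is that ad-hoc queries stay flexible (group by anything, filter anything), while the metric expression itself is never improvised: an unknown metric name fails loudly instead of being guessed.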
26
1 Comment -
Josue “Josh” Bogran
zeb • 28K followers
There are a lot of data ingestion tools. David Yaffe and I had a conversation about how to objectively evaluate the different options out there. More importantly: what sets the good ones apart from the average ones for your data pipelines. Note: While David is the CEO of Estuary and I am an advisor to them as they serve the Databricks market more and more, there is no selling or self-promotion here, just real, practical evaluation advice. Link: https://lnkd.in/evw6RgUU
19
4 Comments -
Irina Malkova
Salesforce • 6K followers
Data teams: ship another Decisioning Agent v0.1 this week.

We talked about how Predictive ML x agent = Impact. Here's another recipe: Complex dashboard x agent = Impact.

Use what you already have:
- A set of dashboards that are popular but hard to use
- An enablement doc explaining how to use those dashboards
- A place where your users live, like Slack

Add:
- A simple agent that connects the dataset beyond the dashboard to Slack
- A few simple agentic topics aligned with your enablement document

Outcomes:
- Faster time to decision
- Builders see real demand
- ROI on existing datasets rises

In 2024, we built the first Decisioning Agent of this kind in just a week. It focused on accelerating time-to-quote for a complex Salesforce SKU. Before the agent, preparing a quote meant our sellers had to spend days navigating four dashboards and querying raw events. After, that same result is available in minutes, in their flow of work.

We've come a long way from that MVP in a year. You can too if you start today. Let's ship!
28
7 Comments -
Christophe Atten
AI Finance Club • 1K followers
Skills over headcount.

Hiring more data scientists isn’t the only lever. Enabling Citizen Data Scientists (CDS) might deliver value faster.

Enter CDS:
- Domain experts + low/no-code = workable models and insights.
- They handle “simple to moderately sophisticated” analytics and pass the best ideas to DS for production.

Operating model:
- Educate: core analytics concepts, basic stats, reading model outputs. 📘
- Equip: AutoML, metadata management, data glossaries, self-service marketplaces. ⚙️
- Enable: sandboxes to test safely without touching production. 🧯

Outcome:
- Higher throughput of experiments.
- Better alignment between business needs and DS work.
- DS time shifts to advanced science, not “1+1=2” requests.

👉 Question: If you launched a CDS program next quarter, what’s the first capability you’d teach?

Bonus, how it works:
1️⃣ Governance
2️⃣ Controls
3️⃣ Education and limits
... the goal is giving the right tools at the right moment for the right tasks.

🔗 Full article published on Medium (link in comments)
7
2 Comments -
Sara Hillenmeyer
Payscale • 2K followers
AI in comp: cautiously optimistic or totally on board?

We asked comp pros how they really feel about leveraging AI/machine learning, and the results are a mixed bag (with a lean toward optimism).
🤖 Most are open to AI for market pricing and pay recommendations.
🤖 Sentiment drops off when it comes to policy documentation or compliance.
🤖 And across the board? Very few are totally against it.

To me, this says: People aren't anti-AI. They're anti-black-box. If we want AI to be embraced in comp, it needs to be transparent, explainable, and built to support human expertise, not override it. And that's exactly how we're thinking about it at Payscale.
18
-
Michael Burton
Orr Fellowship • 12K followers
Most CMOs still think Databricks is "that thing my data team uses." That assumption is about to cost someone their job.

Databricks just shipped a native Meta Conversions API integration on their Marketplace. Brands can now send first-party data straight from their lakehouse to Meta for ad targeting and attribution. No middleware. No reverse ETL. No custom connectors. A data infrastructure company just built a direct bridge to the world's largest social advertising platform. Sit with that for a second.

Eighteen months ago, activating warehouse data for paid media meant buying something in the middle. A reverse ETL tool. A CDP. Some category of software whose entire value proposition was: we move your data from A to B. Real contracts. Real budget. Now it's a free solution accelerator on the Databricks Marketplace.

Here's the pattern, and it's not slowing down: the platforms on each end keep getting smarter and more directly connected. The middle keeps getting thinner. Databricks to Meta today. Braze already has native CDI ingestion from Databricks. The trend line is clear: if Google and TikTok aren't already in conversations with Databricks about native integrations, they should be. The edges are eating the middle.

But here's what's actually interesting: the brands that get ahead of this don't just cut costs. They gain a performance edge. First-party data, clean and direct, no latency, no translation layer. Precision targeting built on data they actually own.

Databricks isn't just a data platform anymore; it's a marketing activation platform. The companies that figure this out first will outperform the ones still buying point solutions to do it for them. Not by a little. By a lot. The data platform IS part of the marketing platform. Not as a prediction. As a fact.

So here's the only question that matters: is your marketing team in the room where your data strategy is being decided, or are they going to get handed a brief sheet when it's done? One of those CMOs is about to have a very good year.
71
5 Comments -
Abhinav Vadrevu
Snowflake • 2K followers
Today we're announcing that Semantic View Autopilot is generally available in Snowflake!

When we GA’ed Cortex Analyst a year ago, we learned that the barrier to AI-powered analytics was never the LLM; it was data definition. When "Monthly Recurring Revenue" means different things to product and finance, AI agents inherit that inconsistency. No amount of prompt engineering fixes that. You need a governed semantic layer, but building one manually takes weeks.

So we built a system that learns from how organizations actually use their data. SVA analyzes query history, Tableau dashboards, and trusted SQL to propose governed definitions automatically. Teams review and certify instead of coding from scratch. The more you use it, the smarter it gets.

Since launching in preview, we've watched customers go from "I'll spend the next two weeks building a semantic layer" to "I had something working in my first session." Proud of the whole team that shipped this.

Read the full announcement: https://lnkd.in/g7Dvhn9D
Get started with the docs: https://lnkd.in/guMjU92w

#Snowflake #AI #DataAnalytics #Cortex #SemanticView
439
17 Comments -
Lisa Sharapata
Metadata • 8K followers
Your GTM org isn’t broken because your team isn’t performing. It’s broken because your operating model wasn’t built for AI.

I’ve been working with Metadata and talking to hundreds of marketers and GTM leaders, and the pattern is clear: AI isn’t just giving us better tools. It’s changing how the work gets done.

The most forward-thinking teams aren’t using AI to write more copy or spin up more ads. They’re setting up agentic systems that run 24/7: testing, optimizing, learning, and doing the boring, repetitive stuff faster and better than any human ever could. We’re talking:
--> Automated bid optimization
--> Instant insight extraction
--> Real-time analysis and adjustments

Not assistive. Not duct-taped on. Agentic GTM is structural. At Metadata, we’re not layering AI onto old playbooks; we’re building entirely new frameworks designed for autonomous execution.

On June 26th, I’m joining the Hard Skill Exchange AI Practice Session: Agentic Scaling alongside some of the most innovative GTM minds in SaaS. We’ll be sharing what’s already working in production:
+ The agentic GTM stacks leading teams are deploying
+ Real use cases
+ How to rebuild GTM with AI

This isn’t about working faster. It’s about designing systems that work without you. Join us: 👉 https://lnkd.in/e7sXdp-2

Learn directly from:
Gil Allouche, Founder & CEO, Metadata
Amos Bar-Joseph, CEO & Co-Founder, Swan AI
Tooba Durraze, Ph.D., Founder & CEO, Amoeba AI
Lily Austin, Head of GTM Innovation, SalesLoft
Aaron McReynolds, Co-Founder & CEO, Alysio
Jose M. Romero, VP of Product and Platform, Xfactor
Angela Ferrante, Head of Enterprise Marketing, Zapier
Matt Cooley, Co-Founder and COO, Bounti.ai
Ashar Rizqi, Co-Founder and CEO, Bounti.ai
Patrick Spychalski, Co-Founder, The Kiln, a Clay agency
Tori Jeffcoat, Director, Product Marketing, Gainsight
Roman Dalichow, VP of Customer Success, Gainsight

Agentic Scaling isn’t a feature or a tool. It’s a control layer for revenue. It’s not just about leading people. It’s about designing systems that outperform people.

#AI #GTM #AgenticScaling #Metadata #AIPracticeSessions #B2BMarketing #FutureofWork
57
6 Comments -
Josh Klahr
9K followers
As we have been working with customers who are adopting Snowflake Semantic Views, one of the top requests has been a native integration with dbt Labs for materialization and referencing of Semantic Views within dbt models. I’m excited to announce that there is now a new dbt package, available on Package Hub, that does just this! You can read Yutsing Liu's blog here to learn more: https://lnkd.in/deXJtJBQ

In summary, the dbt_semantic_view package introduces a new semantic_view materialization for dbt Core projects targeting Snowflake, enabling you to define business-centric semantic views (tables + relationships + dimensions + metrics) via dbt and automate their creation, renaming, and dropping. Link to package is in the comments.
365
17 Comments