Volume 4 of "What We're Actually Using AI For" is live on our Substack!

This installment covers building dashboards that replace slide decks, setting up custom instructions so AI works the way you work, and using AI as a structured data entry assistant for a multi-jurisdictional research project.

What doesn't it include? Hype. Flashy demos. These are just real use cases from the VAILL team doing our real work.

This is the heart of what we teach: you don't need to be a developer or an AI power user to get genuine value. You just need a task worth tackling and some willingness to experiment.

Read Vol. 4 here: https://lnkd.in/eFMs2udE
AI Use Cases: Real Applications for Business and Research
-
Everyone on our team is building something different with AI right now. It's getting nuts.

> One person is piping experiment results into Claude to auto-generate stakeholder narratives.
> Someone else built a customer insights system that cross-references call transcripts against survey data.
> Another is using Claude Code to parallelize analysis agents that verify their own outputs before synthesizing anything.
> We have a few building evaluation/discovery agents for specific touchpoints (e.g., SaaS pricing pages).

It's a little chaotic. In the best way... but also in a real way. I think most experimentation teams are in the same spot. You can feel the surface area of what's possible (voice-to-blog-post pipelines, persistent brand memory files, searchable learnings repositories, agent workflows that cite their sources). But the gap between "I can see how this would work" and "I actually built it and use it every week" is still massive for most people.

The bottleneck isn't ideas. It's structured reps... that's the crux, IMO.

This is a big part of why we built Circus London (May 11-12, speero.com/circus) the way we did. Not just talks about AI -- hands-on workshops where you walk out with working systems:

Caitlin Sullivan: Building Customer Insights Systems. You'll build a multi-agent analysis system that handles mixed-methods data (transcripts, survey verbatims, quant data). The agents have built-in verification and feedback loops: they check their own work against source data, cross-reference findings across sources, and flag contradictions. This is the kind of thing that sounds like a demo until you actually build it yourself and realize... oh, this changes how my team operates.

Oren Greenberg: Experimenting with AI -- From Vibes to Workflows. This one hits close to home. You'll build structured context files that give AI persistent memory of your business (brand voice, ICP, strategic priorities). A workflow that turns a voice recording into a fully structured blog post. An experiment insights system that produces stakeholder narratives, next-test hypotheses, and a searchable learnings repo from raw results. Reusable patterns you adapt after the workshop.

Our team is still figuring out the right way to sequence all of this internally: which workflows to standardize, which to let people explore on their own, how to move from "everyone's tinkering" to "this is how we operate now." That transition is the hard part. But the pattern I keep seeing is: the teams that get hands-on with structured builds (not just prompting, not just watching demos) are the ones who actually cross that gap.

What's the AI workflow your experimentation team is closest to actually operationalizing? Genuinely curious where people are.
-
Here's what I actually use AI for:

- Brain-dumping raw voice memos, call transcripts, and scattered notes into one inbox and having AI read, sort, and file everything into the right project folder.
- An agent that scans my calendar for the day's prospect calls and researches each company, its competitors, key contacts, and likely pain points with our recommended solutions, delivered to me at 8am every morning.
- Searching 2 years of personal notes in plain English, then writing new notes that automatically cross-reference everything I've already captured.
- Building SQL queries from natural-language descriptions of the data I want and turning them into custom dashboards.
- Drafting cold outreach that references specific details I've written about the prospect's company.
- Generating a full 20-slide HTML presentation with custom branding and animations from a rough outline in 5 minutes.

There's so much more you can do with AI beyond question -> answer. The fun part is we're still discovering what's possible.
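The first workflow above (one inbox, AI routes each note to the right project folder) can be sketched in a few lines. This is a hypothetical illustration, not the author's system: the project names and keywords are invented, and the keyword scorer stands in for the LLM call that would do the actual classification.

```python
from pathlib import Path

# Hypothetical project keywords; a real version would ask an LLM to classify.
PROJECT_RULES = {
    "acme-renewal": ["acme", "renewal", "contract"],
    "q3-research": ["survey", "interview", "transcript"],
}

def route_note(text: str, default: str = "inbox-unsorted") -> str:
    """Pick the project folder whose keywords best match the note."""
    scores = {
        project: sum(word in text.lower() for word in keywords)
        for project, keywords in PROJECT_RULES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

def file_note(text: str, root: Path) -> Path:
    """Write the note into its routed folder and return the new path."""
    folder = root / route_note(text)
    folder.mkdir(parents=True, exist_ok=True)
    dest = folder / f"note-{abs(hash(text)) % 10_000}.md"
    dest.write_text(text, encoding="utf-8")
    return dest
```

The useful design point is the fallback folder: anything the classifier can't place lands in `inbox-unsorted` for a human pass, rather than being mis-filed silently.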
-
Yeah so -- Excel is still super important in the age of AI. I only use a few advanced formulas on a daily basis... 👇

Don't get me wrong -- Excel has some incredible functionality, especially these days as AI gets integrated into modeling more and more. Nonetheless, I still find a lot of it complicated (and honestly, unnecessary for day-to-day modeling).

Here are the only "advanced" formulas I really use:
– INDEX/MATCH (sometimes XLOOKUP)
– SUMIFS
– SUMPRODUCT

Btw, these are usually for summary tabs -- not the detailed guts of the model. (Think dashboards.)

~~~
📥 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲: Here are some extra shortcut keys 👉 https://lnkd.in/e7DJFDxt
~~~

⸻⸻⸻

Everywhere else, I pretty much live here:
– SUM
– AVERAGE
– Direct cell links ( = )
– Plus ( + )
– Minus ( - )
– Divide ( / )
– Multiply ( * )

Just basic Excel 101. That's 95% of my modeling.

Yes, sometimes I have to get fancy and calculate IRR or use dynamic array functions, or combine a few of the functions above to achieve a result. But those moments are rare -- especially in standard FP&A operating models.

And after all these years, I've learned one thing:

~~~
💡 𝙏𝙝𝙚 𝙗𝙪𝙨𝙞𝙣𝙚𝙨𝙨 𝙬𝙞𝙡𝙡 𝙖𝙡𝙬𝙖𝙮𝙨 𝙘𝙤𝙢𝙥𝙡𝙞𝙘𝙖𝙩𝙚 𝙩𝙝𝙚 𝙛𝙞𝙡𝙚. 𝘿𝙤𝙣’𝙩 𝙡𝙚𝙩 𝙩𝙝𝙚 𝙛𝙞𝙡𝙚 𝙘𝙤𝙢𝙥𝙡𝙞𝙘𝙖𝙩𝙚 𝙩𝙝𝙚 𝙗𝙪𝙨𝙞𝙣𝙚𝙨𝙨.
~~~

So, keep it simple where you can (to be user-friendly). Make it scalable on the back-end (to reduce error). And only complicate it when you truly have to.

⸻⸻⸻

𝗔𝗻𝗱 𝗳𝗼𝗿 𝗲𝘃𝗲𝗿𝘆𝗼𝗻𝗲 𝘄𝗵𝗼'𝘀 𝗮𝗯𝗼𝘂𝘁 𝘁𝗼 𝘀𝗰𝗿𝗲𝗮𝗺: "why does this matter when [AI du jour] can do this automatically??"

You can't audit what you don't understand. If you're an advanced user who's been building models for years, then yeah, using AI to speed up things like dashboards is a perfect use case. But if you don't know how to double-check what you're reviewing, you're setting yourself on a dangerous path.

We're all so lost in tool creation these days that we forget our boss, board, or LPs still care about big-picture stuff: Revenue, EBITDA, Cash, ROI, IRR. And ultimately 𝘆𝗼𝘂 are the one that's accountable.

So get the fundamentals first, 𝘵𝘩𝘦𝘯 use AI to scale. I can teach you here 👉 https://bit.ly/FMECourses
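"You can't audit what you don't understand" applies to formulas too: SUMIFS is easy to trust blindly, but its logic is just "sum one column over rows that match every criterion." A minimal Python sketch of that logic (toy data, illustrative column names) makes the behavior explicit enough to sanity-check against any workbook:

```python
# Toy ledger rows; the column names are illustrative, not from a real model.
rows = [
    {"region": "East", "product": "A", "amount": 100},
    {"region": "East", "product": "B", "amount": 250},
    {"region": "West", "product": "A", "amount": 300},
]

def sumifs(rows, value_key, **criteria):
    """Mirror Excel's SUMIFS: sum value_key over rows matching ALL criteria."""
    return sum(
        r[value_key]
        for r in rows
        if all(r[k] == v for k, v in criteria.items())
    )

# =SUMIFS(amount, region, "East")              -> 350
# =SUMIFS(amount, region, "East", product, "A") -> 100
```

Note the criteria are ANDed together, exactly as in Excel: adding a second criterion can only shrink the sum, never grow it. That's a quick mental check when a dashboard number looks off.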
-
🚀 Copilot in Power BI is changing the way we work with data.

With the integration of AI, analyzing data is becoming faster and more intuitive than ever.

💡 With Copilot in Power BI, you can:
• Ask questions in natural language
• Generate DAX formulas
• Build dashboards in seconds
• Get instant insights and summaries

What used to take hours can now be done in minutes. For me, the real value is not just productivity -- it's making data accessible to everyone, even non-technical users.

📊 We are moving from "building reports" to "driving decisions with AI".

#PowerBI #Copilot #AI #DataAnalytics #BusinessIntelligence #MicrosoftFabric
-
Check out the upcoming Analytics AI webinar on 3/18 at 2pm ET.

Everyone is racing to deploy AI. But the reality is: AI is only as accurate as the data context behind it. Without a semantic layer that defines business metrics and relationships consistently, AI models can produce conflicting or misleading answers. A strong semantic layer turns raw data into shared business meaning, so AI agents, analytics, and teams are all working from the same truth.

If you're thinking about scaling AI across the enterprise, this is a critical (and often overlooked) foundation. https://lnkd.in/emWnZN_c

#AI #Data #Analytics #Tableau #DataStrategy
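To make "shared business meaning" concrete, here is a minimal sketch of what a semantic-layer metric definition can look like. Everything here is hypothetical (metric names, table, filters); the point is that the definition lives in one place and every consumer, human or AI agent, compiles the same SQL from it:

```python
# Hypothetical semantic-layer definitions: one shared source of truth that
# dashboards and AI agents both read, instead of each redefining "revenue".
METRICS = {
    "revenue": {
        "sql": "SUM(order_amount)",
        "filters": ["status = 'completed'"],
        "description": "Recognized revenue from completed orders only.",
    },
    "active_customers": {
        "sql": "COUNT(DISTINCT customer_id)",
        "filters": ["last_order_date >= CURRENT_DATE - 30"],
        "description": "Customers with an order in the last 30 days.",
    },
}

def compile_metric(name: str, table: str = "orders") -> str:
    """Render one consistent SQL snippet for a metric, wherever it's asked for."""
    m = METRICS[name]
    where = " AND ".join(m["filters"])
    return f"SELECT {m['sql']} FROM {table} WHERE {where}"
```

The failure mode this prevents is exactly the one in the post: two teams (or two AI agents) each writing their own `SUM(order_amount)` query, one with the completed-orders filter and one without, and reporting conflicting "revenue".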
-
Tuesday's Data Debug meetup was incredible. Three talks, three very different angles on how practitioners are actually using AI in their day-to-day work:

Claire Gouze on context management for AI analytics agents. She benchmarked how different context configurations affect agent reliability. The results were eye-opening.

Kasia Rachuta on integrating AI into data science workflows. From text classification in Snowflake Cortex to automating Slack responses with a 50-page internal doc, she covered the practical stuff that actually saves time (and when it doesn't).

I gave a talk on building AI skills: the self-improving markdown files I use to get better output from coding agents over time.

All three talks (and previous ones) are up on YouTube now: https://lnkd.in/gRV4v6hH

Thanks to everyone who made it! We had so many returning faces & it was our largest crowd yet! Data Debug is monthly. Next one in April. Email me if you're interested in speaking.
-
Data Debug is one of my favorite meetups in SF. I always leave with new perspectives and meet amazing data practitioners. As I'm building AI workflows every day, the talks couldn't have been more relevant.

My takeaways:
1. Best context for an analytics agent: data schema + sample + a rules.md file. Claire Gouze's full benchmarking write-up is worth a read: https://lnkd.in/gGghUQWJ
2. Build a self-improving loop: code → review → handoff. Have a "staff analytics engineer" persona review generated SQL, then capture decisions as rules so your agent gets better over time.
3. Accurate data documentation matters even more at scale, as business rules and data dictionaries become shared context across the whole org.
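Takeaway 1 (schema + sample + rules.md as agent context) is easy to sketch. This is an illustrative assembly only, not the benchmarked setup from the write-up: the schema, sample row, and rules are invented, and in practice each piece would be pulled from your warehouse and a rules.md file on disk.

```python
# Hypothetical context pieces; real ones come from the warehouse and rules.md.
SCHEMA = "orders(order_id INT, customer_id INT, amount DECIMAL, created_at DATE)"
SAMPLE = "1, 42, 19.99, 2024-06-01"
RULES = "\n".join([
    "- Revenue means SUM(amount) on completed orders only.",
    "- All dates are UTC; truncate to day for daily metrics.",
])

def build_context(question: str) -> str:
    """Assemble schema + sample + rules into one prompt for an analytics agent."""
    return (
        f"## Schema\n{SCHEMA}\n\n"
        f"## Sample row\n{SAMPLE}\n\n"
        f"## Rules\n{RULES}\n\n"
        f"## Question\n{question}"
    )
```

The self-improving part of takeaway 2 is then just appending to `RULES`: every decision the reviewer persona makes ("always filter to completed orders") becomes one more line the agent sees on the next run.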
-
Google released extensive learning resources for AI Agents: 10+ code samples, whitepapers, hands-on projects, and more. This is an exceptional resource for anyone starting with AI Agents.

𝐃𝐚𝐲 𝟏: 𝐈𝐧𝐭𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧 𝐭𝐨 𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬
Learn the difference between a simple chatbot and a real agent. Build systems that can plan and take action independently.
Whitepaper: https://lnkd.in/dfQcgXjS
Code Resource 1: https://lnkd.in/eTJ5Tb-5
Code Resource 2: https://lnkd.in/e_6-p_7b

𝐃𝐚𝐲 𝟐: 𝐓𝐨𝐨𝐥𝐬 𝐚𝐧𝐝 𝐌𝐂𝐏
Give agents power by connecting them to other software and APIs. Learn the Model Context Protocol to integrate multiple systems.
Whitepaper: https://lnkd.in/dn6u6prB
Code Resource 1: https://lnkd.in/ey9ss5yz
Code Resource 2: https://lnkd.in/eSiE2TZK

𝐃𝐚𝐲 𝟑: 𝐂𝐨𝐧𝐭𝐞𝐱𝐭 𝐄𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠 & 𝐌𝐞𝐦𝐨𝐫𝐲
Set up long-term memory so agents retain and learn from every interaction.
Whitepaper: https://lnkd.in/ddB-uFtm
Code Resource 1: https://lnkd.in/eWdBKnXy
Code Resource 2: https://lnkd.in/eKct9uxh

𝐃𝐚𝐲 𝟒: 𝐄𝐯𝐚𝐥𝐮𝐚𝐭𝐢𝐨𝐧 & 𝐎𝐛𝐬𝐞𝐫𝐯𝐚𝐛𝐢𝐥𝐢𝐭𝐲
Use logs and metrics to track errors. Leverage "AI judges" and human feedback to improve agent performance.
Whitepaper: https://lnkd.in/dw3aQXB6
Code Resource 1: https://lnkd.in/eBiP85JG
Code Resource 2: https://lnkd.in/edC7qCBN

𝐃𝐚𝐲 𝟓: 𝐏𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧-𝐑𝐞𝐚𝐝𝐲 𝐃𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭
Move from test scripts to real products on Vertex AI. Enable multiple agents to collaborate with evaluation gates, circuit breakers, and evolution.
Whitepaper: https://lnkd.in/dBa_c8gj
Code Resource 1: https://lnkd.in/e4w2NEgh
Code Resource 2: https://lnkd.in/evCj_uu2

A real AI agent is a full system, not just a single prompt. For startups building scalable AI products, Lab7ai helps turn this knowledge into real products and systems that last.

#startup #AIStartups #StartupGrowth #AIAgents #TechStartups #AIProducts #ProductDevelopment #Lab7AI
-
🚀 RAG is brilliant, but is it starting to show its limits? Enter ARAG (Agentic RAG).

If you've been building generative AI applications, you know that Retrieval-Augmented Generation (RAG) was the ultimate game-changer for reducing hallucinations. But as our enterprise use cases get more complex, a simple "retrieve and generate" pipeline just isn't cutting it anymore. That's where Agentic RAG (ARAG) steps in.

Here is the easiest way to think about the shift:

Standard RAG is like taking an open-book test where you are only allowed to look at the index once. You search your query, grab the top 5 results, and hope the answer is in there.

ARAG is like hiring a professional researcher. It doesn't just blindly search; it thinks about how to search, evaluates what it finds, and decides if it needs to dig deeper.

Here is how they stack up against each other:

🔍 Standard RAG (The Linear Pipeline)
- One-Shot Retrieval: Takes the user's prompt, embeds it, and retrieves documents in a single pass.
- Query Dependent: If the user writes a poor prompt, the retrieval will be poor.
- Static: It assumes all questions require the exact same retrieval process.
- Vulnerable to Noise: If irrelevant chunks are retrieved, the LLM gets confused.

🧠 ARAG / Agentic RAG (The Reasoning Pipeline)
- Dynamic Routing: An LLM agent decides if it even needs to retrieve information, or if it can answer from memory or use a different tool (like a SQL database or calculator).
- Iterative Searching: If the first search doesn't yield the right answer, the agent rewrites the query and searches again.
- Self-Correction: It evaluates the retrieved documents for relevance before sending them to the generation step. If the context is bad, it tosses it out.
- Multi-Step Reasoning: It can break down a complex, multi-part question into smaller sub-queries, retrieving information for each piece independently.

The Takeaway: Standard RAG is still perfect for straightforward Q&A chatbots on clean documentation. But if you are building enterprise AI that needs to handle messy data and complex multi-part questions with high accuracy, Agentic RAG is the new baseline.

Have you started transitioning your pipelines from standard RAG to Agentic RAG? What has been your biggest hurdle? Let's discuss in the comments! 👇

#GenerativeAI #RAG #AgenticRAG #LLMs #ArtificialIntelligence #MachineLearning #DataScience #AIArchitecture
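The iterative-searching and self-correction loop described above can be sketched as a small control loop. This is a toy illustration: the retriever and relevance judge are stubbed with keyword matching, and the query rewriter just broadens the terms, where a real system would use embeddings and LLM calls for all three.

```python
# Minimal agentic-RAG control loop over a toy two-document corpus.
DOCS = [
    "Standard RAG retrieves documents once and generates an answer.",
    "Agentic RAG rewrites queries and re-retrieves when context looks bad.",
]

def retrieve(query: str) -> list[str]:
    # Stand-in for embedding search: keep docs sharing any query term.
    return [d for d in DOCS if any(w in d.lower() for w in query.lower().split())]

def relevant(docs: list[str]) -> bool:
    # Stand-in for an LLM grader: require at least one retrieved doc.
    return len(docs) > 0

def rewrite(query: str) -> str:
    # Stand-in for LLM query rewriting: broaden the search terms.
    return query + " rag"

def agentic_rag(query: str, max_rounds: int = 3) -> list[str]:
    """Retrieve, grade, and rewrite until context passes or rounds run out."""
    for _ in range(max_rounds):
        docs = retrieve(query)
        if relevant(docs):
            return docs  # good context -> hand off to generation
        query = rewrite(query)  # bad context -> toss it and search again
    return []  # give up honestly instead of generating from noise
```

Note the last line: when every round fails, the loop returns nothing rather than passing empty context to generation. That explicit "give up" path is the self-correction behavior that a one-shot pipeline simply doesn't have.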
-
Most AI strategies fail before they start. Here's why: not because the technology doesn't work. Not because the budget wasn't there. Because the thinking behind it was soft.

I've been working through Snowflake's Generative AI and LLMs For Dummies this week -- and while the title sells it short, the commercial clarity inside doesn't.

Five things every business leader should actually understand before signing off on an AI initiative:

1. Your most valuable data is the data you're ignoring. 80-90% of enterprise data is unstructured -- locked in documents, audio, emails, and customer conversations. Most AI projects start with clean spreadsheets and wonder why the results feel shallow. The competitive edge lives in the messy layer.

2. Bigger models aren't always smarter decisions. A 117-million-parameter model handles summarisation just fine. A 175-billion-parameter model handles complexity. Matching model scale to use case is a cost-performance call, not a prestige one. More horsepower in the wrong gear doesn't move you faster.

3. Prompt engineering is a marketing skill. The clearer and more contextual your input, the sharper the output. Marketers who understand how to brief an AI model will consistently outperform engineers who don't think in audience and intent. Sound familiar?

4. RAG solves the problem nobody talks about. LLMs don't know your business, your products, or last quarter's numbers. Retrieval-augmented generation bridges that gap, connecting models to your real data without rebuilding them from scratch. It's the difference between a generic answer and a useful one.

5. The three Hs of responsible AI: Helpfulness, Honesty, Harmlessness. Simple enough for a client deck. Important enough that it should be in every AI strategy conversation you're having. If your vendor or agency can't speak to all three, the governance conversation hasn't started yet.

Getting these five fundamentals right is exactly where I spend time with clients at MRB Advisory: cutting through the noise and building AI strategy that's commercially grounded, not just technically impressive.

Feel free to download the attached Top 5 summary card.

Source: Generative AI and LLMs For Dummies, Snowflake Special Edition -- worth an afternoon of anyone's time.
-