"Be not the first by whom the new are tried, Nor yet the last to lay the old aside." -Alexander Pope I haven't commented much about AI here, but the quote above pretty much sums up my current feelings on the subject. I've watched curmudgeons say all AI is terrible or dangerous and just about shouldn't be used at all. On the other hand, I've seen AI fan boys act like anything not AI is a waste of time. Truth is somewhere in the middle, methinks. AI is not (or shouldn't be) an end to itself. It's a tool. A potentially powerful tool, yes. But just a tool -- and one among many. As another saying goes, "He who is good with a hammer tends to think everything is a nail". No tool is optimal for every need. The part we don't seem to have settled on is where exactly AI fits in our respective toolbags. To that end, I'm glad we have the early adopters trying to stretch it and break it to see what it's really good for. Once a consistent path emerges, I'm happy to travel it too. Use whatever tool seems most appropriate to you for your task. But, if your work affects something like flight safety, you own your work output regardless of tool used. In a post-incident review, it won't be acceptable to say "But, but, but... ChatGPT (or Copilot, Claude, Galaxy, etc.) said...."
Finding the right balance with AI: A tool among many
More Relevant Posts
-
Markets move while you sleep. But what if your AI didn't?

OpenAI just launched ChatGPT Pulse. Think of it as a morning market brief powered by your own AI.

Here's what it delivers:
→ Overnight macro & market updates
→ Key news distilled into 3-line insights
→ Personalized dashboards tied to your portfolio or calendar
→ Actionable signals instead of endless noise

For finance pros, that means less time scanning, more time positioning.

It's still early access. Mobile only. Pro users for now. But the takeaway is big: AI isn't just answering; it's anticipating. And in finance, anticipation = alpha.

If Pulse greeted you tomorrow, would you want earnings previews or macro risk alerts first?
-
I already expect a lot more from AI than I do from people. People talk about this a lot with reliability (see self-driving cars), but I notice it most with context. I'd never expect a friend to read my mind. But I'll ask ChatGPT "what's that movie with the car chase" and then get annoyed when it can't guess.

In reality, much of the world still isn't digitized. And even when it is, that doesn't mean your AI tool has access to it. The products and businesses that really win with AI will be the ones that figure out how to pull in offline context and connect all their scattered data.

For me, I record all my meetings and only type my notes so I can one day feed them into a system. (If anyone's found a platform that actually does retrieval well on lots of notes, let me know.)

At BoltWise, a big chunk of our time is spent figuring out how to give our models customer context without adding extra work for our customers. Next time you get frustrated with AI, maybe ask: could a random person even answer that?
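Since the post asks about doing retrieval over a large pile of notes, here is a minimal sketch of one common approach: embed every note, embed the query, and rank by cosine similarity. It assumes the sentence-transformers package and treats each note file as a single chunk; a real system would split long notes and persist the index.

```python
# Minimal embedding-based retrieval over a folder of notes.
# Assumes: `pip install sentence-transformers numpy`; notes live as .txt files in ./notes
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, free, runs locally

# Load notes (one chunk per file here; long notes should really be split).
paths = sorted(Path("notes").glob("*.txt"))
texts = [p.read_text(encoding="utf-8") for p in paths]

# Embed notes once and normalize so the dot product equals cosine similarity.
note_vecs = model.encode(texts, normalize_embeddings=True)

def search(query: str, k: int = 5):
    """Return the k notes most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = note_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [(paths[i].name, float(scores[i])) for i in top]

print(search("decisions from the pricing meeting"))
```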
-
Every feed I scroll lately says the same thing: "AI shouldn't replace people, it should make them better." Totally agree (and glad most of us are on the same page there).

But here's what I'm actually seeing happen: teams where every person has their own personal ChatGPT tab open, each creating work in isolation. No shared context. No connection to the company's strategy, voice, or standards. The output? Disjointed. Disconnected. Generic.

Because here's what most people miss: AI doesn't make your team better just by existing. It makes them better when it knows what they know.

For us, we've learned that context is the key. That's why we build what we call Business DNAs: an LLM preloaded with everything a team needs to be successful. Their processes. Their standards. Their voice. Their history.

When teams have that kind of shared intelligence, things shift: no more late-night Slacks asking "is this right?", no more playing telephone from leadership to the team, no more hoping everyone's on the same page.

I get it. Everyone's rushing to adopt AI so they don't get "left behind." But most teams are overcomplicating it, jumping on tools that are more than they need without the basic systems in place first. You don't need another AI tool. You need AI that actually knows your business.

What would change for your team if everyone had access to the same context, instantly?
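As a rough illustration of the shared-context idea (not the Business DNA product itself), here is a minimal sketch using the OpenAI Python client, with a few local files standing in for a team's processes, standards, and voice. The model name and file layout are placeholders, not anything the post prescribes.

```python
# Sketch: every teammate's request runs with the same preloaded company context.
# Assumes `pip install openai`, OPENAI_API_KEY set, and plain-text context files on disk.
from pathlib import Path

from openai import OpenAI

CONTEXT_FILES = ["processes.md", "standards.md", "voice.md"]  # hypothetical paths

def build_shared_context(folder: str = "company_context") -> str:
    """Concatenate the team's reference docs into one system prompt."""
    parts = []
    for name in CONTEXT_FILES:
        path = Path(folder) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text(encoding='utf-8')}")
    return "You are this company's assistant. Follow these documents:\n\n" + "\n\n".join(parts)

client = OpenAI()
SYSTEM_PROMPT = build_shared_context()

def ask(task: str) -> str:
    """Run a task with the shared context instead of a blank ChatGPT tab."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(ask("Draft a customer update about the delayed shipment, in our voice."))
```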
-
What’s one way you’ve used AI to simplify a task? Hours of research used to mean…well, hours of research. Now, thanks to Agent Mode in ChatGPT, I can scan the web for opportunities in minutes — and spend the rest of the time actually putting those insights to work. It’s a great reminder that AI isn’t about replacing the work we do — it’s helping us do more of it, faster and smarter.
-
Most people look at AI like it's supposed to be the end-all, be-all solution. But I've started to see it differently.

For me, AI is more like a calculator. Do you remember when the TI-83 calculator was first released? It changed the way we worked. But it was limited. It was only a tool. A tool to assist in specific areas of business, not a replacement for human creativity, discernment, or connection.

That's why I've been exploring building my own local LLM setup using LM Studio. With LM Studio you can run local AI models like gpt-oss, Qwen, Gemma, DeepSeek and many more on your computer, privately and for free. Something lightweight, private, and business-focused, untangled from the constant account linking and data collection.

The goal isn't to chase hype, but to create healthy practices for how AI shows up in my workflow. This comes at a time when concerns around platforms like ChatGPT are growing. We're starting to see not only their power but also their negative impacts on human behavior, thinking, and even entire industries. That's a signal to pause and ask: what's the healthiest way forward?

For me, that means treating AI as a business tool, not a crutch. And maybe, for some of us, it even means stepping away from AI entirely.

Curious: how are you navigating AI in your own workflow? Are you going all-in, pulling back, or finding your own middle ground?
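For anyone curious what "local and private" looks like in practice, here is a minimal sketch of calling a model served by LM Studio from Python. It assumes LM Studio's built-in local server is running with its OpenAI-compatible API at the default http://localhost:1234/v1 and that a model (say, a Qwen or Gemma build) is already loaded; the model identifier below is a placeholder.

```python
# Sketch: talk to a locally hosted model through LM Studio's OpenAI-compatible server.
# Assumes LM Studio is running its local server (default: http://localhost:1234/v1)
# with a model loaded. No cloud account or API key is involved.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="not-needed",                 # the local server does not check this
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier shown in LM Studio
    messages=[
        {"role": "system", "content": "You are a concise business assistant."},
        {"role": "user", "content": "Summarize the risks of relying on a single vendor."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```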
-
There is growing panic about AI models "stealing" web traffic. We get it; every marketer is hearing the same thing: "People won't visit your website anymore because they'll just ask ChatGPT."

But here's the reality from data across our clients: less than 1% of total website traffic is currently coming from LLM sources. That means people are still visiting your websites to research, compare, and convert.

Yes, the landscape is shifting. And yes, AI is changing discovery. But it's not time to panic. It's time to measure, adapt, and plan, not chase headlines.
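One way to put a number like that on your own traffic is to classify referrers in an analytics export or access log. The sketch below is a rough illustration, not the methodology behind the post's figure: the list of AI-assistant referrer hosts is an assumption and certainly incomplete, and many assistant-driven visits carry no referrer at all, so treat the result as a lower bound.

```python
# Rough sketch: estimate what share of sessions arrive from AI-assistant referrers.
# Input format is an assumption: a CSV export with a "referrer" column per session.
import csv
from urllib.parse import urlparse

# Illustrative, deliberately incomplete list of assistant referrer hosts.
AI_REFERRER_HOSTS = {
    "chat.openai.com",
    "chatgpt.com",
    "perplexity.ai",
    "www.perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def llm_traffic_share(csv_path: str) -> float:
    """Return the fraction of sessions whose referrer host looks like an AI assistant."""
    total = 0
    from_llm = 0
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            host = urlparse(row.get("referrer", "")).netloc.lower()
            if host in AI_REFERRER_HOSTS:
                from_llm += 1
    return from_llm / total if total else 0.0

print(f"{llm_traffic_share('sessions.csv'):.2%} of sessions came from LLM referrers")
```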
-
One of the most common mistakes people make with AI is acting before planning. We all jump in saying: do this, make that, I want this. But very few take a step back to plan.

These systems are built to work through interaction. They pick up context little by little and align as the conversation grows. That's why they work best like a real conversation.

So next time, don't just throw a task at the model. First ask the right questions: what, how, where. Then let it do the work with clarity, whether you're using ChatGPT, Gemini, Claude, or any other chat model out there.
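As a hedged illustration of the "plan first, then execute" flow, here is a minimal two-step exchange using the OpenAI Python client; the model name is a placeholder, and the same pattern works with any chat API that keeps the planning turn in the message history.

```python
# Sketch: ask for a plan (what / how / where) before asking for the deliverable,
# keeping both turns in the same conversation so the plan becomes shared context.
# Assumes `pip install openai` and OPENAI_API_KEY; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

messages = [
    {"role": "system", "content": "You are a careful assistant. Plan before producing work."},
    {
        "role": "user",
        "content": (
            "I need a one-page competitor comparison for my CEO. "
            "Before writing anything, list the questions you'd need answered: "
            "what exactly to compare, how to structure it, and where the data should come from."
        ),
    },
]

plan = client.chat.completions.create(model=MODEL, messages=messages)
print("PLAN:\n", plan.choices[0].message.content)

# Answer the model's questions, then execute with that context still in the thread.
messages.append({"role": "assistant", "content": plan.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "Compare vendors A and B on price, support, and integrations, "
               "structured the way you proposed. Now write the page.",
})

result = client.chat.completions.create(model=MODEL, messages=messages)
print("DRAFT:\n", result.choices[0].message.content)
```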
-
Conversation with a CEO recently: "Most of that AI stuff is just hype. Sure, ChatGPT helps me and my staff a bit. But I can't see how it will really benefit my business. It's not like it can actually help me make more money."

What do you think?

Over the next 14 days, I'll show you 7 practical AI systems that actually unlock savings, time and profit.
-
I'm currently testing an AI system for a client and struggling to approach the testing the way I would a traditional app.

With a normal system, you can map a requirement to a clear outcome. With AI, the "requirement" is often a prediction, a probability, or a behaviour that shifts over time depending on the inputs, the underlying AI model and numerous other factors.

That's the challenge. How do you test something that doesn't always give the same answer twice? How do you measure accuracy, trust, or drift in a way that stakeholders can actually understand?

As a tester, I've learned to accept that AI won't ever give perfect certainty. Instead, I focus on what can be controlled: the data, the assumptions, the transparency of results. This is where I bring in the stoic testing approach. The role of the stoic tester is to stay grounded and ask the hard questions.

So how do we test? By holding to what can be controlled: the quality of data, the clarity of assumptions, the fairness of outcomes. We question, we measure and we challenge. As the Stoic Tester I try to remember: what is outside our control must be observed, not ignored.

#thestoictester #aitesting
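One concrete way to make "it won't give the same answer twice" testable is to replace exact-match assertions with thresholds over a labelled evaluation set and repeated runs. The sketch below is a generic illustration, not the client system from the post: `classify` stands in for whatever call the system under test exposes, and the accuracy and stability thresholds are placeholders a team would agree with stakeholders.

```python
# Sketch: threshold-based checks for a non-deterministic classifier.
# `classify` is a stand-in for the real system call; thresholds are placeholders.
import statistics
from typing import Callable

# Tiny labelled evaluation set (in practice this would be versioned test data).
EVAL_SET = [
    ("refund not received after 30 days", "complaint"),
    ("thanks, the new dashboard is great", "praise"),
    ("how do I export my invoices?", "question"),
]

def accuracy(classify: Callable[[str], str]) -> float:
    """Fraction of evaluation examples the system labels correctly on one run."""
    hits = sum(1 for text, expected in EVAL_SET if classify(text) == expected)
    return hits / len(EVAL_SET)

def check_model(classify: Callable[[str], str], runs: int = 5,
                min_accuracy: float = 0.8, max_spread: float = 0.1) -> None:
    """Run the eval set several times; fail if accuracy is low or unstable."""
    scores = [accuracy(classify) for _ in range(runs)]
    mean = statistics.mean(scores)
    spread = max(scores) - min(scores)
    assert mean >= min_accuracy, f"mean accuracy {mean:.2f} below {min_accuracy}"
    assert spread <= max_spread, f"run-to-run spread {spread:.2f} above {max_spread}"
    print(f"OK: mean accuracy {mean:.2f}, spread {spread:.2f} over {runs} runs")

# Deterministic stand-in so the sketch runs end to end without a real model.
def fake_classify(text: str) -> str:
    if "refund" in text:
        return "complaint"
    if "great" in text:
        return "praise"
    return "question"

check_model(fake_classify)
```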