What if most of the tasks that AI automates were never necessary in the first place? I ask because evangelists of AI promise that LLMs offer huge dividends for efficiency. Our lives will be easier, they argue, as an array of tedious but necessary tasks are offloaded to our AI-EAs. Lately I've been thinking that the real offer is something else.

I noticed this angle when a comment (by a real person) on one of my posts seemed obviously AI-generated. I wondered: what was the point of that? What efficiency does that really offer? LinkedIn's founder configured the social network as an extension of the office, and it is only through that lens that such behaviour makes any sense. The point of such AI comments seems not to be making a connection but performing the act of networking, of work. It looks like a shift towards a moment where our AI avatars write posts and other AI avatars respond, so that the office looks busy while no one is home.

The use of AI for the performance of work is not confined to LinkedIn, because the performance is not new. In 2013 David Graeber pointed out the phenomenon of "BS jobs", and I've found his observations useful for thinking about LLM futures. He argued that a significant proportion of jobs are pointless, not just because he said so, but because he asked people what they did and they told him as much. One of the key takeaways from "BS Jobs" was that increases in efficiency over time have not led to less work; instead, the moral configuration of labour has meant that performance has expanded to fill the gap. Rather than the immense productivity of industrialisation ushering in lives of ever-increasing leisure and freedom, we have created the performance of work to maintain appearances.

LLMs promise to automate much of this performance, from the production of endless reams of unread professional documentation to the drafting of junk mail by the tonne. So what does that mean?
On the one hand, my post about the cringe nature of AI points to scepticism towards not just AI automation but also the value of the tasks it automates. Yet at the same time, the responses I've encountered to AI's promise make me concerned. Some announce with relish, "if I can offload x task to AI, that will free up more time for the real work I've been kept from". Others say, "most of my job is done by ChatGPT these days, but thankfully my boss turns a blind eye". These sentiments suggest a future where the primary result of AI's productivity is a further deepening of the performance of work for its own sake, with all the accompanying malaise and insecurity.

Time will tell if my concerns are unwarranted, and you can tell me: two years after ChatGPT's arrival, has your workload lessened? Has your sense of purpose increased? Your leisure time? Do you feel more secure in your position alongside the arrival of so much efficiency? Or is the office empty while everyone is just as busy as they ever were?
William Scates Frances’ Post
More Relevant Posts
-
Your team might already be experimenting with AI — with or without your knowledge. At TechCXO, we’ve seen this “shadow AI” risk escalate: data leaks, inconsistent practices, competitive leakage. In our latest blog, we map a 5-step roadmap to convert rogue AI use into a secure, strategic advantage — and show how CTO + CMO alignment is critical. (🔗 Read here: https://lnkd.in/gptZwQzb) How is your org managing unsanctioned AI tools today? What’s been the biggest challenge? #AIstrategy #ShadowAI #CTO #Governance #Innovation TechCXO / TechCXO Product and Technology / TechCXO Mid-Atlantic
Are your employees already using ChatGPT, feeding proprietary data into consumer tools, or debugging code with AI? The answer is likely yes. While leaders are still debating an AI strategy, employees are already running with it. This creates major risks—from data loss to losing top talent to competitors. So, how do you turn this "shadow AI" from a liability into your greatest asset? Our latest blog provides a clear roadmap. We cover why a strong partnership between your CTO and CMO is essential and give you a 5-step checklist to move from chaotic AI use to a secure, strategic, and scalable approach. Don't wait. Learn how to lead your organization's AI adoption journey: https://lnkd.in/gptZwQzb Authors: Matt Oess and Bryan Dennstedt🌱 #AI #TechLeadership #DigitalTransformation #Strategy #CIO #CTO #CMO
-
🚨 The Rise of Agentic AI: The Future of Work Is Already Here!

A few years ago, AI tools could only respond when you asked them to. You'd give a prompt, they'd give an answer. End of story. But something new is happening. AI is no longer waiting for us. It's starting to act. Welcome to the Agentic Era of AI, where technology is transforming from a passive assistant into an active collaborator.

What Exactly Is Agentic AI?
Imagine working with a colleague who doesn't wait for you to tell them what to do. They understand the goal, plan the steps, use available tools, and report back when the job is done.

Real-World Examples
1. Smart Personal Assistants: Say "Plan my client meeting next week and brief me on their recent news," and an AI agent can browse, summarize, schedule, and prepare the report all on its own.
2. Software Development: Developers are now using AI agents that read entire codebases, fix bugs, and even create pull requests, freeing engineers to focus on design and innovation.
3. Enterprise Automation: From processing invoices to rerouting supply chains, agents are managing tasks end to end, keeping businesses running even when humans log off.

Why This Matters
Agentic AI isn't just another tech upgrade. It's a shift in how we work:
✓ From tools to teammates: Agents act like proactive partners.
✓ Faster operations: They automate multi-step workflows humans used to do manually.
✓ More innovation: With routine work handled, humans can focus on creativity and strategy.
It's not about replacing people. It's about augmenting human capability, letting us focus on what truly requires our judgment, empathy, and imagination.

⚠️ A Note on Ethics & Trust
As AI gains autonomy, trust and transparency become critical. Who's accountable if an AI makes a bad call? Companies must set boundaries: clear governance, audit trails, and human oversight. Agentic AI should collaborate, not operate unchecked.

The Leaders Building This Future
• OpenAI is rolling out ChatGPT agents that browse, code, and manage workflows.
• Google DeepMind's Gemini 2.0 enables real-time tool use and even controls robots.
• Anthropic focuses on agent safety and open standards for connecting tools and data.
The competition is fierce, and it's accelerating innovation at a historic pace.

Key Takeaway
We're entering a new phase of human-AI collaboration. Agentic AI will reshape how we build, lead, and make decisions. The question isn't if it will transform your work, but how soon. The best way to prepare? 🚨 Start learning, start experimenting, and start imagining how AI can become your next teammate.

Here to walk with you on this journey of AI. Let's connect! Arnol Fokam
-
I think Gary Marcus has among the most realistic views on the benefits of AI:

Quote: "In recent years, many have believed that the key to getting there [AGI] was to improve on generative A.I. systems, such as ChatGPT. These systems create text, images, code and even videos by training on vast data sets of content produced by humans. They are broad in application yet accessible even to the most novice users of digital tools. Buoyed by the initial progress of chatbots, many thought that A.G.I. was imminent. But these systems have always been prone to hallucinations and errors. Those obstacles may be one reason generative A.I. hasn't led to the skyrocketing in profits and productivity that many in the tech industry predicted. A recent study run by M.I.T.'s NANDA Initiative found that 95 percent of companies that did A.I. pilot studies found little or no return on their investment. A recent financial analysis projects an estimated shortfall of $800 billion in revenue for A.I. companies by the end of 2030.

If the strengths of A.I. are to truly be harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools, and instead concentrate on narrow, specialized A.I. tools engineered for particular problems. Because, frankly, they're often more effective. Until the advent of chatbots, most A.I. developers focused on building special-purpose systems, for things like playing chess or recommending books and movies to consumers. These systems were not nearly as sexy as talking to a chatbot, and each project often took years to get right. But they were often more reliable than today's generative A.I. tools, because they didn't try to learn everything from scratch and were often engineered on the basis of expert knowledge...

Shifting focus away from chatbots doesn't mean that researchers should give up pursuing A.G.I., which could eventually be more effective by inventing new approaches. And it doesn't mean giving up on generative A.I. altogether; it can certainly still play a beneficial role in some specific tasks, such as coding, brainstorming and translation. Right now, it feels as if Big Tech is throwing general-purpose A.I. spaghetti at the wall and hoping that nothing truly terrible sticks. As the A.I. pioneer Yoshua Bengio has recently emphasized, advancing generalized A.I. systems that can exhibit greater autonomy isn't necessarily aligned with human interests. Humanity would be better served by labs devoting more resources on building specialized tools for science, medicine, technology and education."
-
As a fairly new user of AI tools, I found this guide helpful in my continuous learning quest. What other tools should I check out?
There's no single "best" AI tool. The advantage goes to those who know which one to use when. Different AI platforms excel at different tasks. Some are built for creativity, others for research. Some prioritize speed, others cost-effectiveness. Here's when to use each:

ChatGPT
• Best for versatility across multiple use cases
• Handles text, images, audio, video, and web browsing
• Strong for coding, workflow automation, and plugin integrations
• Ideal when you need one platform that does everything reasonably well

Gemini
• Built for the Google ecosystem
• Seamlessly integrates with Gmail, Docs, Sheets, and Drive
• Handles multimodal input and output effectively
• Choose this if your work lives inside Google Workspace

Claude
• Designed for deep reasoning and accuracy
• Excels at processing lengthy documents (200k+ tokens)
• Strong for legal work, academic research, and policy analysis
• Best when precision and nuanced understanding matter most

Grok
• Powered by real-time X (Twitter) data
• Fast, responsive, and personality-driven
• Monitors live events and breaking news effectively
• Use when you need current information and witty engagement

DeepSeek
• Chinese-developed, cost-effective alternative
• Strong technical and coding capabilities
• Optimized for speed and large-scale workloads
• Great for teams needing affordable AI at scale

The smartest approach isn't picking one and sticking with it. It's knowing which tool fits which task. Which AI do you reach for most often, and what drives that choice?

Image credit: Andrew Bolis
-
Stay Ahead with AI - Before It's Too Late

We've seen more change in the way we work in the last two years than in the previous twenty. And the reason? AI. It's no longer a futuristic concept. AI is already inside your daily workflow - editing your docs, writing your emails, summarizing your meetings, even preparing your next job interview. The shift is silent... but massive.

Just look at how fast this happened 👇
⚡ 2019-20: AI quietly powers Netflix, YouTube, and Google Search
⚡ 2021: GPT-3 sparks global curiosity
⚡ 2022: ChatGPT hits 100M users in just 2 months
⚡ 2023-24: AI tools explode, enterprises begin adoption
⚡ 2025: AI is everywhere - Gmail, Excel, LinkedIn, Canva, and beyond

Now the big question - 👉 Are you adapting fast enough? Because the truth is, AI isn't taking jobs. People using AI are.

Where AI Is Already Changing Work
Across every function - HR, marketing, design, operations, testing, and tech - AI is doing things faster, smarter, and often, better. It's writing proposals, creating videos, analyzing spreadsheets, and drafting emails. And soon, it'll do things you haven't even imagined yet.

What's Making All This Possible?
⚡ Foundation Models (GPT-4, Claude, Gemini) - trained on massive data
⚡ Cloud Infrastructure (AWS, Azure, GCP) - giving AI the power to scale
⚡ API Ecosystems - letting tools like Notion, Slack, Zoom talk directly to AI
⚡ Multi-Modal AI - now understands text, images, data, and even voice
This isn't just automation. It's augmentation - a new way of working, thinking, and learning.

The AI-Ready Professional Framework 🧩
Most professionals fall into one of these levels:
Level 1 - Curious: You're experimenting with tools like ChatGPT
Level 2 - Capable: You use AI to complete some tasks weekly
Level 3 - Career-Smart: You use AI strategically to save time, boost output, and grow your influence
Most of us are between Level 1 and 2 - and that's okay. The goal is to move towards Level 3 - where AI doesn't just assist you, it amplifies you.

If you're serious about staying relevant, learn how to work with AI, not against it. Because five years from now, the biggest career gap won't be between coders and non-coders. It'll be between those who understand AI - and those who don't.

🚀 Stay curious. Stay adaptable. Stay AI-ready.
-
I read an interesting article on the topic of AI implementation. As I sat reading it over my Saturday morning coffee, a few points stood out which I'd like to share:
1. AI is not a layer sprinkled on top of an existing business system or process.
2. AI pilots are more likely to succeed when authority is shared and front-line teams help shape adoption, avoiding a gatekeeper or central control group.
3. Think process first - too many companies think software first. The majority of bottlenecks lie within disorganised data or inconsistent methodology.
4. Integration is often treated as just connecting multiple systems, when measuring concept-to-impact has shown better results.
5. The rise of shadow AI is not to be overlooked. The article shows that employees at over 90% of surveyed companies already use personal AI tools like ChatGPT at work, while only about 40% of companies have purchased official licenses.
6. Many projects go ahead because businesses feel they need an AI initiative, rather than focusing on solving a real business problem.
This list could go on, but I feel these points are enough to give you some insight into the article. Give it a read, and feel free to share your thoughts below 💡. https://lnkd.in/eqXjezYX
-
The State of AI 2025 report revealed two surprises.

The first one? The "wow moment" 1,183 professionals had with AI in the last year:
28% said advanced coding capabilities.
22% were shocked by media generation breakthroughs.
17% didn't expect deep research and analysis capabilities.
11% were surprised by autonomous AI agents.

The second surprise is more revealing: none of that flashy stuff is what businesses actually use. The flashy stuff delights us, but today's value comes from solving core business pains, not waiting for tomorrow's technology. The impressive demos? They're tomorrow's tools. The boring automations? They're today's competitive advantage.

The actual adoption numbers tell a different story:
• Content generation and writing: 60%
• Documentation and knowledge retrieval: 58%
• Meeting notes and transcription: 52%
• Data analysis: 33%
• Marketing automation: 19%
• Customer service: 15%

The businesses seeing success with AI aren't the ones chasing viral demos. They're the ones systematically eliminating friction from their operations. Companies are using today's AI to reclaim 10 hours a week. They're automating research that used to take days. They're generating content that would have required expensive agencies. This is the gap between spectacle and substance.

Many small businesses are still using ChatGPT like it's a slightly smarter Google. Maybe they draft an email. Maybe they ask for a quick summary. That's not AI adoption.

The question is no longer if AI will transform your business, but when. What high-priority task would you tackle first with AI automation?
-
By the time there's an agreed AI policy and "approved" tools, employees will likely have already run tens or hundreds of thousands of prompts. On tools you don't know about. Solving problems you haven't identified or measured. Learning things they can't share.

At our recent Customer Advisory Board, Stuart Winter-Tear shared a pattern he sees constantly: "Companies restrict employees to, say, Copilot because they're a Microsoft shop. Then when you interview frontline workers, they're all using ChatGPT anyway."

Someone else described knowing of a global enterprise with a complete ban on generative AI platforms, whose employees are forced to use an in-house LLM that's "miles behind ChatGPT." So what happens then? People go rogue with external tools. But they can't share what's working, because the tool they're using is banned.

Think about it from the employee's perspective. They're getting more and more comfortable with AI tools in their personal lives, but they're forced to use inferior alternatives at work. So they keep using what actually works for them. I mean, if you could save time on a boring task and walk your dog instead, you'd take it, right?

But no one can discuss their discoveries. Teams can't learn from each other. There's zero psychological safety around AI experimentation, so some of your best people are forced to go dark with their AI usage. Where's the value to the organisation if they're solving real problems with tools they've chosen and mastered, but can't tell anyone? Meanwhile, the organisation is measuring the "success" of approved tools that people are using under duress.

One member shared their enlightened approach: "We let people use what works for them, but we've set clear guardrails. Instead of trying to lock everything down, we teach them not to put payroll data in ChatGPT and explain why that's risky."

The people on the front line are far more likely to come up with better solutions to real problems. And that could be a major win for the business, but only if you create a safe environment for it. If you want innovation and growth, it has to be safe to experiment. Safe to fail. The fastest way to succeed is to fail in the open, learn, and iterate. A psychologically safe environment isn't one with no boundaries. You can (and should) set the guardrails. But shadow AI with no oversight is far riskier than transparent usage with proper guidelines and support.
-
Most Companies Are Asking the Wrong Question About AI

A business owner shows us an article about AI agents or a vendor proposal promising "autonomous AI solutions." They ask: "Should we be doing this?" Wrong question. The real question is: "What would actually make our work easier this week?"

Companies think there are only two options with AI: use ChatGPT casually, or build some complex autonomous system. There's a huge middle ground people seldom talk about. That's where the actual value lives.

The Problem With How We Talk About AI
Do you use ChatGPT to help review contracts or proposals? Maybe to draft an email response or think through a project plan? It saves you maybe 20-30 minutes each time. Now you're asked if you should "scale this up." What does that mean? A better prompt? Some workflow? Or maybe you got pitched a six-figure solution to automate your operations. The demo looks incredible. But you process maybe 150 of these things a month. Is that enough volume? Is this overkill? You cannot tell. We see this constantly: people who know their business inside and out but are stuck because they don't know how to think about where AI fits.

There's Actually a Spectrum
Between "chat with AI when you need help" and "fully autonomous system," there's a range of options. Each works for different situations, volumes, and risk profiles. Most people miss this: the biggest returns happen in the middle, not at the autonomous endpoint. The middle is where AI can access your information — search your files, pull data, run analyses — while you still make the decisions. The tedious stuff disappears. The judgment stays with you. Most companies never need full automation. The middle ground captures about 80% of the value at a fraction of the complexity and cost.

The Questions You Should Be Asking
Do you do this 50 times a month or 5,000 times? Is every case unique, or are there clear patterns? What happens if there's an error? These questions determine what's worth building. Most people can't answer them objectively about their own operations. You're too close to it. You know what feels painful, but you might not see the patterns we see across dozens of companies.

Where We Come In
The companies getting real value from AI right now figured out what they actually needed, matched the tool to the problem, and knew when to stop adding complexity. We partner with clients to find that sweet spot. We understand your goals, your people, and your constraints. Two objectives: make sure you get value from your investment, and make sure your teams actually adopt the tools. Together, we figure out what makes sense for you — real value, no unnecessary complexity. Then we build it so your team will actually use it.

Want to figure out where AI could actually help your operations? Let's talk about where you are and what might make sense for you. The spectrum exists. The valuable work lives in the middle. Let's find your spot.
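For readers who build things, the "middle ground" described above, where AI proposes and a human decides, can be sketched as a simple approval loop. This is an illustrative pattern only, not any vendor's product; `draft_reply` here is a hypothetical stand-in for whatever model call you would actually use.

```python
def draft_reply(ticket: str) -> str:
    """Stand-in for an AI call that proposes an action.
    (Assumption: in practice this would be a model API request.)"""
    return f"Suggested response for: {ticket}"

def middle_ground(tickets, approve):
    """AI drafts, a human decides; nothing goes out automatically."""
    sent, held = [], []
    for t in tickets:
        proposal = draft_reply(t)
        # Judgment stays with the human: only approved proposals
        # are sent, the rest are queued for manual handling.
        (sent if approve(proposal) else held).append(proposal)
    return sent, held

# Example: a reviewer who holds back anything mentioning "refund"
sent, held = middle_ground(
    ["password reset", "refund request"],
    approve=lambda p: "refund" not in p,
)
```

The point of the sketch is where the `approve` callback sits: the tedious drafting is automated, but the decision to act is still a human one.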
It's one of the reasons the gap between the rich and the poor is so wide. The average worker does not see the benefit of improved productivity; the company's bottom line and the C-suite see the benefits. Hours stay the same but profits skyrocket, and the gains are not passed down to the average worker. Did you know: if you had earned $100,000 a day since Jesus was born, you wouldn't even have 1/5th of the money Elon Musk currently has? No one should have that much money. That wealth should have been distributed to the workers whose labour produced the increased productivity!