You need to know where your data goes when using AI. OpenPCC is one of the first 𝗼𝗽𝗲𝗻-𝘀𝗼𝘂𝗿𝗰𝗲 𝘀𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝘀 designed to solve that problem. It lets companies use large language models without exposing sensitive data: no logging, no retention, no model-side visibility. Its architecture for securing data is similar to Apple’s Private Cloud Compute, but fully open, auditable, and deployable across any cloud, enterprise AI stack, or even your own servers. OpenPCC acts as a 𝗽𝗿𝗶𝘃𝗮𝗰𝘆 𝗹𝗮𝘆𝗲𝗿 between your systems and the AI model:
▪️ all data stays encrypted
▪️ nothing is stored or learned from
▪️ no vendor access
▪️ the entire process can be verified, not just trusted
📍 Access the full library on GitHub: https://lnkd.in/gKxwN_zU
📍 Learn more about AI security layers: https://lnkd.in/gPj2AQNE
It reflects a bigger shift happening in AI right now: enterprises want the power of LLMs, but they need verifiable privacy, not promises. Confident Security, the team behind OpenPCC, is pushing toward that future with a standard any AI company can adopt. If AI is becoming part of your core workflow, understanding and controlling the data path isn’t optional anymore.
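The "verified, not just trusted" idea can be sketched in miniature: before sending a prompt, the client checks the server's attested code measurement against a published transparency log, and only then encrypts the request so nothing readable leaves your systems. A toy illustration using only stdlib primitives (all names are hypothetical, and the HMAC-keystream cipher is a stand-in for real AEAD; OpenPCC's actual protocol and APIs differ):

```python
import hashlib
import hmac
import secrets

# Hypothetical transparency log of approved server code measurements.
TRANSPARENCY_LOG = {hashlib.sha256(b"openpcc-inference-image-v1").hexdigest()}

def verify_attestation(reported_measurement: str) -> bool:
    # Accept the server only if its attested measurement is publicly logged.
    return reported_measurement in TRANSPARENCY_LOG

def encrypt_prompt(prompt: bytes, session_key: bytes) -> bytes:
    # Toy XOR stream cipher with an HMAC-derived keystream (illustrative only).
    keystream = b""
    counter = 0
    while len(keystream) < len(prompt):
        block = hmac.new(session_key, counter.to_bytes(8, "big"), hashlib.sha256)
        keystream += block.digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(prompt, keystream))

# Client flow: verify the attestation first, encrypt second.
measurement = hashlib.sha256(b"openpcc-inference-image-v1").hexdigest()
assert verify_attestation(measurement)
key = secrets.token_bytes(32)
ciphertext = encrypt_prompt(b"patient record: ...", key)
plaintext = encrypt_prompt(ciphertext, key)  # XOR keystream: same op decrypts
```

The point of the ordering is the key property the post describes: if the attested measurement is not in the public log, the client never sends anything, so "trust" is replaced by a check anyone can reproduce.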
Open Source Software Trends
-
-
In much of the world, digital financial tools are a daily reality—used to process paychecks, pay for dinner, buy groceries, and more. But 1.4 billion adults in low- and middle-income countries still lack access to these tools. This isn’t just an inconvenience for them; it's a barrier to economic growth and empowerment. According to a 2023 UN analysis, digital public infrastructure—including digital ID, payments, and data exchange—could accelerate GDP growth in these countries by 20 to 33 percent. That’s where Mojaloop Foundation comes in: Their open-source software makes it possible for countries to build inclusive digital payment systems that allow anyone with a mobile phone to send and receive money securely, instantly, and affordably. This has the potential to drive economic inclusion—and open the doors to financial freedom—for billions.
-
I think Red Hat’s launch of 𝗹𝗹𝗺-𝗱 could mark a turning point in 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗔𝗜. While much of the recent focus has been on training LLMs, the real challenge is scaling inference: the process of delivering AI outputs quickly and reliably in production. This is where AI meets the real world, and it's where cost, latency, and complexity become serious barriers.
𝗜𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗶𝘀 𝘁𝗵𝗲 𝗡𝗲𝘄 𝗙𝗿𝗼𝗻𝘁𝗶𝗲𝗿
Training models gets the headlines, but inference is where AI actually delivers value: through apps, tools, and automated workflows. According to Gartner, over 80% of AI hardware will be dedicated to inference by 2028. That’s because running these models in production is the real bottleneck. Centralized infrastructure can’t keep up. Latency gets worse. Costs rise. Enterprises need a better way.
𝗪𝗵𝗮𝘁 𝗹𝗹𝗺-𝗱 𝗦𝗼𝗹𝘃𝗲𝘀
Red Hat’s llm-d is an open source project for distributed inference. It brings together:
1. Kubernetes-native orchestration for easy deployment
2. vLLM, the top open source inference server
3. Smart memory management to reduce GPU load
4. Flexible support for all major accelerators (NVIDIA, AMD, Intel, TPUs)
5. AI-aware request routing for lower latency
All of this runs in a system that supports any model, on any cloud, using the tools enterprises already trust.
𝗢𝗽𝘁𝗶𝗼𝗻𝗮𝗹𝗶𝘁𝘆 𝗠𝗮𝘁𝘁𝗲𝗿𝘀
The AI space is moving fast. New models, chips, and serving strategies are emerging constantly. Locking into one vendor or architecture too early is risky. llm-d gives teams the flexibility to switch tools, test new tech, and scale efficiently without rearchitecting everything.
𝗢𝗽𝗲𝗻 𝗦𝗼𝘂𝗿𝗰𝗲 𝗮𝘁 𝘁𝗵𝗲 𝗖𝗼𝗿𝗲
What makes llm-d powerful isn’t just the tech, it’s the ecosystem. Forged in collaboration with founding contributors CoreWeave, Google Cloud, IBM Research, and NVIDIA, joined by industry leaders AMD, Cisco, Hugging Face, Intel, Lambda, and Mistral AI, and supported at the University of California, Berkeley, and the University of Chicago, the project aims to make production generative AI as omnipresent as Linux.
𝗪𝗵𝘆 𝗜𝘁 𝗠𝗮𝘁𝘁𝗲𝗿𝘀
For enterprises investing in AI, llm-d is the missing link. It offers a path to scalable, cost-efficient, production-grade inference. It integrates with existing infrastructure. It keeps options open. And it’s backed by a strong, growing community. Training was step one. Inference is where it gets real. And llm-d is how companies can deliver AI at scale: fast, open, and ready for what’s next.
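To make the "Kubernetes-native" idea concrete, here is a minimal sketch of what serving an open model with vLLM on Kubernetes can look like today (the deployment name, model choice, replica count, and resource values are illustrative; llm-d layers its own routing and cache management on top of this kind of setup rather than replacing it):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference            # hypothetical name
spec:
  replicas: 2                    # scale horizontally behind a Service
  selector:
    matchLabels: {app: llm-inference}
  template:
    metadata:
      labels: {app: llm-inference}
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest    # vLLM's OpenAI-compatible server
          args: ["--model", "mistralai/Mistral-7B-Instruct-v0.2"]
          ports:
            - containerPort: 8000           # OpenAI-style HTTP API
          resources:
            limits:
              nvidia.com/gpu: 1             # one accelerator per replica
```

Because each replica speaks the OpenAI-compatible API, swapping the model or the accelerator vendor is a manifest change, not a rearchitecture, which is exactly the optionality the post argues for.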
-
Last night, I was chatting in the hotel bar with a bunch of conference speakers at Goto-CPH about how evil PR-driven code reviews are (we were all in agreement), and Martin Fowler brought up an interesting point. The best time to review your code is when you use it. That is, continuous review is better than what amounts to a waterfall review phase. For one thing, the reviewer has a vested interest in assuring that the code they're about to use is high quality. Furthermore, you are reviewing the code in a real-world context, not in isolation, so you are better able to see if the code is suitable for its intended purpose. Continuous review, of course, also leads to a culture of continuous refactoring. You review everything you look at, and when you find issues, you fix them. My experience is that PR-driven reviews rarely find real bugs. They don't improve quality in ways that matter. They DO create bottlenecks, dependencies, and context-swap overhead, however, and all that pushes out delivery time and increases the cost of development with no balancing benefit. I will grant that two or more sets of eyes on the code leads to better code, but in my experience, the best time to do that is when the code is being written, not after the fact. Work in a pair, or better yet, a mob/ensemble. One of the teams at Hunter Industries, which mob/ensemble programs 100% of the time on 100% of the code, went a year and a half with no bugs reported against their code, with zero productivity hit. (Quite the contrary—they work very fast.) Bugs are so rare across all the teams, in fact, that they don't bother to track them. When a bug comes up, they fix it. Right then and there. If you're working in a regulatory environment, the Driver signs the code, and then any Navigator can sign off on the review, all as part of the commit/push process, so that's a non-issue. There's also a myth that it's best if the reviewer is not familiar with the code. I *really* don't buy that. 
An isolated reviewer doesn't understand the context. They don't know why design decisions were made. They have to waste a vast amount of time coming up to speed. They are also often not in a position to know whether the code will actually work. Consequently, they usually focus on trivia like formatting. That benefits nobody.
-
The AI ecosystem is becoming increasingly diverse, and smart organizations are learning that the best approach isn't "open-source vs. proprietary"—it's about choosing the right tool for each specific use case.
The Strategic Shift We're Witnessing:
🔹 Hybrid AI Architectures Are Winning
While proprietary solutions like GPT-4, Claude, and enterprise platforms offer cutting-edge capabilities and support, open-source tools (Llama 3, Mistral, Gemma) provide transparency, customization, and cost control. The most successful implementations combine both—using proprietary APIs for complex reasoning tasks while leveraging open-source models for specialized, high-volume, or sensitive workloads.
🔹 The "Right Tool for the Job" Philosophy
Notice how these open-source tools interconnect and complement existing enterprise solutions? Modern AI systems blend the best of both worlds: vector databases (Qdrant, Weaviate) for data sovereignty, cloud APIs for advanced capabilities, and deployment frameworks (Ollama, TorchServe) for operational flexibility.
🔹 Risk Mitigation Through Diversification
Smart enterprises aren't putting all their eggs in one basket. Open-source options provide vendor independence and fallback strategies, while proprietary solutions offer reliability, support, and advanced features. This dual approach reduces both technical and business risk.
The Real Strategic Value:
Organizations are discovering that having optionality is more valuable than any single solution.
Open-source tools provide:
• Cost optimization for specific use cases
• Data control and compliance capabilities
• Innovation experimentation without vendor constraints
• Backup strategies for critical systems
Meanwhile, proprietary solutions continue to excel at:
• Cutting-edge performance for complex tasks
• Enterprise support and reliability
• Rapid deployment with minimal setup
• Advanced features that take years to replicate
What This Means for Your Strategy:
• Technical Teams: Build expertise across both open-source and proprietary tools
• Product Leaders: Map use cases to the most appropriate solution type
• Executives: Think portfolio approach—not vendor lock-in OR vendor avoidance
The winning organizations in 2025-2026 aren't the ones committed to a single approach. They're the ones with the most strategic flexibility in their AI toolkit.
Question for the community: How are you balancing open-source and proprietary AI solutions in your organization? What criteria do you use to decide which approach fits each use case?
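The hybrid "right tool for the job" pattern described above often reduces to a small dispatch layer in practice: sensitive data stays on a local open-source model, complex reasoning goes to a proprietary API, and everything else takes the cheap default. A minimal sketch (the backend names and the routing criteria are hypothetical; a real router would call actual endpoints rather than return labels):

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_pii: bool = False   # sensitive data must stay in-house
    complexity: int = 1          # 1 = simple extraction, 5 = multi-step reasoning

def route(req: Request) -> str:
    """Pick a backend per the hybrid strategy: open-source for sensitive or
    routine high-volume work, proprietary API only for complex reasoning."""
    if req.contains_pii:
        return "local:llama-3"        # data never leaves our infrastructure
    if req.complexity >= 4:
        return "api:proprietary-llm"  # pay the premium only where it adds value
    return "local:mistral-7b"         # cheap default for routine workloads

# Example dispatch decisions
print(route(Request("summarize this medical record", contains_pii=True)))
print(route(Request("plan a multi-step data migration", complexity=5)))
print(route(Request("extract the invoice date")))
```

The useful property of putting the decision in one function is that the routing policy becomes testable and auditable, which is where the compliance and cost arguments above actually get enforced.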
-
New research from the Massachusetts Institute of Technology! In my opinion, the following is going to change as more people and companies realize the advantages of open models: "Closed models dominate, with on average 80% of monthly LLM tokens using closed models despite much higher prices - on average 6x the price of open models - and only modest performance advantages. Frontier open models typically reach performance parity with frontier closed models within months, suggesting relatively fast convergence. Nevertheless, users continue to select closed models even when open alternatives are cheaper and offer superior performance. This systematic underutilization is economically significant: reallocating demand from observably dominated closed models to superior open models would reduce average prices by over 70% and, when extrapolated to the total market, generate an estimated $24.8 billion in additional consumer savings across 2025. These results suggest that closed model dominance reflects powerful drivers beyond model capabilities and price - whether switching costs, brand loyalty, or information frictions - with the economic magnitude of these hidden factors proving far larger than previously recognized, reframing open models as a largely latent, but high-potential, source of value in the AI economy."
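A quick back-of-the-envelope check shows how a 6x price gap and an 80% closed-model share translate into savings of this magnitude (my own arithmetic, not the paper's model; the paper reallocates only "observably dominated" usage, so its 70%+ figure is the more conservative claim):

```python
open_price = 1.0                  # normalize open-model price per token to 1
closed_price = 6.0 * open_price   # closed models average ~6x the price
closed_share = 0.80               # ~80% of monthly tokens go to closed models

# Token-weighted average price today vs. an all-open counterfactual.
current_avg = closed_share * closed_price + (1 - closed_share) * open_price
reduction = 1 - open_price / current_avg

print(f"current average price: {current_avg:.1f}x open")            # 5.0x
print(f"price cut if all demand moved to open: {reduction:.0%}")    # 80%
```

So even this crude upper-bound calculation lands in the same neighborhood as the paper's 70%+ estimate, which makes the headline savings figure plausible rather than surprising.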
-
Open-Source LLMs Are the Future: The Power of Custom AI
Over the past two decades, I have seen multiple technology transformations—from proprietary to open-source ecosystems. Whether it was operating systems, web technologies, middleware, or cloud computing, the pattern has remained the same: open systems drive innovation, flexibility, and long-term success. Now, AI is undergoing the same transformation. Open-source LLMs are redefining how businesses adopt and scale AI, providing control, customization, and efficiency that closed systems simply can’t match.
Just yesterday, Alibaba's QwQ-32B was released. Within minutes, I had it up and running in my environment, experimenting with its capabilities. That’s the power of open-source innovation—immediate access to cutting-edge AI, no barriers.
In my latest blog, I explore:
✅ Why open-source LLMs are the future
✅ How businesses can leverage custom, local AI
✅ The rise of agentic AI and how open models will power it
🔗 Read the full blog for insights.
The shift toward self-hosted, domain-specific, and reasoning-driven AI is already happening. Just like Linux, Apache, Java, and Kubernetes shaped modern technology, open LLMs will define the next era of AI-driven applications. Would love to hear your thoughts—where do you see open-source LLMs making the biggest impact? #llm #opensource #innovation
-
The AI landscape is undergoing a fundamental shift, and it’s not the one you think. The competitive frontier isn’t only about building the largest proprietary models. There are two other major trends emerging that haven’t had enough discussion:
1. Open source models have made tremendous strides, especially on cost relative to performance.
2. Compact, fit-for-purpose models can meaningfully out-perform general-purpose LLMs on specific tasks, and do so at dramatically lower cost.
Our Chief Technology Officer and AI team share how we are using open source AI models at Pinterest to achieve similar performance at less than 10% of the cost of leading proprietary AI models. They also share how Pinterest has built in-house, fit-for-purpose models that are able to significantly outperform leading proprietary general-purpose models. The race to build the largest, most powerful models is profound and meaningful. If you want to see a thriving ecosystem of innovation in an AI-driven world, you should also want to see a thriving open source AI community that creates democratization and transparency. It’s a good thing for us all that open source is in the race. For our part, we’ll continue to share our findings in leveraging open source AI so that more companies and builders can benefit from the democratizing effect of open source AI. https://lnkd.in/gmT6UNXs
-
The most powerful asset at CrewAI isn't our technology per se — it's our open-source community. Why? Because an engaged community creates flywheels that are incredibly hard to replicate:
• Innovation Flywheel: Contributors constantly refine the product, accelerating our pace.
• Trust Flywheel: Transparency in open-source builds unmatched credibility and loyalty.
• Growth Flywheel: Word-of-mouth from passionate users expands our audience organically.
Once you've built a strong open-source community, endless opportunities emerge to serve and support it. For example, we recently launched jobs.crewai.com — a dedicated job board connecting companies and professionals passionate about multi-agent AI. The board now has HUNDREDS of signed-up CrewAI engineers from different countries, whom we are matching with jobs and customer opportunities; some are already doing the work. Building open-source isn't just about giving back—it's about cultivating a thriving ecosystem where everyone grows together. Community-driven innovation is unstoppable. Have you experienced the power of open-source in your organization?
-
For the past two years, the AI-in-code narrative has been about creation: auto-complete, copilots, and agents that promise to ship apps in minutes. For the first time, the story is expanding to include repair. This week, Google DeepMind launched CodeMender, an autonomous AI agent that hunts down vulnerabilities, drafts patches, tests them, critiques itself, and submits fixes to open-source repos. In its early phase, CodeMender has upstreamed 72 security fixes - some in codebases spanning millions of lines, the kind of work that would take human teams months. In other words: we’re teaching machines not just to write, but to atone.
Historically - by which I mean, like, last year - cybersecurity was a human sport: a contest of builders and breakers, patchers and penetrators. Now, both sides are automating.
- Attackers fine-tune LLMs to find zero-days, turning them into exploit copilots.
- Defenders deploy repair agents to find and fix them.
The result is an arms race between autonomous systems, unfolding at speeds far beyond human review cycles. Imagine the future: bugs and fixes flying past each other in the night, too fast for any human to follow. Security as algorithmic speed chess. And that sets up the deeper question CodeMender raises: What happens when software starts fixing itself? If an AI can autonomously detect and patch vulnerabilities, we edge toward self-healing infrastructure. But autonomy introduces new fragilities:
▪️ Adversarial corruption: An attacker could poison the model’s feedback loop, tricking its “critique agents” into approving malicious code. The line between “defender” and “attack surface” is one bad update away.
▪️ Human deskilling: Overreliance breeds amnesia: “It’s fine, CodeMender will fix it” is a dangerous cultural default.
▪️ Accountability black holes: If an AI-generated patch breaks production or causes a breach, who holds the bag - the developer, the model, or Google? Your Chief Risk Officer wants to know.
And yet, doing nothing isn’t safer. We are already drowning in insecure code - much of it written by humans on deadlines and LLMs on vibes. The attack surface has outgrown human capacity to defend it. CodeMender represents more than automated patching. It’s a prototype for reflexive software - systems that monitor and adapt their own health. It works in two ways:
→ Reactively, patching known vulnerabilities before they’re exploited.
→ Proactively, refactoring brittle code to eliminate entire classes of vulnerabilities before they occur.
That’s not just “AI for cybersecurity.” That’s AI as immune system - a distributed intelligence layer quietly testing, healing, and hardening the world’s codebase. Autonomy in generation led us to creation at scale. Autonomy in repair might just lead us to resilience at scale. In an age where more software is written by models than by people, self-healing becomes survival - the only way to keep the lights on in a digital world built faster than it can be understood.
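The draft-test-critique cycle described above is, structurally, a simple agent loop: propose a patch, run the tests, let a critique step veto anything suspicious, and only then accept. A toy, self-contained sketch (the propose/test/critique functions here are hypothetical stubs; CodeMender's real agents are LLM-driven and operate on whole codebases):

```python
from typing import Callable, Optional

def repair_loop(buggy_code: str,
                propose_fix: Callable[[str, int], str],
                run_tests: Callable[[str], bool],
                critique: Callable[[str], bool],
                max_attempts: int = 3) -> Optional[str]:
    """Propose-test-critique loop: accept a patch only if it passes the
    test suite AND survives the critique step (e.g. no new vulnerabilities)."""
    for attempt in range(max_attempts):
        candidate = propose_fix(buggy_code, attempt)
        if run_tests(candidate) and critique(candidate):
            return candidate   # validated patch, ready to upstream
    return None                # give up and escalate to a human

# Toy run: "repair" an off-by-one by trying successive candidate rewrites.
buggy = "range(len(xs) - 1)"
candidates = ["range(len(xs) - 2)", "range(len(xs))"]      # hypothetical proposals
patched = repair_loop(
    buggy,
    propose_fix=lambda code, i: candidates[i],
    run_tests=lambda code: code == "range(len(xs))",        # stub test oracle
    critique=lambda code: "- 2" not in code,                # stub security critic
)
print(patched)  # range(len(xs))
```

Note that the critique gate, not the test suite, is what addresses the "adversarial corruption" worry above: a poisoned or weakened critic silently turns the defender into an attack surface, which is why that one step carries most of the trust.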