🚗 𝐎𝐩𝐞𝐧 𝐒𝐨𝐮𝐫𝐜𝐞: 𝐓𝐡𝐞 𝐍𝐞𝐰 "𝐒𝐨𝐮𝐥" 𝐨𝐟 𝐒𝐦𝐚𝐫𝐭 𝐂𝐚𝐫𝐬? 𝐋𝐞𝐭’𝐬 𝐃𝐞𝐛𝐚𝐭𝐞!

As always, this reflects my personal perspective: the automotive industry is racing toward an open-source future. But 𝐰𝐡𝐲 𝐚𝐫𝐞 𝐜𝐚𝐫 𝐨𝐩𝐞𝐫𝐚𝐭𝐢𝐧𝐠 𝐬𝐲𝐬𝐭𝐞𝐦𝐬 (𝐎𝐒) 𝐬𝐡𝐢𝐟𝐭𝐢𝐧𝐠 𝐟𝐫𝐨𝐦 𝐩𝐫𝐨𝐩𝐫𝐢𝐞𝐭𝐚𝐫𝐲 𝐛𝐥𝐚𝐜𝐤 𝐛𝐨𝐱𝐞𝐬 𝐭𝐨 𝐜𝐨𝐥𝐥𝐚𝐛𝐨𝐫𝐚𝐭𝐢𝐯𝐞 𝐜𝐨𝐝𝐞? From 𝐋𝐢 𝐀𝐮𝐭𝐨'𝐬 𝐇𝐚𝐥𝐨 𝐎𝐒 to 𝐏𝐮𝐡𝐮𝐚 𝐁𝐚𝐬𝐢𝐜 𝐒𝐨𝐟𝐭𝐰𝐚𝐫𝐞'𝐬 𝐄𝐚𝐬𝐲𝐗𝐌𝐞𝐧 and 𝐇𝐮𝐚𝐰𝐞𝐢’𝐬 𝐇𝐚𝐫𝐦𝐨𝐧𝐲 𝐎𝐒, Chinese automakers are betting big on open-source solutions, and global projects like 𝐀𝐮𝐭𝐨𝐦𝐨𝐭𝐢𝐯𝐞 𝐆𝐫𝐚𝐝𝐞 𝐋𝐢𝐧𝐮𝐱 and 𝐄𝐁 𝐜𝐨𝐫𝐛𝐨𝐬 are helping lead the charge. What’s driving this revolution?

🔑 𝐊𝐞𝐲 𝐃𝐫𝐢𝐯𝐞𝐫𝐬:
1️⃣ 𝐈𝐧𝐧𝐨𝐯𝐚𝐭𝐢𝐨𝐧 𝐚𝐭 𝐒𝐩𝐞𝐞𝐝: An open-source OS taps into global developer ecosystems, accelerating R&D cycles. 𝘌𝘹𝘢𝘮𝘱𝘭𝘦: 𝘏𝘢𝘭𝘰 𝘖𝘚 𝘢𝘥𝘢𝘱𝘵𝘴 𝘵𝘰 𝘯𝘦𝘸 𝘤𝘩𝘪𝘱𝘴 𝘪𝘯 4 𝘸𝘦𝘦𝘬𝘴 𝘷𝘴. 3-6 𝘮𝘰𝘯𝘵𝘩𝘴 𝘧𝘰𝘳 𝘤𝘭𝘰𝘴𝘦𝘥 𝘴𝘺𝘴𝘵𝘦𝘮𝘴.
2️⃣ 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 & 𝐒𝐨𝐯𝐞𝐫𝐞𝐢𝐠𝐧𝐭𝐲: Breaking free from vendor lock-in mitigates the risk of remote control and safeguards national tech sovereignty.
3️⃣ 𝐂𝐨𝐬𝐭 & 𝐂𝐨𝐥𝐥𝐚𝐛𝐨𝐫𝐚𝐭𝐢𝐨𝐧: Shared development slashes licensing fees and fosters cross-industry synergy.
4️⃣ 𝗚𝗲𝗼𝗽𝗼𝗹𝗶𝘁𝗶𝗰𝗮𝗹 𝗧𝗲𝗻𝘀𝗶𝗼𝗻𝘀: "Developed in China" can keep software out of the US market; open source is the exception.

🌍 𝐓𝐡𝐞 𝐁𝐢𝐠𝐠𝐞𝐫 𝐏𝐢𝐜𝐭𝐮𝐫𝐞: China’s push for “software-defined vehicles” isn’t just about cars; it’s a strategic chess move in the global tech race. Meanwhile, Tesla’s 2014 patent giveaway set a precedent: 𝐨𝐩𝐞𝐧 𝐞𝐜𝐨𝐬𝐲𝐬𝐭𝐞𝐦𝐬 𝐛𝐫𝐞𝐞𝐝 𝐢𝐧𝐝𝐮𝐬𝐭𝐫𝐲-𝐰𝐢𝐝𝐞 𝐞𝐯𝐨𝐥𝐮𝐭𝐢𝐨𝐧.

But let’s debate:
- 𝐈𝐬 𝐨𝐩𝐞𝐧 𝐬𝐨𝐮𝐫𝐜𝐞 𝐭𝐫𝐮𝐥𝐲 𝐭𝐡𝐞 𝙤𝙣𝙡𝙮 𝐩𝐚𝐭𝐡 𝐭𝐨 𝐢𝐧𝐧𝐨𝐯𝐚𝐭𝐢𝐨𝐧, or will fragmentation stall progress?
- Can proprietary systems coexist, or will they become relics of the past?
- What risks emerge when code is 𝘦𝘷𝘦𝘳𝘺𝘰𝘯𝘦’𝘴 business?

𝐘𝐨𝐮𝐫 𝐭𝐡𝐨𝐮𝐠𝐡𝐭𝐬? 💬 Are we witnessing the rise of a collaborative automotive future, or trading one set of challenges for another? 𝘛𝘢𝘨 𝘢 𝘤𝘰𝘭𝘭𝘦𝘢𝘨𝘶𝘦 𝘸𝘩𝘰’𝘴 𝘥𝘦𝘦𝘱 𝘪𝘯 𝘵𝘦𝘤𝘩, 𝘌𝘝𝘴, 𝘰𝘳 𝘰𝘱𝘦𝘯-𝘴𝘰𝘶𝘳𝘤𝘦 𝘥𝘦𝘣𝘢𝘵𝘦𝘴!

#OpenSource #AutomotiveTech #SoftwareDefinedVehicles #Innovation #SmartCars #FutureOfMobility

P.S. For more on this, check out 𝗘𝗘𝗪𝗢𝗥𝗟𝗗’𝘀 𝗱𝗲𝗲𝗽 𝗱𝗶𝘃𝗲: https://lnkd.in/eRabvyqi and sound off below! Let’s get spicy. 🌶️ 𝐋𝐢𝐤𝐞 | 𝐂𝐨𝐦𝐦𝐞𝐧𝐭 | 𝐑𝐞𝐩𝐨𝐬𝐭 to fuel the conversation! 🔥
Markus Rettstatt’s Post
What if “innovation” isn’t about building faster tech — but rewriting the operating systems entire industries run on? I’ve stopped thinking in terms of products and started thinking in terms of infrastructure. Hospitals don’t fail because they lack software — they fail because their systems don’t talk, think, or adapt. Banks don’t get fined because regulations are unclear — they get fined because compliance is an afterthought, not a built-in function. Aviation, real estate, logistics — they’re all operating on patched-together tech built for yesterday. I’m not here to build apps. I’m here to build industry OS platforms that replace broken systems at the core — one sector at a time. And the most disruptive thing about this vision? Each OS is designed to fund the one behind it. No bloat. No burn. No “wait and see.” It's not SaaS. It's System Architecture as Strategy—and Vyxyntra is quietly building the blueprint. The world doesn’t need another startup. It needs a reset button.
Tech doesn’t disrupt industries overnight. It chips away at the unprepared until momentum flips. By the time it looks like disruption, it’s already too late. Your competitors have built cleaner pipelines, stronger infra, faster shipping cycles. Because it’s not the flashy tools that shift the game, but the compounding effect of small, smart moves made early. Ask yourself: are we building for today, or are we slowly falling behind?
The CTO who said yes to everything built 47 features. The one who said no to 46? She built a billion-dollar product.

Steve Jobs nailed it: "Innovation is saying no to a thousand things." But here's what most tech executives miss: he wasn't talking about features. He was talking about identity. Every yes dilutes who you are. Every no sharpens it.

𝗧𝗵𝗲 𝗕𝗿𝘂𝘁𝗮𝗹 𝗧𝗿𝘂𝘁𝗵 𝗔𝗯𝗼𝘂𝘁 𝗜𝗻𝗻𝗼𝘃𝗮𝘁𝗶𝗼𝗻: Your biggest competitor isn't another company. It's your inability to disappoint people. Think about it. Every transformative product we admire got there by breaking hearts:
- iPhone: No keyboard (BlackBerry users revolted)
- Tesla: No dealerships (entire industry scandalized)
- Netflix: No late fees (Blockbuster laughed... briefly)
Innovation requires the courage to be misunderstood.

𝗪𝗵𝘆 𝗧𝗲𝗰𝗵 𝗟𝗲𝗮𝗱𝗲𝗿𝘀 𝗖𝗮𝗻'𝘁 𝗦𝗮𝘆 𝗡𝗼 (𝗔𝗻𝗱 𝗪𝗵𝗮𝘁 𝗜𝘁 𝗖𝗼𝘀𝘁𝘀):
𝗧𝗵𝗲 𝗦𝗵𝗶𝗻𝘆 𝗢𝗯𝗷𝗲𝗰𝘁 𝗦𝘆𝗻𝗱𝗿𝗼𝗺𝗲: "But what if we miss the next big thing?" You will. That's the point. While you're chasing everything, someone else is owning something.
𝗧𝗵𝗲 𝗗𝗲𝗺𝗼𝗰𝗿𝗮𝗰𝘆 𝗗𝗲𝗹𝘂𝘀𝗶𝗼𝗻: "We need buy-in from everyone." No. You need clarity of vision. Consensus is where bold ideas go to die.
𝗧𝗵𝗲 𝗙𝗲𝗮𝘁𝘂𝗿𝗲 𝗙𝗮𝗰𝘁𝗼𝗿𝘆 𝗧𝗿𝗮𝗽: "Our competitors have it, so we need it." Your competitors are drowning in technical debt. Why join them?

𝗕𝗲𝗳𝗼𝗿𝗲 𝘀𝗮𝘆𝗶𝗻𝗴 𝘆𝗲𝘀 𝘁𝗼 𝗮𝗻𝘆𝘁𝗵𝗶𝗻𝗴, 𝗮𝘀𝗸:
1. Does this strengthen our core identity or dilute it? 𝘐𝘧 𝘺𝘰𝘶 𝘩𝘢𝘷𝘦 𝘵𝘰 𝘦𝘹𝘱𝘭𝘢𝘪𝘯 𝘩𝘰𝘸 𝘪𝘵 𝘧𝘪𝘵𝘴, 𝘪𝘵 𝘥𝘰𝘦𝘴𝘯'𝘵.
2. Will our best customers thank us or shrug? 𝘉𝘶𝘪𝘭𝘥 𝘧𝘰𝘳 𝘺𝘰𝘶𝘳 𝘤𝘩𝘢𝘮𝘱𝘪𝘰𝘯𝘴, 𝘯𝘰𝘵 𝘺𝘰𝘶𝘳 𝘤𝘳𝘪𝘵𝘪𝘤𝘴.
3. Can we be #1 in this, or just another player? 𝘚𝘦𝘤𝘰𝘯𝘥 𝘱𝘭𝘢𝘤𝘦 𝘪𝘯 𝘵𝘦𝘤𝘩 𝘪𝘴 𝘧𝘪𝘳𝘴𝘵 𝘱𝘭𝘢𝘤𝘦 𝘪𝘯 𝘪𝘳𝘳𝘦𝘭𝘦𝘷𝘢𝘯𝘤𝘦.
4. What must we sacrifice to do this well? 𝘐𝘧 𝘵𝘩𝘦 𝘢𝘯𝘴𝘸𝘦𝘳 𝘪𝘴 "𝘯𝘰𝘵𝘩𝘪𝘯𝘨," 𝘺𝘰𝘶'𝘳𝘦 𝘭𝘺𝘪𝘯𝘨 𝘵𝘰 𝘺𝘰𝘶𝘳𝘴𝘦𝘭𝘧.
5. Will this matter in 5 years or 5 minutes? 𝘔𝘰𝘴𝘵 "𝘶𝘳𝘨𝘦𝘯𝘵" 𝘪𝘯𝘯𝘰𝘷𝘢𝘵𝘪𝘰𝘯𝘴 𝘢𝘳𝘦 𝘫𝘶𝘴𝘵 𝘍𝘖𝘔𝘖 𝘪𝘯 𝘥𝘪𝘴𝘨𝘶𝘪𝘴𝘦.

𝗧𝗵𝗲 𝗣𝗼𝘄𝗲𝗿 𝗼𝗳 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗰 𝗦𝘂𝗯𝘁𝗿𝗮𝗰𝘁𝗶𝗼𝗻: Here's what happens when you start saying no: Your team stops drowning and starts delivering. Your product stops confusing and starts converting. Your company stops following and starts leading. You stop being everything to everyone. You start being everything to someone. Jobs didn't just say no to features. He said no to entire categories, markets, and opportunities. That's not limitation. That's liberation.

🔄 Share this with a tech leader drowning in their own yes's
➕ Follow Shelly Piper for more unfiltered truths about executive leadership
💎 ‘How might we help teams successfully adopt new tech?’ 💎 There’s one thing that repeatedly comes up as I dig into this topic: the importance of getting buy-in from users, getting people on board, getting frontline team members to weigh in, etc., etc. This is an absolutely crucial element of successful tech adoption and I’ve highlighted it myself. 💯 But something has been niggling at me around this idea…🥴 It’s the thought that sometimes the emphasis is too much on making people *feel* that their perspective is valued and not enough on *actually valuing* their perspective. To be clear, both of these things are obviously important. But, if you have access to it, the perspective of the people who will actually use the tech is gold.💡 We should be listening hard and doing our best to make sure that the tech solutions we provide are addressing users’ concerns and meeting their needs. What do you think? Am I overthinking this, or can the call for genba perspective be a bit inauthentic at times?
Innovating in Uncertain Times: Lessons from 2022
- This past year we’ve seen tech giants fall due to mismanagement, recklessness, economic turmoil, a lack of innovation, or some combination of unforeseen circumstances.
- Here are five lessons that the tech world has learned this year about disruption, innovation and constant change. https://lnkd.in/gkgAJ5Z2
🔷 𝐁𝐮𝐲𝐢𝐧𝐠 𝐬𝐨𝐟𝐭𝐰𝐚𝐫𝐞 𝐟𝐫𝐨𝐦 𝐚 𝐯𝐞𝐧𝐝𝐨𝐫 𝐢𝐬 𝐧𝐨𝐭 𝐚 𝐬𝐭𝐫𝐚𝐭𝐞𝐠𝐲
*𝘢 𝘴𝘮𝘢𝘭𝘭 𝘳𝘢𝘯𝘵, 𝘣𝘰𝘳𝘯 𝘰𝘶𝘵 𝘰𝘧 𝘵𝘰𝘰 𝘮𝘢𝘯𝘺 “𝘈𝘐 𝘴𝘵𝘢𝘤𝘬” 𝘴𝘭𝘪𝘥𝘦𝘴*

One thing that’s stayed true throughout my career in technology: relying heavily on external vendors is not a strategy. It’s a dependency that always shows its limits sooner or later.

I’ve been seeing more people talk about building their product or business function (entirely) on top of vibe coding and similar tools. It sounds clever at first. But these tools are built on the infrastructure of large model providers (OpenAI, Anthropic, and the like), and every token they process costs money. The moment usage grows, so do the bills. For effective LLM usage, context is everything, and context is expensive: the better your model understands, the more tokens it consumes. So tools that seem magical at first often start cutting corners later to save costs. That’s when things begin to break, and why the bigger your codebase, the more "spaghetti" it becomes.

Even these companies themselves are in a fragile position. Their funding rounds look impressive, but compared to the giants they depend on, it’s a rounding error. Some might try to build their own models, or lean on open-source alternatives to create a platform sufficient for their specific task, and maybe it works for a while. But it rarely changes the fundamentals: they still don’t own the stack. If I were an investor in some of these companies, I’d probably just buy NVIDIA or AMD shares instead.

The enterprise segment faces the same illusion at a different scale. Many have replaced "vendor lock-in" with "model lock-in," and entire internal roadmaps now depend on whichever LLM provider their pilots started with. It may look like “AI adoption,” but it’s often just another layer of dependency.

Sustainable AI strategy means owning part(s) of the stack: data pipelines, evaluation frameworks, orchestration layers, and enough internal capability to switch providers when needed. Otherwise, your innovation budget just turns into someone else’s recurring revenue.

I’ve seen this pattern many times before, long before the AI boom: companies that mistake buying tools for building capability. It feels efficient in the beginning. But when complexity rises, the cracks appear.

#ai #softwareengineering #productstrategy #scalability #startups #vibecoding #techbusiness #nordictech #tech #llm
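The "enough internal capability to switch providers" point can be made concrete with a thin orchestration layer that talks to any model through one internal interface. The sketch below is purely illustrative: every class name is hypothetical, and `EchoProvider` stands in for a real vendor adapter that would wrap an actual SDK.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Completion:
    text: str
    tokens_used: int  # track spend per provider, per call


class LLMProvider(ABC):
    """Anything that can turn a prompt into a completion."""

    @abstractmethod
    def complete(self, prompt: str) -> Completion: ...


class EchoProvider(LLMProvider):
    """Stand-in for a real adapter (hosted API, open-source model, ...)."""

    def complete(self, prompt: str) -> Completion:
        # A real adapter would call a vendor SDK here.
        return Completion(text=f"echo: {prompt}",
                          tokens_used=len(prompt.split()))


class Orchestrator:
    """The layer you own: routing, evaluation, cost accounting.

    Providers are swappable construction-time details.
    """

    def __init__(self, provider: LLMProvider):
        self.provider = provider

    def run(self, prompt: str) -> str:
        return self.provider.complete(prompt).text


# Switching providers is one constructor argument, not a roadmap rewrite:
bot = Orchestrator(EchoProvider())
print(bot.run("hello world"))  # -> echo: hello world
```

The point of the pattern is that roadmaps depend on the `LLMProvider` contract you control, not on whichever vendor the pilot happened to start with.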
I'd flip part of this: Most enterprises aren't locked into models—they're locked out of competency. The risk isn't using Claude or GPT-5. Well-built agent workflows are increasingly model-agnostic. The risk is treating AI as a vendor product instead of a capability to develop and automate. The real lock-in? Proprietary platforms (looking at you, Microsoft Agent Framework) where your business logic lives in someone else's abstraction layer. Token costs matter at scale, but right now the learning curve matters more. Companies building AI workflows today—even with expensive models—are developing the organizational muscle to capitalize on the next wave. If you're spending tokens but building internal understanding of where AI works (and doesn't) in your workflows, you're investing. If you're just paying a vendor to make the magic happen in a black box, you're renting. And yes, I'm saying this as a consultant—but the ones creating lasting value help clients build capability, not dependency. One approach leaves you ready for the next wave. The other leaves you waiting for permission to innovate.
-
-
Everyone says “hardware is hard.” Turns out, software just burns more cash to get worse multiples. Venture is in its IBM era: risk-off, narrative-obsessed, allergic to contrarian bets. There’s a pocket of hardware founders who raise less, build real things, and will double your multiple. VCs, stop funding automation graveyards; the margin is in the service layer. Deep tech investing isn't broken, just mispriced. Grant Gregory and Will Quist explore what the data actually says: https://lnkd.in/gQNFpbs8
How to Take a Technology From Concept to Real-World Application: Turning a great idea into something real is a thrilling challenge today. The technology development process needs careful planning and smart action at every step. The path from idea to market involves many key stages, and each one needs special knowledge and a clear plan to beat challenges. Those who succeed know it's not just about the tech: it's about understanding the market, what users need, and having a solid…
Markus Rettstatt, as always a terrific post and perspective! I would like to point out that when using OSS, a robust, safety-certified base like QNX remains critical for success in automotive. QNX's microkernel architecture isolates mission-critical processes from non-critical ones, delivering the deterministic performance and reliability demanded by ISO 26262. By layering open-source frameworks atop QNX, automakers can rapidly innovate while preserving a proven safety foundation. Skipping this hybrid approach poses real dangers: unknown code provenance, insufficient real-time guarantees, and vulnerabilities creeping in from massive open-source codebases. In a domain where even minor software failures can threaten lives, such risks are simply unacceptable. One major OSS failure could ruin the party for everyone. Red Hat is already trying to push Linux beyond where it should be. As vehicles become ever more connected and software-defined, it’s imperative we don’t compromise on the core architecture that protects lives. In short, QNX lets the open-source revolution thrive without sacrificing the rock-solid reliability the automotive world demands.
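To illustrate what "isolating mission-critical processes from non-critical ones" buys in practice, here is a toy, Unix-only Python sketch of the fault-containment idea: a fatal fault in a non-critical process does not take the critical side down. To be clear, this is only the pattern; QNX's actual isolation comes from its microkernel and message-passing IPC (MsgSend/MsgReceive), not from `os.fork`, and nothing here is safety-certified.

```python
import os
import signal


def run_noncritical() -> None:
    """Non-critical workload: simulate a fatal fault (e.g. a crash bug)."""
    os.kill(os.getpid(), signal.SIGABRT)


def critical_supervisor() -> bool:
    """Critical side: run the non-critical work in a separate process.

    Returns True if the child died from a signal while we survived.
    Unix-only (uses os.fork).
    """
    pid = os.fork()
    if pid == 0:
        run_noncritical()
        os._exit(0)  # not reached: the child dies on SIGABRT
    # The crash is contained in the child; we just observe and carry on.
    _, status = os.waitpid(pid, 0)
    return os.WIFSIGNALED(status)


if __name__ == "__main__":
    print("child crashed, supervisor alive:", critical_supervisor())
```

The same separation argument is why, in the hybrid approach described above, large open-source codebases are kept out of the address space and scheduling domain of the safety-critical components.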