Saturday, 17 August 2024 marked an important date for operators of #CriticalInfrastructure in Australia - the compliance deadline for their #CyberSecurity framework. Under the #SOCI Rules (LIN 23/006) 2023, if you are an operator of critical infrastructure in Australia, you are required to establish and maintain compliance with a cyber security framework. The rules in LIN 23/006 (dated 16 February 2023) commenced 6 months after being made (17 August 2023), then allowed 12 months for responsible entities to become compliant. These rules cover operators of 13 types of critical infrastructure assets: broadcasting, domain name system, data storage or processing, electricity, energy market operator, gas, hospital, food and grocery, freight infrastructure, freight services, liquid fuel, financial market infrastructure, and water. Operators of these assets are required to maintain one of the following Critical Infrastructure Risk Management Program (#CIRMP) frameworks:
🛡 ISO 27001
🛡 ASD Essential 8
🛡 Framework for Improving Critical Infrastructure Cybersecurity (US NIST)
🛡 CMMC (US DoD)
🛡 AESCSF Framework Core (AEMO)
A reminder too that CIRMP annual reports for the 2023-24 Australian financial year are due by 28 September 2024!
IT Infrastructure Upgrades
-
Europe just defined how AI must be secured On 15 Jan, the European Telecommunications Standards Institute (ETSI) published a standard, EN 304 223, defining baseline cybersecurity requirements for AI models and systems. ➡️ A common set of AI cybersecurity controls, usable across jurisdictions, vendors, supply chains. Why this matters now Traditional cybersecurity was built for software & networks. AI changes the attack surface: ▫️ training data can be poisoned ▫️ models can be manipulated or obfuscated ▫️ prompts can be indirectly injected ▫️ behaviour can drift in invisible ways ➡️ EN 304 223 explicitly names these risks, treating them as security failures. How this takes effect EN 304 223 is already being pulled into procurement processes, security questionnaires, internal audits, vendor due diligence, insurance reviews. With the EU AI Act, high-risk AI systems will need to demonstrate compliance through conformity assessment either via internal control with robust technical documentation, or through assessment by a notified body. ➡️ EN 304 223 is the operational “how” that law and auditors will rely on. The real breakthrough: lifecycle security The standard defines 13 principles and 72 trackable requirements, organised across 5 phases of the AI system lifecycle: 1️⃣ secure design 2️⃣ secure development 3️⃣ secure deployment 4️⃣ secure maintenance 5️⃣ secure end of life ➡️ Retraining a model = redeploying a system from a security standpoint. AI security becomes a continuous operational discipline. Accountability made operational EN 304 223 assigns accountability across 3 technical roles: ✔️ developers ✔️ system operators ✔️ data custodians ➡️ AI risk lives between teams. This standard makes ownership explicit. The target: production AI EN 304 223 applies to deep neural networks and GenAI models already embedded in products, services, and operational decisions. Academic or research environments are excluded. 
➡️ This standard is about AI that is live, scaled, and consequential, particularly in finance, healthcare, and critical infrastructure. What “compliance” means Complying with legal, audit, procurement, and insurance expectations using EN 304 223 as evidence: mapping controls across the lifecycle and ownership across roles. What Boards and executives should do now 1️⃣ Mandate an AI inventory: What AI is live, where, doing what, using which data pipelines, supplied by whom. 2️⃣ Assign named accountability across the lifecycle: Align to the standard’s role logic per system. 3️⃣ Require an AI security evidence pack per high-impact system, mapped across its lifecycle. 4️⃣ Decide your assurance route early: for high-risk systems, plan for internal control vs. notified-body assessment. The bigger signal The EU is turning AI security into auditable infrastructure. Trustworthy AI is becoming a standard of execution. For companies operating globally, proof of AI security is becoming the baseline. #AI #GenAI #AIGovernance #AISecurity #Boardroom
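The inventory mandate above can be sketched as a minimal record type. This is an illustration only: every field name, role name, and value below is an assumption for the sketch, not taken from the standard's text.

```python
from dataclasses import dataclass

# The five lifecycle phases named in the post's 5-phase model
PHASES = ["design", "development", "deployment", "maintenance", "end_of_life"]

@dataclass
class AISystemRecord:
    """One row of a board-level AI inventory (illustrative fields only)."""
    name: str
    business_function: str
    data_pipelines: list
    supplier: str
    lifecycle_phase: str
    developer: str        # the three accountability roles from the standard's role logic
    system_operator: str
    data_custodian: str

    def missing_owners(self):
        # Flag any accountability role left unassigned
        return [r for r in ("developer", "system_operator", "data_custodian")
                if not getattr(self, r)]

record = AISystemRecord(
    name="credit-scoring-v3",            # hypothetical system
    business_function="loan approvals",
    data_pipelines=["bureau-feed", "transactions-dw"],
    supplier="in-house",
    lifecycle_phase="deployment",
    developer="ml-platform-team",
    system_operator="risk-ops",
    data_custodian="",                   # unassigned -> flagged below
)
print(record.missing_owners())  # ['data_custodian']
```

Even a toy record like this makes the standard's point concrete: ownership gaps become queryable rather than discovered in an audit.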
-
Hotel tech adoption isn’t stuck because hoteliers are scared of spending. In fact, hotels spend many millions every decade on renovations, replacing furniture, fixtures, equipment, and upgrading HVAC systems without blinking. So it’s not about cost aversion. It’s about the perceived return on tech and (more crucially) the pain of getting there. Subscribe to my newsletter for weekly updates: https://lnkd.in/eHP5ida2 Unlike a room refresh, which shows immediate impact on guest satisfaction and average daily rate, tech upgrades are an invisible improvement. And they come with short-term disruption which is extremely real: retraining staff, adjusting workflows, and dealing with the inevitable problems when new systems go live. The gains are long-term and hard to quantify; the pain is immediate and very measurable. I’ve been a hotel manager going through a PMS change, and the disruption is real (even if the system was better). Every team (front desk, housekeeping, reservations, night audit) needs training. Productivity takes a hit. Tempers flare. Guests feel it. And the manager has to hold it all together with a “smile”. Also, B2B software rarely “just works.” And unlike consumer tech, hospitality systems need niche functionality to handle the unique complexity that every hotel seems to have. The tools are built to be feature rich, not user friendly (yes, there are exceptions). In my opinion, change management is the real blocker to hotel tech adoption. We don’t need better sales decks or flashier features. We need tech that’s easier to implement, intuitive to use, and respectful of the chaos that is daily life in a hotel. We need training tools that don’t assume everyone works 9-to-5 from a desk. It’s understandable that hotels favor the furniture upgrade over the software one. One brings immediate revenue.
The other brings disruption with only a promise of future optimization. That is the emotional perspective; the rational one is that technology now runs most of our daily operations. If only we could build systems so easy to use that no training was needed.
-
I’ve been asked this question countless times: "Our data center servers are over 6 years old, consume over 66% of DC energy, but provide only ~7% of compute. How can we consolidate more efficiently?" The reality is staggering: 💡 40% of the world's servers are outdated (6+ years old). ⚡ They consume 66% of data center energy. 📉 Yet, they only deliver 7% of the total compute power! The solution? Modernizing with high-efficiency architectures like AMD EPYC (Turin) 5th Gen ✅ 7:1 Server Consolidation – One EPYC server can replace up to 7 aging Intel Cascade Lake servers. ✅ Up to 192 Zen 5 cores – Maximizing compute density per rack. ✅ Lower Power, Higher Performance – Cutting energy costs while boosting workloads. 💬 Data center consolidation isn’t just about performance—it’s about sustainability and TCO. What are your biggest challenges in DC modernization? Let’s discuss. #Server #Consolidation #Efficiency #Sustainability #AMD #EPYC #Innovation
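The 7:1 claim above translates into simple capacity math. The sketch below uses assumed placeholder wattages (not AMD, Intel, or vendor figures) just to show how the consolidation and energy numbers relate:

```python
# Back-of-the-envelope consolidation math. All inputs are illustrative
# assumptions, not measured or vendor-published values.
old_servers = 700        # aging legacy fleet size (hypothetical)
ratio = 7                # the 7:1 consolidation claim
old_watts = 500          # assumed average draw per legacy server
new_watts = 800          # assumed average draw per denser modern server

new_servers = -(-old_servers // ratio)      # ceiling division: 700 -> 100
old_kw = old_servers * old_watts / 1000     # fleet power before
new_kw = new_servers * new_watts / 1000     # fleet power after
saving_pct = 100 * (1 - new_kw / old_kw)

print(new_servers, round(saving_pct, 1))  # 100 77.1
```

The point of the exercise: even when each replacement server draws more power individually, a 7:1 reduction in server count dominates the energy equation.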
-
🚀 Strengthening Cybersecurity with Zero Trust: Key Highlights from the FY26 Federal Cybersecurity Priorities 🚀 The Office of Management and Budget (OMB) and the Office of the National Cyber Director (ONCD) have released their FY26 Cybersecurity Priorities, focusing on enhancing the Nation's cybersecurity posture through strategic investments and initiatives. Here's a deep dive into the crucial aspects of their Zero Trust strategy: 🔹Modernizing Federal Defenses: The U.S. Government is transitioning towards fully mature Zero Trust architectures. This involves prioritizing technology modernization, implementing encryption and multifactor authentication, and leveraging government-managed cybersecurity shared services. 🔹Increasing Maturity of Information Systems: Agencies are required to submit updated Zero Trust implementation plans within 120 days, documenting current and target maturity levels in each pillar for all high-value assets and high-impact systems. These plans will be reviewed by OMB, ONCD, and CISA. 🔹Reducing Risk and Enhancing Security: Budget submissions must demonstrate how agencies are reducing risks by increasing the maturity of information systems based on the pillars outlined in the Cybersecurity and Infrastructure Security Agency’s (CISA) Zero Trust Maturity Model. Quotes from the Memo: 🔹"Agency investments should lead to demonstrable improvements reflected by agency FISMA reporting or similar metrics." 🔹"Agencies with federated networks should prioritize investments in department-wide, enterprise solutions to the greatest extent practicable in order to further align cybersecurity efforts, ensure consistency across mission areas, and enable information sharing." 🔹"Within 120 days of the date of this memorandum, agencies must submit an updated zero trust implementation plan to OMB and ONCD." 
By aligning with CISA's Zero Trust Maturity Model and leveraging these strategic priorities, federal agencies can significantly enhance their cybersecurity posture, ensuring robust defense mechanisms and resilience against evolving threats. #Cybersecurity #ZeroTrust #Technology #CISA #Innovation #DigitalTransformation
-
Over the weekend, I read Google's paper on how they use AI for internal code migrations—and it’s packed with insights on how to approach legacy system modernization. I’ve attached the paper for those interested, but here’s how I believe some of these strategies can help us tackle complex modernization challenges: 🔎 1. Accelerating Legacy System Modernization Google leverages Large Language Models (LLMs) to automate large-scale code migrations, significantly reducing manual effort and speeding up projects. Applying similar AI-driven approaches can streamline the modernization of legacy systems, cutting through complexity and outdated code. 🔎 2. Combining AI with Proven Engineering Tools By blending LLMs with Abstract Syntax Tree (AST)-based tools, they ensure accuracy and scalability in their code transformations. This hybrid method shows how AI and traditional engineering techniques can work together to deliver safe and reliable modernization. 🔎 3. Reusable Migration Workflows Google created modular, reusable workflows that make onboarding and executing new migration tasks faster and more efficient. Developing similar toolkits for legacy systems could simplify recurring modernization steps and adapt to complex scenarios. 🔎 4. Measuring Success by Business Impact Google focuses on measurable outcomes, like a 50% reduction in project time, rather than just the volume of AI-generated code. This business-aligned metric highlights the importance of demonstrating clear ROI in technology transformation projects. 🔎 5. Safe and Scalable Rollouts Their phased deployment strategy ensures AI-driven changes are rolled out safely, minimizing disruption. Adopting a controlled rollout approach can help manage risks and ensure stability when modernizing critical systems. 🔎 6. Strategic Use of AI Models Google balances using custom fine-tuned models and general-purpose tools depending on the task.
This approach offers valuable insight into when to invest in specialized AI solutions versus using adaptable off-the-shelf models. 📌 The Big Picture: Legacy system modernization is about combining AI-driven efficiency with engineering best practices to deliver faster, safer, and more impactful business transformations. 📎 I’ve attached the paper if you’d like to explore it further! #LegacyModernization #GenAI #BusinessInnovation — Enjoyed this post? Like 👍, comment 💭, or repost ♻️ to share with others.
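To make the AST side of that hybrid concrete, here is a small sketch (not Google's tooling) using Python's built-in ast module: a deterministic, mechanical rewrite of a deprecated call, the kind of step that can be paired with LLM-generated changes. The old_api/new_api names are hypothetical.

```python
import ast  # needs Python 3.9+ for ast.unparse

class RenameCall(ast.NodeTransformer):
    """Mechanically rewrite calls to a deprecated function (hypothetical names)."""
    def visit_Call(self, node):
        self.generic_visit(node)  # rewrite nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "old_api":
            node.func.id = "new_api"
        return node

src = "result = old_api(x, timeout=5)"
tree = RenameCall().visit(ast.parse(src))
print(ast.unparse(tree))  # result = new_api(x, timeout=5)
```

Because the transform operates on the syntax tree rather than on text, it cannot produce syntactically invalid output — which is exactly why AST tools are a good safety net around less predictable LLM edits.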
-
Infrastructure-as-Code is the cleanest path to Compliance-as-Code. Each Terraform module or CloudFormation stack defines a control: encryption, tagging, logging.
- Git repos give us immutable evidence: who changed what, when, and why.
- Policy-as-code gates in CI/CD stop non-compliant resources before they hit prod.
- Automated drift detection alerts when reality drifts from the declared standard.
The payoff? Audits shift from screenshot scavenger hunts to a simple git log. Our DevOps pipelines should be ready to double as our compliance repo. When we treat infrastructure definitions as living controls, we unlock a tamper-proof audit trail. Exactly what future audits will demand. #GRCEngineering
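A minimal policy-as-code gate might look like this sketch. The resource shape and field names are simplified assumptions for illustration, not the real Terraform plan JSON schema:

```python
# Toy CI gate: scan a simplified list of planned resources and block the
# pipeline on missing encryption or required tags. Field names are illustrative.
REQUIRED_TAGS = {"owner", "data-classification"}

def violations(plan_resources):
    found = []
    for r in plan_resources:
        if r.get("type") == "aws_s3_bucket":
            if not r.get("encrypted", False):
                found.append((r["name"], "encryption disabled"))
            missing = REQUIRED_TAGS - set(r.get("tags", {}))
            if missing:
                found.append((r["name"], f"missing tags: {sorted(missing)}"))
    return found

plan = [
    {"type": "aws_s3_bucket", "name": "logs", "encrypted": True,
     "tags": {"owner": "platform", "data-classification": "internal"}},
    {"type": "aws_s3_bucket", "name": "scratch", "encrypted": False, "tags": {}},
]
for name, why in violations(plan):
    print(f"BLOCK {name}: {why}")   # non-empty output -> fail the build
```

In practice teams reach for purpose-built engines (OPA/Rego, Sentinel, Checkov) for this, but the shape is the same: declared controls in, pass/fail evidence out.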
-
Sustainability Integration in Digital Strategy 🌎 As sustainability expectations increase, companies must address both the environmental impact of their digital infrastructure and the role technology plays in driving broader decarbonization across their operations and value chains. BCG proposes a two-pronged approach to guide this integration: Sustainable Tech and Tech for Sustainability. The first focuses on reducing the carbon footprint of IT itself, while the second leverages digital tools to reduce emissions across the business and ecosystem. Key actions to decarbonize IT include measuring emissions from IT operations, optimizing their footprint, and sourcing hardware and services with sustainability criteria in mind. These steps lay the foundation for a greener digital infrastructure. In parallel, technology can be used to advance sustainability across operations. Measuring and optimizing emissions beyond IT, and actively engaging in ecosystem-level collaboration, can help companies drive systemic change using digital enablers. Both dimensions are supported by three strategic phases: defining purpose and vision, setting priorities, and enabling the organization. Together, they provide a clear path for integrating sustainability into digital strategy with structure, accountability, and impact. #sustainability #sustainable #business #esg
-
The major tech companies - Amazon, Google, Meta (Facebook), and Microsoft - invested over $65 billion in CAPEX this quarter (Q3) on cloud and AI infrastructure. Year-to-date spending exceeds $171 billion, setting records for quarterly investment: Amazon: $22.79 billion (+79%), marking a new high. Spending primarily targets AWS and fulfillment. Amazon expects around $75 billion in CAPEX for 2024, with further increases projected for 2025. Google: $13.06 billion (+62%), matching nearly all of 2017’s annual spend in one quarter. Investments focus 60% on servers and 40% on data centers. Meta: $9.2 billion (+36%), slightly below guidance due to timing, with increased spending expected in Q4 and 2025 for infrastructure growth. Microsoft: $20 billion (+79%), equivalent to its full-year 2020 spend, aimed at AI-driven cloud capacity. Microsoft’s enterprise offering, Fabric, now has over 16,000 customers, including 70% of the Fortune 500. Detailed Company Quotes: Amazon: - “We expect to spend approximately $75 billion in CAPEX in 2024. The majority supports AWS’s growing AI demand, alongside infrastructure in North America and internationally. Investments in fulfillment and transportation networks aim to enhance delivery speeds and reduce service costs.” - “Many of these assets, such as data centers, have useful lives of 20 to 30 years.” - "Our AI capacity demand currently exceeds available infrastructure." - "CAPEX growth is particularly driven by generative AI, with anticipated further spending in 2025." Google: - "We expect Q4 CAPEX to match Q3 levels and project further increases in 2025, though not as substantial as from 2023 to 2024." - "In Q3, approximately 60% of CAPEX went to servers, with 40% allocated to data centers and networking equipment."
Meta: - “Our full-year 2024 CAPEX range is now $38-40 billion, slightly up from prior guidance, with significant infrastructure growth anticipated in 2025.” - "The expected increase in Q4 CAPEX will be partly due to server spend and data center investments, with delayed cash outflows from server deliveries appearing in Q4." - “We’re training Llama 4 on a cluster of over 100,000 H100 GPUs—one of the largest known setups.” Microsoft: - “Half of our cloud and AI spending is on long-lived assets supporting monetization over the next 15 years, with the remainder for CPUs and GPUs to meet current demand.” - "Demand, especially for AI inference, continues to exceed capacity." - "We don’t sell raw GPUs externally due to our own high demand and adverse selection in the current market." - "Our Fabric platform now has over 16,000 customers, including 70% of the Fortune 500, with Copilot Stack sitting atop Fabric to provide advanced enterprise infrastructure." #ai #digitalinfrastructure
-
🛡️ The Quantum Clock Is Quietly Ticking: Is Your Financial Infrastructure Ready? The financial industry is built on a foundation of digital trust, currently secured by #cryptographic standards like RSA and ECC. However, the rise of Cryptographically Relevant Quantum Computers (CRQC) poses an existential threat to this foundation. As we navigate this transition, here are 3 key pillars from the latest Mastercard R&D white paper that every financial leader must prioritize: 1. Addressing the 'Harvest Now, Decrypt Later' (HNDL) Threat 📥 Malicious actors are already intercepting and storing sensitive #encrypted data today, intending to decrypt it once powerful quantum computers are available. Financial Use Case: Protecting long-term assets such as credit histories, investment records, and loan documents. Unlike transient transaction data (which uses dynamic cryptograms), this "shelf-life" data requires immediate risk analysis and the adoption of quantum-safe encryption for back-end systems. 2. Quantum Resource Estimation & The 10-Year Horizon ⏳ While a CRQC capable of breaking RSA-2048 in hours might be 10 to 20 years away, the migration process itself will take years. Financial Use Case: Developing Agile Cryptography Plans. Financial institutions should set "action alarms": for instance, once a quantum computer reaches 10,000 qubits, a pre-prepared 10-year migration plan must be triggered to ensure infrastructure is updated before the "meteor strike" occurs. 3. Hybrid Implementations: The Bridge to Security 🌉 The transition won't happen overnight. The paper highlights the importance of Hybrid Key Encapsulation Mechanisms (KEM), which combine classical security with PQC. Financial Use Case: Enhancing TLS 1.3 and OpenSSL 3.5 protocols. By implementing hybrid models now, banks can protect against current quantum threats (like HNDL) while maintaining compatibility with existing classical systems, ensuring a smooth and safe transition.
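The hybrid idea can be sketched at the key-combination step. This is a toy illustration only: the shared secrets are random stand-ins (a real deployment would derive them from, e.g., X25519 and ML-KEM), and the HMAC-based combiner is illustrative, not the actual TLS 1.3 hybrid construction.

```python
import hashlib, hmac, os

def hybrid_session_key(classical_ss: bytes, pqc_ss: bytes, context: bytes) -> bytes:
    """Bind a classical and a post-quantum shared secret into one session key.

    The session key stays safe as long as EITHER input secret remains
    unbroken - the core promise of hybrid schemes. (Toy combiner, for
    illustration; not a standardized construction.)
    """
    return hmac.new(context, classical_ss + pqc_ss, hashlib.sha256).digest()

classical_ss = os.urandom(32)   # stand-in for an ECDH (e.g. X25519) secret
pqc_ss = os.urandom(32)         # stand-in for a PQC KEM (e.g. ML-KEM) secret
key = hybrid_session_key(classical_ss, pqc_ss, b"hybrid-demo")
print(len(key))  # 32
```

The design choice to note: an attacker must recover both underlying secrets to reconstruct the key, which is why hybrids are a credible bridge while PQC algorithms accumulate deployment experience.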
The Bottom Line: A reactive approach is no longer an option. Early adopters who evaluate their data's "time value" and begin the migration today will be the ones to maintain resilience and protect global financial assets tomorrow. #QuantumComputing #PostQuantumCryptography #FinTech #CyberSecurity #DigitalTrust #MastercardResearch