MIT's CMOS technique creates paired chips with identical “fingerprints” that secure hardware authentication without storing cryptographic keys on external servers. https://lnkd.in/g58VmZwf
MIT CMOS Technique Enables Secure Hardware Authentication
-
Protecting data in use is becoming just as critical as securing data at rest or in transit. With AMD EPYC™ processors featuring SEV‑SNP and Trusted I/O, organizations can build hardware‑based Trusted Execution Environments and enable remote attestation—extending security from CPUs to AI accelerators. A powerful step forward for secure, end‑to‑end AI and data protection. Read more - 🔗 https://bit.ly/3PtaCK3 #AMD #AMDEpyc #ConfidentialComputing #SEVSNP #DataSecurity #AIInfrastructure #TogetherWeAdvance
-
Cloudflare: less reliance on the L3 cache layer in the 13th generation.

FL2 is the rebuilt HTTP/TLS/request-routing layer that all edge traffic hits first (CDN, web application firewall, Zero Trust, Workers). Workers AI requests flow through FL2 for termination and routing, then hit V8 isolates or GPUs for inference, making them "downstream" only in sequencing, not isolation. The L3 cache fix primarily unlocks FL2's proxy scaling across 192 Turin cores, with Workers AI gaining indirectly via faster ingress and shared CPU efficiency. FL2 ended the L3 cache crunch.

From Cloudflare's write-up: "L3 cache is the large, last-level cache shared among all CPU cores on the same compute die to store frequently used data. It bridges the gap between slow main memory external to the CPU and the fast but smaller L1 and L2 caches on the CPU, reducing the latency for the CPU to access data. Some may notice that the 9965 has only 2 MB of L3 cache per core, an 83.3% reduction from the 12 MB per core on Gen 12's Genoa-X 9684X. Why trade away the very cache advantage that gave Gen 12 its edge? The answer lies in how our workloads have evolved."

Cloudflare has migrated from FL1 to FL2, a complete rewrite of its request-handling layer in Rust. With the new software stack, the request-processing pipeline has become significantly less dependent on a large L3 cache: "FL2 workloads scale nearly linearly with core count, and the 9965's 192 cores provide a 2x increase in hardware threads over Gen 12."

FL2 sits in the core HTTP/TLS/request-routing path, so it most directly impacts anything that looks like normal HTTP traffic through Cloudflare's edge, especially CDN, WAF, Zero Trust Gateway, and similar proxy-based products. Workers and Workers AI do benefit, but they are a step downstream of FL2 and are not the primary focus of the "ended the L3 cache crunch" discussion.
Workers and Workers AI invoke FL2 indirectly: requests first hit FL2 for TLS termination, routing, and security checks, then fan out to the V8 isolates or GPU schedulers handling compute. The "L3 cache crunch" resolution in FL2 primarily optimizes this high-throughput ingress path for core proxy workloads, where cache contention was most acute due to massive parallelism across 192 cores—less so for the isolate-bound nature of Workers AI.
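A quick sanity check of the per-core cache figures quoted above. The values come straight from the post; the arithmetic is purely illustrative:

```python
# Per-core L3 figures quoted in the post: Gen 12 (Genoa-X 9684X) vs Gen 13 (9965).
gen12_l3_mb_per_core = 12
gen13_l3_mb_per_core = 2

reduction = 1 - gen13_l3_mb_per_core / gen12_l3_mb_per_core
print(f"{reduction:.1%} less L3 per core")  # → 83.3% less L3 per core
```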
-
🚀 Master the Maze: A Guide to VM CPU Performance Troubleshooting

Ever looked at a slow Virtual Machine and felt like you were staring into a black box? 🌀 Performance troubleshooting in a virtualized environment like VMware vSphere is often a puzzle. You aren't just looking at a CPU; you're looking at a complex dance between the Guest OS, the Hypervisor (ESXi), and the physical hardware.

This flowchart is a gold standard for diagnosing CPU contention. It breaks down the noise into four primary "problem" metrics. If your VM is lagging, it's almost certainly hitting one of these four walls.

🛑 The 4 Primary CPU Killers

1. CPU Ready (%RDY)
The most common bottleneck. The VM has work to process, but the hypervisor can't find a physical core to schedule it on.
• The Culprit: Usually CPU overcommit (too many VMs on one host) or strict VM Limits.

2. CPU Co-stop (%CSTP)
This happens specifically with multi-vCPU VMs. If one vCPU runs ahead of the others, the hypervisor pauses it to let the others catch up.
• The Culprit: Often usage disparity or poorly configured vNUMA settings. Sometimes giving a VM too many vCPUs can actually make it slower!

3. CPU Other Wait (%WAIT)
The CPU is idle, but not because it wants to be. It's waiting for something else to finish.
• The Culprit: Look away from the CPU and toward disk latency, network congestion, or active snapshots.

4. CPU Overlap (%OVRLP)
Time the system (VMkernel) spends performing tasks on behalf of the VM.
• The Culprit: High overhead from services like vSAN, NSX, or driver-level issues.

🛠️ The Fixer's Checklist

To move from "Problem" (Red) to "Optimized" (Yellow/Green), check these three areas:
• Configuration: Are your VM Shares and Limits set correctly? Is Hyperthreading enabled at the BIOS level?
• Physical Topology: Does your vNUMA layout match the physical NUMA topology of the server?
• Guest Health: Is the Guest OS power management fighting the ESXi power policy?
💡 Final Thought In virtualization, "more" isn't always "better." Right-sizing your VMs based on actual consumption is the most effective way to keep these metrics in the green. What is your "go-to" metric when a user reports a slow server? Let’s talk shop in the comments! 👇 #VMware #Virtualization #vSphere #CloudInfrastructure #SysAdmin #DevOps #PerformanceTuning #ITOperations #DataCenter
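The four metrics above lend themselves to a simple triage rule. Here is a hypothetical helper that mirrors the flowchart's logic; the threshold values are common rules of thumb, not official VMware limits, and the function name is my own:

```python
# Illustrative triage of esxtop-style CPU percentages per vCPU.
# Thresholds are ballpark warning levels, not VMware-official values.
def triage_vm_cpu(rdy: float, cstp: float, wait: float, ovrlp: float) -> list[str]:
    """Map the four 'problem' metrics to likely causes."""
    findings = []
    if rdy > 5:      # sustained ready time: scheduler can't place the vCPU
        findings.append("CPU Ready: check overcommit ratio and VM Limits")
    if cstp > 3:     # co-stop: oversized multi-vCPU VM or bad vNUMA
        findings.append("Co-stop: right-size vCPUs, review vNUMA layout")
    if wait > 10:    # waiting on non-CPU resources
        findings.append("Wait: inspect disk latency, network, snapshots")
    if ovrlp > 3:    # VMkernel doing work on the VM's behalf
        findings.append("Overlap: check vSAN/NSX overhead, drivers")
    return findings or ["CPU metrics look healthy"]

print(triage_vm_cpu(rdy=12.0, cstp=1.0, wait=2.0, ovrlp=0.5))
```

A VM showing 12% ready time but low co-stop, as in the call above, points at overcommit rather than sizing.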
-
SmartNIC Offload

Modern servers are overloaded — not by applications, but by infrastructure tasks:
• Networking
• Security
• Storage processing
• Encryption
• Virtual switching

All running on the host CPU. SmartNIC / DPU architecture changes that. It offloads complex networking, security, and storage functions from the CPU and standard NIC into a programmable, hardware-accelerated layer.

Result?
✅ CPU dedicated to applications
✅ Lower latency
✅ Hardware-isolated security
✅ Higher workload density
✅ Better performance per server
-
⚠️ Hostinger VPS Support Transcripts Summary

Key Issues Identified:

Problem | Evidence
CPU Steal 94-95% | Multiple top outputs across 11 days showing consistent host-level contention
15+ Days Downtime | Customer reports business losses, clients lost
8+ Agent Transfers | Patrick → Júlia → Nabid → Lukas → Vita → Diana → Multiple others
Contradictory Info | Some agents said "optimize your code", others admitted node overload
Temporary Fixes Only | CPU limit removals lasted hours before throttling returned
Node Migration Delayed | Requested March 15, finally approved March 21 (6 days later)

📝 Reddit Post Draft

Title: ⚠️ WARNING: Hostinger VPS - 15 Days of Downtime, 95% CPU Steal, Lost Clients - My Experience
Subreddit: r/webhosting / r/VPS / r/hostinger

🚨 My Hostinger VPS Nightmare (March 2026)

I'm writing this to save others from the pain I've experienced. I purchased Hostinger's highest-tier VPS plan for my business websites. What followed was 15+ days of complete downtime, lost clients, and the worst support experience of my life.

🔴 The Technical Problem

My VPS showed consistent 94-95% CPU steal time across 11 days of monitoring. For those who don't know: CPU steal means your VM is ready to work, but the physical host is too overloaded to give you CPU time. This is NOT a software issue. This is infrastructure overselling.

Translation: My server was getting 5% of the CPU I paid for.

📊 Timeline of Hell

Date | What Happened
Mar 15 | First reported CPU steal ~94%. Requested node migration.
Mar 15-20 | Transferred between 6+ agents. Each gave different "solutions".
Mar 15 | Agent said: "CPU steal is caused by YOUR high usage" (FALSE)
Mar 21 | Another agent admitted: "Physical node is running at high load average"
Mar 21 | Finally approved migration after 6 days of begging
Mar 26 | Issue STILL occurring on "new" node (95% steal returned)

💬 Support Responses That Made It Worse

Agent 1: "Our VPS services are self-managed. You should optimize your code."
My Response: My bandwidth was at 2.5%. No traffic. How do you optimize code that can't run because the hypervisor won't give it CPU cycles?

Agent 2: "CPU steal is not the cause—it appears because of the CPU limit we applied."
Reality: The steal existed BEFORE limits were applied. They were blaming the symptom, not the disease.

Agent 3: "We don't provide automated node migrations—even for premium VPS plans."
Then 6 days later: "We'd like to migrate your VPS to a less loaded node." Which is it?!

💰 Business Impact

• 2 clients lost due to constant downtime
• 15 days of zero revenue from affected websites
• Reputation damage with existing customers
• Hours spent in live chat instead of running my business

🎯 What They Finally Did (After 6 Days)

✅ Admitted the physical node was overloaded
✅ Approved node migration (1-2 hours downtime)
✅ Removed CPU throttling (temporarily)

But: The issue returned within days. Same 95% steal. Same problems.

⚖️ My Verdict
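If you want to verify steal time yourself rather than eyeball top, you can sample the kernel's tick counters directly. A minimal sketch for a Linux VM, reading the aggregate cpu line of /proc/stat (the 8th tick field is steal); function names are my own:

```python
# Measure CPU steal by sampling /proc/stat twice (Linux guests only).
import time

def read_cpu_ticks():
    """Return (total_ticks, steal_ticks) from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        fields = list(map(int, f.readline().split()[1:]))
    return sum(fields), fields[7]   # steal is the 8th tick counter

def steal_delta_percent(before, after):
    """Steal percentage between two (total, steal) samples."""
    dt = after[0] - before[0]
    return 100.0 * (after[1] - before[1]) / dt if dt else 0.0

if __name__ == "__main__":
    a = read_cpu_ticks()
    time.sleep(1.0)
    b = read_cpu_ticks()
    print(f"CPU steal over 1s: {steal_delta_percent(a, b):.1f}%")
```

A healthy VM should show low single digits here; sustained values anywhere near the 95% described above are host-side contention, not guest load.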
-
VMware's CPU and memory Shares, Reservations, and Limits control resource allocation: Reservations guarantee minimums (e.g., 1 GHz), Limits set maximum caps (e.g., 4 GHz), and Shares define relative priorities during contention (e.g., 2x the shares gets 2x the resources), ensuring critical VMs get resources while preventing runaway VMs from starving others.

Here's a breakdown:

Shares:
• What it is: A relative priority setting, only effective when resources are scarce (contention).
• How it works: If VM1 has 2000 shares and VM2 has 500 shares, VM1 gets four times the CPU/memory of VM2 during a crunch.
• Default: Normal (e.g., 1000 units).
• Use case: Prioritize important VMs like database servers over less critical ones.

Reservations:
• What it is: A guaranteed minimum amount of CPU (MHz) or memory (MB) that a VM will always receive.
• How it works: The ESXi host reserves this physical resource, subtracting it from its total pool. A VM won't power on if its reservation can't be met.
• Use case: Critical VMs (e.g., domain controllers, file servers) that can't tolerate performance dips.

Limits:
• What it is: An absolute upper cap on how much CPU or memory a VM can use, even if more is available.
• How it works: Prevents a single "rogue" VM from consuming all host resources, protecting others.
• Default: Unlimited (i.e., 100% of physical capacity).
• Use case: Containing a VM with unpredictable workloads, such as a test environment, so it can't impact production VMs.

In summary:
• Reserve for guaranteed minimums.
• Limit for maximum caps.
• Share for priority in contested situations.

Expertise: VMware | Omnissa | Microsoft Website: https://www.ITSA.Cloud Channel: https://lnkd.in/g9ZzRzVx Need a Lab? https://lnkd.in/g9DP-nUX
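The interaction of the three settings can be sketched as a toy allocator. This is a deliberately simplified model (reservations first, then the remainder split by shares, capped at limits; capped surplus is not redistributed), not the actual ESXi scheduler:

```python
# Toy model of reservation/shares/limit interaction under contention.
# Simplification: surplus freed by a hit limit is NOT redistributed.
def allocate(capacity_mhz, vms):
    """vms: name -> {'shares': int, 'reservation': MHz, 'limit': MHz}."""
    # Step 1: every VM receives its reservation up front.
    alloc = {name: v["reservation"] for name, v in vms.items()}
    remaining = capacity_mhz - sum(alloc.values())
    total_shares = sum(v["shares"] for v in vms.values())
    # Step 2: split the remainder proportionally to shares, capped by limits.
    for name, v in vms.items():
        extra = remaining * v["shares"] / total_shares
        alloc[name] = min(v["reservation"] + extra, v["limit"])
    return alloc

host_mhz = 6000
vms = {
    "db":   {"shares": 2000, "reservation": 1000, "limit": 4000},
    "test": {"shares": 500,  "reservation": 500,  "limit": 4000},
}
print(allocate(host_mhz, vms))
```

With 4500 MHz left after reservations, "db" earns 4x the extra of "test" (3600 vs 900 MHz), matching the 2000-vs-500 share example above, and its limit then caps it at 4000 MHz.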
-
Day 5 challenge: How to modify an instance/server in AWS

Tags and Name: can be modified at any time.
AMI (Amazon Machine Image): cannot be modified; the AMI fixes the OS image at launch.
Instance Type: can be modified (vCPUs and RAM) while the instance is in the stopped state.
Key Pair: not possible to modify after launch.
Security Group: to reach an application over the internet, allow the port in the security group's inbound rules:
• Custom: allow only specific people via their public IPs.
• Anywhere-IPv4: anyone can access the application over the internet.
• My IP: only my current public IP can access it.
EBS (Elastic Block Store): the volume size can be increased.
  a. After modifying a volume, you need to wait about 6 hours before modifying it again.
  b. To change it sooner, create a new volume and attach/detach it.

#AWS #Day05 #Techcareer #Cloudcomputing #LearningJourney Frontlines EduTech (FLM)
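The stop-modify-start flow for changing an instance type can be sketched with boto3 (the AWS SDK for Python). The instance ID, type, and region below are placeholders; the rules table is just a restatement of the post, and the SDK import is lazy so the helper runs without AWS installed:

```python
# Which instance attributes can be changed after launch (per the notes above).
MODIFIABLE = {
    "tags": True, "name": True,
    "instance_type": True,   # only while the instance is stopped
    "ebs_size": True,        # increase only
    "ami": False, "key_pair": False,
}

def is_modifiable(attribute: str) -> bool:
    return MODIFIABLE.get(attribute, False)

def resize_instance(instance_id: str, new_type: str, region: str = "us-east-1"):
    """Stop the instance, change its type, start it again (placeholder IDs)."""
    import boto3  # lazy import: the rules helper above works without the SDK
    ec2 = boto3.client("ec2", region_name=region)
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id, InstanceType={"Value": new_type})
    ec2.start_instances(InstanceIds=[instance_id])

print(is_modifiable("instance_type"), is_modifiable("ami"))
```

The waiter matters: calling modify_instance_attribute before the instance is fully stopped fails.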
-
SysMain was thrashing an HDD so hard I thought the disk was dying. It wasn't. Windows was just trying to be helpful. 💀

Happy PowerShell Thursday (Part 2)!

I had a machine with a spinning disk and 64 GB of RAM. The disk was pegged at 100% utilization constantly. Task Manager pointed straight at SysMain (the service formerly known as Superfetch). The system was trying to preload applications into memory based on usage patterns, but on a high-RAM machine with a mechanical drive, it was doing more harm than good. Every idle moment turned into aggressive read/write cycles against a disk that couldn't keep up.

The internet's advice? "Just disable SysMain." And for years, that's what people did. But disabling it entirely throws away boot-time prefetching, which actually helps on every hardware profile. The real answer isn't "off" or "on." It's "which mode matches this machine's hardware?"

So I wrote a script that figures it out for you. New addition to the High-Performance Windows Toolkit (now 17 scripts). It checks your disk types and RAM, reads the current SysMain config (handling both the EnableSuperfetch and EnablePrefetcher registry keys, because Windows uses both inconsistently), and recommends the correct mode:

✔️ HDD + 16 GB or more RAM: Mode 2 (boot only). Stops background thrashing.
✔️ SSD or low RAM: Mode 3 (full caching). The default is already optimal.

No blind disabling. It reads your hardware, explains the recommendation, and asks before changing anything.
What's in the Toolkit (17 scripts, 4 tiers):

🟢 Safe Baseline:
✔️ Power plan unlock and DPC latency fix
✔️ Network adapter tuning
✔️ OEM bloatware neutralizer
✔️ System health audit (read-only)

🟡 Advanced Tuning:
✔️ NVMe and GPU PCIe lane diagnostics
✔️ RAID to AHCI migration guide
✔️ Server-grade NVMe driver stack
✔️ WSL/Docker recovery
✔️ SysMain optimization, pagefile fixer, search indexer audit (new)
✔️ Server-mode memory management
✔️ MSI interrupt mode enabler

🟠 Maintenance:
✔️ Visual performance tuner
✔️ Windows debloater and startup killer

👇 Repo link in the comments! 💾 Save this and bookmark the repo. Next time someone says "just disable Superfetch," run the toolkit instead.

#PowerShell #SysAdmin #BuildingInPublic #DamnitRay #OpenSource
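The decision rule the post describes is small enough to state directly. A sketch of that logic only (the actual toolkit script is PowerShell and also reads the registry; this mirrors just the hardware-to-mode mapping, with a function name of my own):

```python
# Prefetch/Superfetch registry values: 0 = disabled, 1 = application launch
# prefetch only, 2 = boot prefetch only, 3 = both (the Windows default).
def recommend_sysmain_mode(has_ssd: bool, ram_gb: int) -> int:
    """Recommend a mode from disk type and installed RAM, per the post."""
    if not has_ssd and ram_gb >= 16:
        return 2   # HDD + plenty of RAM: boot-only, stop the idle thrashing
    return 3       # SSD or low RAM: full caching default is already optimal

print(recommend_sysmain_mode(has_ssd=False, ram_gb=64))  # → 2
```

The 64 GB HDD machine from the story lands on mode 2, which keeps boot-time prefetching while ending the background preloading.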
-
That moment when "Critical" doesn't actually mean critical.

Last week, our server's BIOS threw a "Hardware Health: Critical" alert. Panic mode activated. 🚨

But here's the twist — iLO showed every component working perfectly. No failed drives. No overheating. No dead fans. Nothing.

So what was it? After digging in, the culprit was surprisingly mundane: RAM wasn't evenly populated across CPU channels. That's it. No hardware failure. No catastrophic event. Just mismatched memory slots quietly making the BIOS scream in silence.

The fix? Redistributed RAM symmetrically across both processors — equal channels, balanced load. Alert gone instantly.

Here's the real lesson: Modern servers are obsessive about symmetry. BIOS health algorithms aren't just checking whether hardware exists — they're checking whether it's configured correctly. An asymmetric RAM configuration isn't a failure, but to the BIOS it's an alarm worth shouting about.

Always cross-reference alerts across tools before assuming the worst. iLO ≠ BIOS. They see different layers of truth.

📌 Rule of thumb: When your monitoring tools disagree, trust the one closest to the hardware — but investigate both.

Have you ever chased a "critical" alert that turned out to be a config quirk? Drop it in the comments 👇

#SystemEngineering #ServerManagement #DataCenter #HPE #BIOS #Infrastructure #LessonsLearned #SysAdmin
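The symmetry check itself is simple to express: every CPU should have the same multiset of (channel, DIMM size) pairs. The data shape below is hypothetical, not a real iLO or BIOS API; a sketch of the rule only:

```python
# Check whether DIMM population is identical across all CPUs.
from collections import Counter

def is_symmetric(dimms_by_cpu):
    """dimms_by_cpu: {cpu_name: [(channel, size_gb), ...]}."""
    layouts = [Counter(slots) for slots in dimms_by_cpu.values()]
    return all(layout == layouts[0] for layout in layouts)

balanced = {"CPU1": [("A", 32), ("B", 32)],
            "CPU2": [("A", 32), ("B", 32)]}
skewed   = {"CPU1": [("A", 32), ("B", 32), ("C", 32)],
            "CPU2": [("A", 32)]}
print(is_symmetric(balanced), is_symmetric(skewed))  # → True False
```

The "skewed" layout is the kind of configuration that triggered the "Critical" alert in the story, with every individual DIMM perfectly healthy.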