Python can finally use all your CPU cores. For years, the Global Interpreter Lock (GIL) was Python's biggest limitation for CPU-bound tasks. Even with 8 cores, only one thread could truly execute Python code at a time; the others just waited their turn. Now, with Python 3.14, that changes. The new free-threaded (no-GIL) interpreter finally lets multiple threads run Python code simultaneously, even for CPU-heavy workloads. So what actually changed inside CPython to make this possible? Let's look at the differences.

Object lifetime:
• Old: One thread ran at a time, so refcounts were safe by default.
• New: Refcount updates are made thread-safe with biased reference counting (a fast path for the owning thread, an atomic path for others). Common objects like None and True are immortal, so their refcounts are never touched at all.

Locking strategy:
• Old: One giant GIL for everything.
• New: Many tiny locks. Each subsystem guards itself (type caches, allocators, GC), so threads finally run side by side.

Garbage collection:
• Old: Stop the world, clean up, resume.
• New: The free-threaded build uses a simplified, non-generational cycle collector. It still briefly pauses threads, but the pauses are designed to stay short.

Interpreter state:
• Old: Shared global state (builtins, modules, caches) all tangled together.
• New: Each interpreter has isolated state. Subinterpreters can run truly in parallel.

C extensions:
• Old: Every extension assumed the GIL existed.
• New: A new free-threaded C API plus atomic helpers let extensions declare themselves thread-safe again.

Performance:
• Old: Effectively one thread per process, no matter how many cores.
• New: Slightly slower single-threaded, but real parallel speedups for CPU-bound workloads.

Now, Python can finally breathe across all cores. — PyCodeTech #Python
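A quick way to see which of the two worlds you are in: the free-threaded build advertises itself through a build-time config variable, and on 3.13+ `sys._is_gil_enabled()` reports whether the GIL is currently active (an incompatible C extension can re-enable it at runtime). A minimal sketch that runs on any recent CPython:

```python
import sys
import sysconfig

# Py_GIL_DISABLED is 1 on free-threaded builds (3.13+), 0 or None otherwise.
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# sys._is_gil_enabled() exists only on 3.13+; older interpreters always
# run with the GIL, so fall back to True there.
gil_active = getattr(sys, "_is_gil_enabled", lambda: True)()

print(f"free-threaded build:  {free_threaded_build}")
print(f"GIL currently active: {gil_active}")
```

On a standard build this prints `False` / `True`; on `python3.14t` with no GIL-requiring extensions loaded it prints `True` / `False`.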
How Python 3.14's new features boost CPU performance
More Relevant Posts
🚀 Python 3.14 - The new free-threading model and why it matters 🚀

If you've been writing Python for a while, you've probably bumped into the limitations of the Global Interpreter Lock (GIL). The GIL means that even on a multi-core machine, threads in one Python process can't execute Python bytecode truly in parallel: only one thread runs at a time. With Python 3.14, the "free-threaded" or "no-GIL" build is officially supported. That means you can opt into a version of CPython where the GIL is disabled and threads can truly run in parallel across multiple CPU cores.

⚠️ What's the GIL?
In previous Python versions, the Global Interpreter Lock (GIL) ensured only one thread could really execute Python bytecode at a time, so even on multi-core hardware, threads couldn't fully run in parallel.

💡 What changes with free-threading?
- Threads can now truly run in parallel on multiple cores when using a free-threaded build of Python (python3.14t).
- This opens up real gains for CPU-bound, multithreaded Python workloads.
- Existing Python libraries written in a thread-safe way should work without modification and use all CPU cores.
- A C extension that has not been explicitly marked free-thread-safe will re-enable the GIL for the lifetime of that process.

🔍 Bottom line
If your Python apps care about multi-core performance or threading, this update is worth watching (or even experimenting with). It's a strong signal that Python is leveling up its concurrency game, making it easier for developers to build more scalable, high-performance systems.

#Python #Python314 #Concurrency #Multithreading #GIL #SoftwareEngineering #DevCommunity
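To feel the difference yourself, time the same CPU-bound function run sequentially and across threads. On a GIL build the threaded run is no faster; on python3.14t it should scale toward your core count. A small benchmark sketch (the workload size and thread count here are arbitrary choices):

```python
import threading
import time

def cpu_task(n=500_000):
    # Pure-Python arithmetic: CPU-bound, no I/O to release the GIL around.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def sequential(k=4):
    for _ in range(k):
        cpu_task()

def threaded(k=4):
    threads = [threading.Thread(target=cpu_task) for _ in range(k)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

seq = timed(sequential)
par = timed(threaded)
print(f"sequential: {seq:.3f}s  threaded: {par:.3f}s")
# On a GIL build the two times are comparable; on a free-threaded build
# the threaded run should approach seq / min(k, cores).
```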
Python 3.14: Finally, You Can Disable the GIL!

Big news for Python devs: Python 3.14 lets you turn off the Global Interpreter Lock (GIL), a historic step for the language.

---

What's the GIL?
The Global Interpreter Lock (GIL) prevents true multi-threading in standard Python: even with multiple threads, only one executes Python code at a time. It has long been a pain for devs building high-performance or parallel apps.

What's new in Python 3.14?
• You can now run Python without the GIL!
• Multiple threads can finally run real Python code in parallel on multiple CPU cores.

Which means...
• Multi-threaded code (e.g., concurrent web servers, data crunching, agent apps) gets a major speedup, with no more C extensions or hacks needed.
• You can make better use of multi-core hardware, just like Java, C++, and Go.

---

How to use it (very simply):
• With Python 3.14 the default interpreter build remains the traditional GIL-enabled version, so existing Python code and libraries work as before.
• If you're working on new parallel or CPU-bound threading workloads, you can optionally install or build the free-threaded (GIL-disabled) version of Python.

Caveats: Not all third-party libraries are fully compatible with the GIL-free build yet. Single-threaded workloads may also run slightly slower in this build, so the benefit is primarily for multi-threaded, core-saturating tasks.

---

Overall: Python 3.14 lets you choose: classic simplicity or full-power concurrency. It makes Python more future-proof for fast, modern applications.

♻️ Share it with your network if you find it useful, and follow Mayank Sultania for more practical AI tips. Video by: DailyDoseofDS.com

#Python #Concurrency #GIL #Python314 #Developers #Performance
What is a Static Method in Python? 👇

A static method is a method that belongs to a class but does not need access to the instance (self) or the class itself (cls).

🧩 It is created using the @staticmethod decorator.
🧩 Belongs to the class, not to instances: it's defined inside a class but doesn't depend on object data, and is shared across all instances.
🧩 Does not take self or cls as the first argument: since it doesn't work with instance or class attributes, no self or cls parameter is needed.
🧩 Can be called using the class name or an object: both ClassName.method() and object.method() work.
🧩 Acts like a normal function inside a class: it behaves like a regular function but is grouped logically under a class.
🧩 Can be called without creating an object: you can use it directly via the class, with no need for object = Class() first.

⚙️ E.g.:

class Car:
    @staticmethod
    def is_valid_license(license_number):
        return len(license_number) == 10

print(Car.is_valid_license("MH12AB1234"))  # True
print(Car.is_valid_license("ABC"))         # False

#Python #CodingTips #ObjectOrientedProgramming #PythonDevelopers #LearnPython #SoftwareEngineering
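The claim that both call forms work can be verified directly; this sketch reuses the Car example from the post:

```python
class Car:
    @staticmethod
    def is_valid_license(license_number):
        # Plausibility check only: real license formats vary by region.
        return len(license_number) == 10

# Called via the class: no instance required.
print(Car.is_valid_license("MH12AB1234"))  # True

# Called via an instance: no self is passed in, and it dispatches
# to the very same underlying function.
car = Car()
print(car.is_valid_license("ABC"))  # False
```

Because @staticmethod skips the usual bound-method machinery, `Car.is_valid_license` and `car.is_valid_license` are literally the same function object.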
🚀 Beyond the GIL: Python 3.14's Hidden Performance & DevX Upgrades

The core challenge for Python has always been true multi-core utilization. While the optional "free-threaded" build addresses this for CPU-bound work, these other features provide direct, immediate performance and quality-of-life wins today:

Native Subinterpreters (PEP 734): This is the true multi-core parallelism engine. It allows multiple, isolated Python interpreters to run in parallel within a single process. The shortcut: use the new concurrent.interpreters module for parallel processing with less memory overhead than traditional multiprocessing. Think isolated, concurrent ML model serving.

Template Strings (T-Strings, PEP 750): A new string literal type that evaluates its interpolations eagerly but defers combining them into a final string, handing your code a structured Template object instead. Actionable tip: use t-strings for securely generating SQL queries or configuration files, drastically reducing the risk of injection attacks compared to raw f-strings, because the rendering function can escape each interpolated value before assembling the result.

Incremental Garbage Collector: Say goodbye to those annoying, split-second application freezes! The new GC works in small, quick steps, resulting in much smoother latency for web servers and GUIs.

This isn't just a new Python version; it's a re-architecture for the multi-core, high-concurrency world. Which one of these 3.14 features will have the biggest impact on your current project's system design? Share your thoughts! 👇

#SystemDesign #AI #Tech #CloudComputing #DeveloperTools #LearningJourney
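A minimal sketch of the PEP 734 subinterpreter API described above, guarded so it degrades gracefully on interpreters older than 3.14 (where the module does not exist):

```python
import sys

def run_isolated(code: str) -> None:
    """Run `code` in a freshly created, isolated subinterpreter."""
    if sys.version_info < (3, 14):
        print("concurrent.interpreters requires Python 3.14+")
        return
    from concurrent import interpreters  # new in 3.14 (PEP 734)

    interp = interpreters.create()       # own modules, builtins, state
    try:
        interp.exec(code)                # blocks until the code finishes
    finally:
        interp.close()

run_isolated("print('hello from an isolated interpreter')")
```

For actual parallelism you would run several of these from worker threads; each subinterpreter maintains its own isolated state, which is what allows them to proceed independently.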
🎥 Python just removed the GIL after 33 years. It will change everything for multi-threaded code.

Python 3.14 ships with a free-threaded version that unlocks true multi-core performance. No more single-threaded bottlenecks. No more heavy multi-processing overhead.

(🎥 Full breakdown with code examples in my new video, link in comments 👇🏽)

The backstory
Python was created in 1991. Multi-core processors didn't arrive until 2001 (IBM's POWER4). When threading support was added in 1992, the Global Interpreter Lock (GIL) kept things simple and thread-safe. The GIL is a single lock that allows only one thread to execute Python bytecode at a time. Launch 10 threads or 1 thread? Same performance on a 16-core machine.

Why we kept it
👉🏽 Low overhead for single-threaded programs
👉🏽 Simple memory management (reference counting)
👉🏽 Compatibility with non-thread-safe C libraries
👉🏽 Easy to implement (one lock vs fine-grained locking)

The problem
Your multi-threaded code runs sequentially: Thread 1 executes, releases the GIL, Thread 2 executes, releases the GIL. Zero parallelism. You had to use multi-processing, with massive memory overhead and separate interpreters.

Python 3.14 changes the game
Two versions ship together:
- Standard Python (with GIL)
- Free-threaded Python (no GIL)
Same code. Actual parallelism. Multi-core performance with threading.

How to try it
Install with uv in seconds:
- uv python install 3.14t
Run your code on the free-threaded build:
- No GIL: PYTHON_GIL=0 python script.py
- With GIL: PYTHON_GIL=1 python script.py
That's it. Same threading code, multi-core performance unlocked.

Perfect for
- Data processing pipelines
- Scientific computing
- Image/video processing
- Any CPU-bound workload

After 30+ years, Python finally leverages all your cores with threading. No more choosing between simple threading and heavy multi-processing.

Are you using CPU-intensive tasks in Python? This is your performance unlock.

#python #ai #performance #opensource
Python's runtime is slow. The interpreter plus the single-threaded design impose a hard ceiling: bytecode is executed by the interpreter, not directly by the CPU. Performance "fixes" like Cython help by sprinkling C into Python, but they never remove the core limitations.

Python is fast to ship. Developer time beats runtime speed. A project that takes hours or days to wrestle through Rust's borrow checker or C++'s complexity can be delivered in Python in minutes. For most workloads, delivery velocity matters more than peak performance.

Execution speed is often irrelevant. In ML, the heavy lifting happens in C/CUDA; Python just orchestrates. For automation, data tasks, bots, and internal tooling, Python is "fast enough." End users rarely notice the runtime loss.

Static typing enables codebase sprawl. TypeScript showed what happens when you make a scripting language scalable: people use it to build systems it wasn't designed for. Optional Python typing risks the same trap.

Practical boundary:
• Use Python when velocity and flexibility matter
• Use Rust/C++ when latency, throughput, and predictability matter

Optimizing Python for raw speed is usually wasted effort. The win is rapid iteration, not theoretical maximum throughput. What do you think: does Python belong in production web servers?
🔥 Python 3.14 introduces true multithreading — the GIL is finally optional!

After decades of limitation, Python 3.14 now ships with a free-threaded (no-GIL) build, officially enabling parallel execution of Python threads across multiple CPU cores.

🧠 What This Means
The Global Interpreter Lock (GIL) prevented multiple threads from executing Python bytecode at the same time, effectively serializing all CPU-bound code. With the new build:
• The GIL is removed
• Reference counting is made thread-safe (biased reference counting plus immortal objects)
• Multiple threads can run concurrently and in parallel

⚙️ Quick Example

import threading

def cpu_heavy():
    sum(range(50_000_000))

threads = [threading.Thread(target=cpu_heavy) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

🧩 In Python ≤ 3.13: only one thread executes at a time.
🚀 In Python 3.14 (no-GIL build): all 8 threads run in true parallelism.

📈 Key Takeaways
• ✅ True multithreading: threads run on different cores
• ⚙️ Optional build: the GIL build is still the default; use the free-threaded build (configured with --disable-gil, installed as python3.14t) for free-threaded mode
• 🧩 Extension updates needed: libraries like NumPy and pandas need thread-safe adjustments
• ⚡ Performance: slight single-thread overhead, massive gains for multi-core workloads

🔮 What's Next
This is officially Phase II of Python's no-GIL adoption. The community will refine it further before it becomes the default build. Once that happens, Python moves into the same parallel-performance league as Java and C++.

🐍 Python 3.14 isn't just another release; it's the start of Python's true multithreading era.

#Python #NoGIL #Python314 #Multithreading #Concurrency #AI #MachineLearning #ParallelComputing #Developers #Tech
'Long live the GIL, you will be missed'

The option to disable the GIL in Python 3.14 is a potential game-changer for performance. On the free-threaded build (python3.14t), you can control the GIL with the `-X gil=0` option or the `PYTHON_GIL=0` environment variable.

🔒 With the GIL
Think of the GIL as a master key for your program. Only one thread can hold this key to execute Python code at any given time. This creates a bottleneck for CPU-bound tasks, even on multi-core processors.

🚀 Without the GIL (free-threaded build)
The "master key" is gone. Multiple threads can now execute Python code on separate CPU cores simultaneously. This provides true parallelism and can significantly speed up your CPU-bound code.

The Big Catch: Race Conditions
The GIL inadvertently protected us from many concurrency bugs (though even with the GIL, a statement like n += 1 could be interrupted between bytecodes). Without it, we are fully responsible for ensuring thread safety.

This code is NOT safe in no-GIL mode:

import threading

n = 0  # shared counter

def increment():
    global n
    # This looks like one step, but it's three:
    # 1. Read the value of n
    # 2. Add 1 to the value
    # 3. Write the new value back to n
    n += 1

# Two threads will race to update n, and updates will be lost.
# The final result can potentially be incorrect.

✅ Fix: Use a Lock
You must explicitly protect shared data with a `threading.Lock` to make the update atomic, meaning uninterruptible.

import threading

n = 0
lock = threading.Lock()  # Lock to protect shared data

def safe_increment():
    global n
    with lock:  # Only one thread can be in this block at a time
        n += 1

#Python #GIL #Performance #Concurrency
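The Lock fix above can be stress-tested: with the lock held around every increment, the final count is exact no matter how the threads interleave, on GIL and no-GIL builds alike. A runnable sketch (the thread and iteration counts are arbitrary choices):

```python
import threading

N_THREADS = 8
N_INCREMENTS = 10_000

n = 0
lock = threading.Lock()

def safe_increment():
    global n
    for _ in range(N_INCREMENTS):
        with lock:  # serialize the read-modify-write sequence
            n += 1

threads = [threading.Thread(target=safe_increment) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(n)  # 80000: no lost updates
```

Remove the `with lock:` and the count can come up short, because two threads sometimes read the same old value of n before either writes back.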
How to build a quick SYN scanner with Scapy (Python)

Scapy gives full control (it constructs raw packets). Must run as root. Example Python code:

#!/usr/bin/env python3
# scapy_syn_scan.py — simple SYN scanner (educational)
import sys
from scapy.all import IP, TCP, send, sr1, conf

conf.verb = 0

def syn_scan(target, start=1, end=1024, timeout=1):
    open_ports = []
    for port in range(start, end + 1):
        ip = IP(dst=target)
        syn = TCP(dport=port, flags="S")
        resp = sr1(ip / syn, timeout=timeout)
        if resp is None:
            # no response (filtered or dropped)
            continue
        if resp.haslayer(TCP):
            if resp[TCP].flags == 0x12:  # SYN+ACK
                open_ports.append(port)
                # send RST to tear down the half-open connection;
                # no reply is expected, so use send() rather than sr1()
                rst = TCP(dport=port, flags="R", seq=resp.ack)
                send(ip / rst)
    return open_ports

if __name__ == "__main__":
    if len(sys.argv) < 4:
        print("Usage: sudo python3 scapy_syn_scan.py <target> <start> <end>")
        sys.exit(1)
    target = sys.argv[1]
    start = int(sys.argv[2])
    end = int(sys.argv[3])
    print(f"Scanning {target} ports {start}-{end}")
    ports = syn_scan(target, start, end)
    print("Open ports:", ports)

NOTE: This is for educational purposes only.
The Secret Life of Python: The Integer Cache - Why Small Numbers Share Identity

Timothy was debugging a puzzling issue when he called Margaret over. "Look at this," he said, pointing at his terminal. "These two comparisons should behave the same way, but they don't."

# Small numbers
a = 256
b = 256
print(a is b)  # True

# Larger numbers (entered line by line in the REPL)
x = 257
y = 257
print(x is y)  # False - Wait, what?!

(Note: you'll see False when the statements are entered separately in the interactive interpreter. Inside a single script or function, CPython may fold equal constants into one object and print True.)

Margaret smiled. "You've discovered Python's integer cache: CPython preallocates the integers from -5 to 256 and reuses them everywhere. Welcome to one of Python's most surprising optimizations - and a perfect lesson in the difference between identity and equality."

"First," Margaret said, "let's be crystal clear about what is actually checks."

def demonstrate_identity_vs_equality():
    """
    == checks VALUE equality (are contents the same?)
    is checks IDENTITY (are they the same object in memory?)
    """
    # These are always equal in value
    a = 257
    b = 257
    print(f"a == b: {a == b}")  # True - same value
    print(f"a is b: {a is b}")  # may be False - distinct objects outside the cache

    # Check their memory addresses
    print(f"id(a): {id(a)}")

https://lnkd.in/gwXMMZ-f
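To make the demonstration robust against constant folding, construct the larger integers at runtime. This sketch assumes CPython: the cache of -5..256 is an implementation detail, not a language guarantee, so `is` on ints should never be relied on in real code.

```python
# Small ints in [-5, 256] are preallocated singletons in CPython.
a = 256
b = 256
print(a is b)  # True: both names refer to the cached 256 object

# int() at runtime creates fresh objects for values outside the cache,
# sidestepping the constant folding that can make `257 is 257` come out
# True inside a single script or function.
x = int("257")
y = int("257")
print(x == y)  # True: equal values
print(x is y)  # False: distinct objects

# The cached small int is reused even for values computed at runtime:
print(int("256") is a)  # True
```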