TNS

PostgreSQL 18 Delivers Significant Performance Gains for OLTP and Analytics

With asynchronous inputs, query optimizations, and expanded indexing functionality, the database is more performant than ever.
May 28th, 2025 6:00am

PostgreSQL 18, planned for release in September, has scores of new features and functions, spanning everything from analytics enhancements to support for external indexes as extensions. The most notable updates to the relational database system, however, are a number of performance gains that could mark a watershed moment for the open source database.

Not only is the latest edition of the engine, currently available as Beta 1, faster than ever before; it is also becoming increasingly applicable to high-volume deployments.

According to PostgreSQL Core Team Member Bruce Momjian, many of these improvements stem from the open source community’s acknowledgement that “more people are using Postgres on huge machines, particularly in the cloud and onsite, where they’re doing huge data warehouse workloads or huge transactional workloads. So, we’ve got bigger and bigger optimizations for those workloads.”

And, as Momjian alluded to, PostgreSQL’s modern use cases not only encompass the traditional OLTP functionality for which it’s known, but also a significant — if not permanent — foray into the realm of data warehousing.

Asynchronous Inputs

The latest edition of the database includes asynchronous inputs (reads), which make it faster and better suited to voluminous transactional and analytical workloads; outputs (writes) remain synchronous. With asynchronous inputs, the engine can perform other work while reads are still in flight. Before version 18, “If we had to read five blocks we would issue the read and get the answer, issue the read and get the answer, and do that five times,” Momjian explained. “With the new code, we can issue five reads altogether and do other things while waiting for those reads to complete.”
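The new behavior is exposed through server configuration. The sketch below assumes the parameter names documented for PostgreSQL 18 Beta 1 (`io_method` and `io_workers`), which could still change before the September release:

```
# postgresql.conf — asynchronous I/O settings (PostgreSQL 18 Beta 1)
io_method = worker    # 'sync' restores the pre-18 behavior;
                      # 'io_uring' is available on Linux builds
                      # compiled with liburing support
io_workers = 3        # background processes that service async reads
```

The `worker` method hands reads to a pool of background processes, so it works on every platform, while `io_uring` uses the Linux kernel's native asynchronous I/O interface.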

When this functionality is applied at scale, it drastically improves the system’s performance for everything from sizable data warehousing jobs to index lookups for transactional systems. These performance advantages are redoubled for deployments in cloud environments, which typically have “much slower I/O,” Momjian said.

“If you’re in a cloud environment where the I/O latency can be more significant than direct attached storage, that’s a bigger win.” PostgreSQL applications in GCP, AWS, or Azure fit into this category for OLTP and data warehousing jobs. According to Momjian, the database “has a huge amount of features for data warehousing. This is just another example of us getting better in that area.”

Query Optimization

The performance benefits of PostgreSQL 18, the first Beta version of which was released this month, are also attributed to a plethora of advancements in its query optimizer. The engine can now remove unnecessary steps when processing queries, which increases its speed and efficiency. By eliminating extraneous operations such as redundant self-joins (which, as Momjian noted, join a table to itself), the query optimizer streamlines query execution. Other query optimization measures in PostgreSQL 18 pertain to:
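Redundant self-joins of the kind Momjian describes often come from views or ORM-generated queries. A minimal sketch, using a hypothetical `orders` table, of the pattern the planner can now eliminate:

```sql
-- t2 joins on the primary key, so it adds no rows and no columns
-- that t1 does not already provide; the PostgreSQL 18 planner can
-- drop the second scan and the join node entirely.
EXPLAIN
SELECT t1.id, t1.amount
FROM   orders t1
JOIN   orders t2 ON t1.id = t2.id;   -- id is the primary key
```

On earlier releases the plan would scan `orders` twice and join the results; with self-join elimination it becomes a single scan.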

  • Parts of SQL Statements: The system relies more heavily on indexes when users write OR clauses and other aspects of SQL statements. Thus, when there are numerous OR clauses, “We can process them more efficiently,” Momjian said. “We can do things like window functions better, avoid sorting when it’s not necessary, and we can process what we call GROUP BY, and HAVING more efficiently. This is closer to data warehousing, because you’re doing a lot of analytics.”
  • Skip Scans: According to Momjian, skip scans allow “more indexes to be used in more cases than they could before.” For example, if an organization has an index on three columns and only references two, it can still use that index.
  • Time-To-Value for Indexes: One of PostgreSQL 18’s foremost query optimization measures allows users to create Generalized Inverted Indexes (GINs) much faster than they previously could, broadening the scope and depth of the utility they supply.
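The skip scan case Momjian describes can be sketched as follows; the `sales` table and its columns are hypothetical:

```sql
-- Multicolumn index; the query below references only two of the
-- three indexed columns and omits the leading one.
CREATE INDEX idx_sales ON sales (region, store_id, sold_at);

-- Before version 18 this query could not use idx_sales efficiently.
-- A skip scan iterates over the distinct values of "region" and
-- probes the index for each one, as if the query had supplied it.
SELECT *
FROM   sales
WHERE  store_id = 42
AND    sold_at >= '2025-01-01';
```

The technique pays off most when the omitted leading column has relatively few distinct values, since the index is probed once per value.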

Building Indexes in Parallel

The latest edition of PostgreSQL shortens the time required to create GINs by building them in parallel. Accelerating GIN builds is significant because these indexes offer an alternative to the traditional B-tree index commonly used in relational engines, and they are well suited to semi-structured data (including the JSON format) and full-text search deployments. Constructing them in parallel dramatically reduces the time required to put them into service.

“For big workloads, indexes can sometimes take hours to create one,” Momjian said. “Before, very similar to the I/O, we used to do [GINs] one at a time. Now, we can have multiple workers building a single index in parallel. We can use all the CPU and I/O channels we have and build it in parallel with multiple processes.” Among other things, GINs are useful for building complicated indexes, including those for two-dimensional points in space for Geographic Information System (GIS) workloads, which might be required for GPS data.
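A parallel GIN build needs no new syntax; it reuses the session setting that already governs parallel B-tree builds. A sketch, assuming a hypothetical `documents` table with a text `body` column:

```sql
-- Allow up to four parallel workers for maintenance operations
-- (index builds) in this session.
SET max_parallel_maintenance_workers = 4;

-- In PostgreSQL 18 a GIN build can use those workers; earlier
-- releases built GIN indexes with a single process.
CREATE INDEX idx_docs_body ON documents
USING gin (to_tsvector('english', body));
```

As with parallel B-tree builds, the planner decides how many workers to actually launch based on table size and available resources.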

Index Innovation

Additionally, PostgreSQL 18 makes it markedly easier to work with index types that aren’t natively supported by the database. Organizations can develop new index types (or adopt newly developed ones) and simply plug them into the database as extensions. According to Tom Kincaid, EDB SVP of Database Servers and Tools, one consequence of this upgrade is that “it allows innovation to take place elsewhere.”

This capability furthers the extensibility for which PostgreSQL is frequently lauded while reducing the difficulty of working with new types of data, applications, and technologies. Although developers could implement new types of indices in the past, “It was more difficult before 18,” Kincaid said. “It’s much more simple now, which should enable new types of indexes to be plugged into Postgres.” Consequently, users can continue to foster the culture of innovation that’s foundational to the open source community while quickly testing and deploying new indices and use cases.
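The plug-in pattern Kincaid describes already exists in miniature with the `bloom` contrib module, which supplies its own index access method. The sketch below (with a hypothetical table `t`) shows the shape of the mechanism that version 18 makes easier to use for externally developed index types:

```sql
-- An extension can register its own index access method; "bloom"
-- ships with PostgreSQL as a contrib module and illustrates the idea.
CREATE EXTENSION bloom;

-- Once registered, the new access method is named in USING,
-- exactly like the built-in btree or gin methods.
CREATE INDEX idx_t_bloom ON t USING bloom (c1, c2, c3);
```

The point of the version 18 work is that a third party can ship an index type this way without patching the database itself.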

Bigger and Broader

The most recent edition of PostgreSQL signals a discernible change in the database’s overall trajectory, solidifying its worth to both transactional and analytical systems. Moreover, its abundant performance improvements make it more useful than ever for large-scale deployments across both dimensions.

“We always were good at the low to medium workload level,” Momjian said. “Now, we’re definitely in the high-end processing capability because we’ve got so many companies working on it. All these features I’m talking about, some came from Microsoft. Some came from Amazon. Some came from EDB. Some came from China, different users, and different people in Europe, for example. The point is, as we get more and more popular, we’re getting bigger and bigger workloads.”
