
runtime: clear() is slow for maps with big capacity and small number of items #70617

Open
@valyala

Description


The issue

The following pattern is frequently used to avoid excess memory allocations by reusing the same map:

func f() {
  m := make(map[string]int)

  for {
    addSomeItemsToMap(m)
    useMap(m)

    // clear the map for subsequent re-use
    clear(m)
  }
}

It turns out that the cost of clear(m) is proportional to the number of buckets in m. The number of buckets can grow significantly during addSomeItemsToMap(). After that, clear(m) remains slow forever, even if only a few items are added to the map on subsequent iterations.

See https://philpearl.github.io/post/map_clearing_and_size/ for more details.
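A minimal benchmark sketch (not part of the original report) that reproduces the effect. The benchmark names, the 1,000,000-entry growth phase, and the single key per iteration are illustrative assumptions; both loops clear a map holding one entry, but in the second case the map previously grew to a large number of buckets:

package clearbench

import (
	"strconv"
	"testing"
)

func BenchmarkClearSmallMap(b *testing.B) {
	m := make(map[string]int)
	for i := 0; i < b.N; i++ {
		m["key"] = i
		clear(m) // the map never grew, so only a few buckets are scanned
	}
}

func BenchmarkClearPreviouslyBigMap(b *testing.B) {
	m := make(map[string]int)
	// Grow the map once so the runtime allocates many buckets.
	for i := 0; i < 1_000_000; i++ {
		m[strconv.Itoa(i)] = i
	}
	clear(m)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		m["key"] = i
		clear(m) // still scans all buckets retained from the growth phase
	}
}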

The solution

The Go runtime should be able to switch between two algorithms: one that unconditionally clears every bucket in m, and one that clears only the buckets containing at least one item, choosing based on the ratio between the number of items in the map and the number of buckets. This should improve the performance of clear(m) in the pattern above when different iterations store widely varying numbers of items in m.
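Until the runtime applies such a heuristic, a user-level mitigation can approximate the same ratio-based decision by re-allocating the map when it has become mostly empty relative to its historical peak. This is only a sketch, not the proposed runtime change: the 8x ratio and the 1024 floor are arbitrary, and addSomeItemsToMap/useMap are the placeholders from the example above.

func f() {
	m := make(map[string]int)
	maxLen := 0

	for {
		addSomeItemsToMap(m)
		if len(m) > maxLen {
			maxLen = len(m)
		}
		useMap(m)

		// Re-allocate instead of clearing when the map holds far fewer items
		// than it once did; otherwise clear it for reuse as before.
		if maxLen > 1024 && maxLen > 8*len(m) {
			m = make(map[string]int)
			maxLen = 0
		} else {
			clear(m)
		}
	}
}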

Metadata

Labels

NeedsInvestigation (someone must examine and confirm this is a valid issue and not a duplicate of an existing one), Performance, compiler/runtime (issues related to the Go compiler and/or runtime), release-blocker
