`src/binary-exploitation/common-binary-protections-and-bypasses/memory-tagging-extension-mte.md`
The memory tags are stored in a **dedicated RAM region** (not accessible for normal usage).
ARM introduces the following instructions to manipulate these tags in the dedicated RAM region:
```asm
STG [<Xn/SP>], #<simm>    Store Allocation (memory) Tag
LDG <Xt>, [<Xn/SP>]       Load Allocation (memory) Tag
IRG <Xd/SP>, <Xn/SP>      Insert Random [pointer] Tag
```
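The semantics of these instructions can be sketched as a small software model (illustrative Python, not real hardware; `irg`, `stg`, `access_ok` and the `tag_mem` dict are invented names for this sketch):

```python
import random

TAG_SHIFT = 56                    # logical tag lives in bits 59:56 of the pointer
GRANULE = 16                      # MTE tags memory in 16-byte granules
ADDR_MASK = (1 << TAG_SHIFT) - 1  # strip the tag to get the real address

tag_mem = {}  # granule base address -> 4-bit allocation tag (models the tag RAM)

def irg(ptr, exclude=(0xE, 0xF)):
    """Model of IRG: return ptr with a random 4-bit tag inserted."""
    tag = random.choice([t for t in range(16) if t not in exclude])
    return (ptr & ADDR_MASK) | (tag << TAG_SHIFT)

def stg(ptr):
    """Model of STG: store the pointer's logical tag as the granule's allocation tag."""
    base = (ptr & ADDR_MASK) & ~(GRANULE - 1)
    tag_mem[base] = (ptr >> TAG_SHIFT) & 0xF

def access_ok(ptr):
    """Model of the check every load/store performs: tags must match."""
    base = (ptr & ADDR_MASK) & ~(GRANULE - 1)
    return tag_mem.get(base) == (ptr >> TAG_SHIFT) & 0xF

p = irg(0x10000)   # pointer now carries a random tag
stg(p)             # granule's allocation tag set to match
assert access_ok(p)                         # tags match -> access allowed
assert not access_ok(p ^ (1 << TAG_SHIFT))  # flipped tag bit -> mismatch
```

An attacker-forged pointer with the wrong top-byte tag fails `access_ok` even though it points at the same address, which is the whole point of the scheme.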
### Sync
The CPU checks the tags **during instruction execution**; if there is a mismatch, it raises an exception (`SIGSEGV` with `SEGV_MTESERR`) and you immediately know the exact instruction and address.\
This is the slowest and most secure mode because the offending load/store is blocked.
### Async
The CPU checks the tags **asynchronously**; when a mismatch is found, it sets an exception bit in one of the system registers. This mode is **faster** than the previous one, but it is **unable to point out** the exact instruction that caused the mismatch and it doesn't raise the exception immediately (`SIGSEGV` with `SEGV_MTEAERR`), giving the attacker some time to complete the attack.
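The difference between the two reporting modes can be modeled like this (illustrative Python, not real hardware; `MTECore` and `fault_bit` are invented names, the latter loosely mirroring the sticky bit in the tag fault status register):

```python
class MTECore:
    """Toy model of sync vs async tag-check reporting (not real hardware)."""

    def __init__(self, mode):
        self.mode = mode        # "sync" or "async"
        self.fault_bit = False  # models the sticky fault-status bit

    def access(self, ptr_tag, mem_tag):
        if ptr_tag != mem_tag:
            if self.mode == "sync":
                # Sync: the faulting access is blocked and reported precisely
                # (SIGSEGV with SEGV_MTESERR at the exact instruction/address).
                raise MemoryError("tag mismatch: access blocked")
            # Async: the access completes; only a sticky bit records that
            # *some* mismatch happened (SIGSEGV with SEGV_MTEAERR, later).
            self.fault_bit = True
        return "completed"

try:
    MTECore("sync").access(0x3, 0x7)
    blocked = False
except MemoryError:
    blocked = True
assert blocked                               # sync: stopped on the spot

core = MTECore("async")
assert core.access(0x3, 0x7) == "completed"  # async: the bad write lands...
assert core.fault_bit                        # ...and is only flagged for later
```

The async case is exactly the window the text describes: the corrupting access already succeeded by the time the fault is delivered.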
### Mixed
Per-core preferences (for example writing `sync`, `async` or `asymm` to `/sys/devices/system/cpu/cpu*/mte_tcf_preferred`) let the kernel silently upgrade or downgrade per-process requests, so production builds usually request ASYNC while privileged cores force SYNC when the workload allows it.
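A sketch of setting that per-core preference (a config fragment: requires root on MTE-capable hardware with a recent kernel; the core numbers are illustrative):

```shell
# Prefer synchronous tag-check faults on cpu0, asynchronous on cpu1.
# The kernel may then transparently upgrade/downgrade a task's requested
# tag-check mode when it is scheduled on that core.
echo sync  > /sys/devices/system/cpu/cpu0/mte_tcf_preferred
echo async > /sys/devices/system/cpu/cpu1/mte_tcf_preferred
```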
## Implementation & Detection Examples
The kernel allocators (like `kmalloc`) will **call this module**, which will prepare the tagged memory.
Note that it'll **only mark enough memory granules** (16B each) for the requested size. So if the requested size was 35 and a slab of 60B was given, it'll mark the first 16\*3 = 48B with this tag and the **rest** will be **marked** with a so-called **invalid tag (0xE)**.
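The granule-marking rule can be sketched like this (illustrative Python; `tag_allocation` is an invented helper, not a kernel API):

```python
GRANULE = 16       # bytes per MTE granule
INVALID_TAG = 0xE  # tag used for granules the allocation doesn't own

def tag_allocation(requested, slab_size, tag):
    """Tag only enough whole granules to cover the request; the rest of
    the slab gets the invalid tag so stray accesses are detected."""
    covered = -(-requested // GRANULE) * GRANULE  # round up to granule size
    return [tag if off < covered else INVALID_TAG
            for off in range(0, slab_size, GRANULE)]

# 35 B requested from a 60 B slab: 3 granules (48 B) get the allocation
# tag, the remainder of the slab is marked invalid.
assert tag_allocation(35, 60, 0x7) == [0x7, 0x7, 0x7, INVALID_TAG]
```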
The tag **0xF** is the **match-all pointer tag**. Memory tagged with this value allows **any pointer tag** to be used to access it (no mismatches). This could prevent MTE from detecting an attack if that tag is being used in the attacked memory.
Therefore, there are only **14 values** that can be used to generate tags, as 0xE and 0xF are reserved, giving a probability of **reusing a tag** of 1/14 -> around **7%**.
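The arithmetic behind that figure:

```python
total_tags = 16            # 4-bit tags: 0x0 .. 0xF
reserved = {0xE, 0xF}      # invalid tag and match-all tag
usable = total_tags - len(reserved)
reuse_pct = 100 / usable   # chance a random tag collides with a fixed one
print(usable, round(reuse_pct, 1))  # 14 usable tags, ~7.1 %
```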
If the kernel accesses the **invalid tag granule**, the **mismatch** will be **detected**. If it accesses another memory location and the **memory has a different tag** (or the invalid tag), the mismatch will also be detected. If the attacker is lucky and the memory is using the same tag, it won't be detected. The chances are around 7%.
Another bug occurs in the **last granule** of the allocated memory. If the application requested 35B, it was given the granules from byte 32 to 48. Therefore, the **bytes from 36 to 47 use the same tag** but they weren't requested. If the attacker accesses **these extra bytes, this isn't detected**.
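This intra-granule blind spot can be demonstrated with the same 35-byte example (illustrative model; `granule_tags` and `detected` are invented names):

```python
GRANULE = 16
INVALID_TAG = 0xE

# A 35-byte request: the granules at offsets 0, 16 and 32 share the
# allocation tag; the slab's next granule (offset 48) holds the invalid tag.
granule_tags = {0: 0x7, 16: 0x7, 32: 0x7, 48: INVALID_TAG}

def detected(offset, ptr_tag=0x7):
    """True if an access at this byte offset trips a tag mismatch."""
    return granule_tags[offset // GRANULE * GRANULE] != ptr_tag

assert not detected(34)  # last requested byte: allowed, as expected
assert not detected(40)  # OOB, but inside the last granule: NOT detected
assert detected(50)      # next granule carries the invalid tag: detected
```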
When **`kfree()`** is executed, the memory is retagged with the invalid memory tag, so in a **use-after-free**, when the memory is accessed again, the **mismatch is detected**.
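A minimal model of that retag-on-free behavior (illustrative Python; `Slab` is an invented class, not a kernel structure):

```python
INVALID_TAG = 0xE

class Slab:
    """One granule of kernel memory together with its allocation tag."""

    def __init__(self):
        self.mem_tag = INVALID_TAG   # free memory carries the invalid tag

    def kmalloc(self, tag=0x3):
        self.mem_tag = tag           # allocation tags the granule...
        return tag                   # ...and the returned pointer matches

    def kfree(self):
        self.mem_tag = INVALID_TAG   # free retags with the invalid tag

    def access(self, ptr_tag):
        return self.mem_tag == ptr_tag

s = Slab()
stale = s.kmalloc()
assert s.access(stale)      # legitimate use: tags match
s.kfree()
assert not s.access(stale)  # use-after-free via the stale tag: detected
```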
Moreover, only **`slab`** and **`page_alloc`** use tagged memory, but in the future…
When a **mismatch is detected** the kernel will **panic** to prevent further exploitation and retries of the exploit (MTE doesn't have false positives).
### Speculative Tag Leakage (TikTag)
*TikTag* (2024) demonstrated two speculative execution gadgets (TIKTAG-v1/v2) able to leak the 4-bit allocation tag of any address in <4 seconds with >95% success. By speculatively touching attacker-chosen cache lines and observing prefetch-induced timing, an attacker can derandomize the tag assigned to Chrome processes, Android system services, or the Linux kernel and then craft pointers carrying the leaked value. Once the tag space is brute-forced away, the probabilistic granule reuse assumptions (`≈7%` false-negative rate) collapse and classic heap exploits (UAF, OOB) regain near-100% reliability even when MTE is enabled. The paper also ships proof-of-concept exploits that pivot from leaked tags to retagging fake slabs, illustrating that speculative side channels remain a viable bypass path for hardware tagging schemes.