Gros

Exploitation notes

IP control

- saved rip
- ptrs in got table
- __malloc_hook (can be triggered e.g. by printf with a large string), __free_hook etc
- fini array
- vtable in _IO_FILE (stdin, stderr, ...), e.g. flush in stdout's vtable
- classes vtables
- atexit, onexit
- handler for custom format in printf

Bugs

Techniques & tricks

Kernel

#Getting root

#Build & debug

C/C++

The "which compiler" parsing puzzle (assumed setup: T1, T2, U1, U2 are declared class types and A has a templated conversion operator to pointer-to-member, e.g. `struct A { template<class T, class U> operator T U::*(); };`):

inline auto which(U1 T1::*) { return "gcc"; }
inline auto which(U1 T2::*) { return "icc"; }
inline auto which(U2 T1::*) { return "msvc"; }
inline auto which(U2 T2::*) { return "clang"; }

int main() { A a; using T = T2; using U = U2; puts(which(a.operator U T::*())); }


### Heap notes

#### Structures

Arena - contiguous region of memory (initially 132 KB); one arena may have many heaps.
Heap - single contiguous memory region holding (coalesceable) malloc_chunks. It is allocated with mmap() and always starts at an address aligned to HEAP_MAX_SIZE. One heap belongs to exactly one arena.

For 32-bit systems: max number of arenas = 2 * number of cores; SIZE_SZ = 4.
For 64-bit systems: max number of arenas = 8 * number of cores; SIZE_SZ = 8.

CHUNKS

struct malloc_chunk {
  INTERNAL_SIZE_T prev_size;  /* Size of previous chunk (if P == 0). */
  INTERNAL_SIZE_T size;       /* Size in bytes, including overhead. 3 LSBs:
                                 A (NON_MAIN_ARENA), M (IS_MMAPPED), P (PREV_INUSE) */

struct malloc_chunk* fd;         /* double links -- used only if free. */
struct malloc_chunk* bk;

/* Only used for large blocks: pointer to next larger size.  */
struct malloc_chunk* fd_nextsize; /* double links -- used only if free. */
struct malloc_chunk* bk_nextsize; };

taken from malloc.c:

used chunk:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Size of previous chunk, if unallocated (P clear)              |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Size of chunk, in bytes                                 |A|M|P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | User data starts here...                                      .
            .                                                               .
            . (malloc_usable_size() bytes)                                  .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | (size of chunk, but used for application data)                |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Size of next chunk, in bytes                            |A|0|1|
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

free chunk:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Size of previous chunk, if unallocated (P clear)              |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    head:   | Size of chunk, in bytes                                 |A|0|P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Forward pointer to next chunk in list                         |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Back pointer to previous chunk in list                        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Unused space (may be 0 bytes long)                            .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    foot:   | Size of chunk, in bytes                                       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Size of next chunk, in bytes                            |A|0|0|
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

A bit - set if chunk belongs to a thread arena
M bit - set if chunk was mmapped (the other bits are ignored then)
P bit - set if previous chunk is in use; otherwise prev_size holds the previous chunk's size

BINS

128 bins total:
    64 bins of size      8
    32 bins of size     64
    16 bins of size    512
     8 bins of size   4096
     4 bins of size  32768
     2 bins of size 262144
     1 bin  of size what's left

not existing bin(1)

unsorted bin(1):

fastbins(10):

smallbin(62)

large bins(63)

#Houses

------------------------
The House of Prime: Requires two frees of chunks containing attacker controlled size fields, followed by a call to malloc.

The House of Mind: Requires the manipulation of the program into repeatedly allocating new memory.

The House of Force: Requires that we can overwrite the top chunk, that there is one malloc call with a user controllable size, and finally requires another call to malloc.

The House of Lore: Again not applicable to our example program.

The House of Spirit: One assumption is that the attacker controls a pointer given to free, so again this technique cannot be used.

The House of Chaos: This isn't actually a technique, just a section in the article :)
------------------------

#Fastbin

* abusing the fastbin freelist (two times same value in fastbins list):
    + require:
        + double-free on chunk (fastbin size)
        + variable near write_where with value close to fastbin size 

    + malloc two chunks (same size, say 0x40)
    + free chunk1, chunk2, chunk1 (double free vuln);   # now fastbin list is HEAD->chunk1->chunk2->chunk1
    + d = malloc(size); malloc(size);                   # now the list is HEAD->chunk1 and we control content (fd, bk) of chunk1
    + write_where = 0x40                                # (or something near fastbin size)
    + *d = &write_where - SIZE_SZ; 
    + malloc(size); *ptr = malloc(size);                # ptr is  &write_where + SIZE_SZ
    + fake chunk on stack must be in correct fastbin, otherwise you will get "malloc(): memory corruption (fast)"

        that means: (write_where>>4)-2 must be equal to fastbin index(idx) (counting from 0)
        that means: write_where == 0x40 -> previous mallocs were with same size

        fastbins:
        32:  0x1ce4000 ◂— idx==0x0
        48:  0x0
        64:  0x40 <-- write_where==0x40, so it must be: idx==2 (2 == 0x40>>4 - 2)
        80:  0x0
        96:  0x0
        112: 0x0
        128: 0x0
* unsafe unlink
    + require:
        + free on corrupted chunk (overwritten prev_size + one bit)
        + pointer to chunk at known position

    chunk0(smallbin_size)
     ______
    |prev_size                                      fake_chunk inside chunk0
    |size                                            ______ 
    |fd                                             |prev_size == 0
    |bk                                             |size == 0
    |fd_nextsize = &chunk0_ptr - 3*SIZE_SZ          |fd: fake_chunk->fd->bk == fake_chunk
    |bk_nextsize = &chunk0_ptr - 2*SIZE_SZ          |bk: fake_chunk->bk->fd == fake_chunk
    |__________

    chunk1(smallbin_size) <- overflow
     _________
    |prev_size = smallbin_size (normally it would be smallbin_size+2*SIZE_SZ, but now it points to fake_chunk)
    |size &= ~1 (mark chunk0 as free, do not change size value except LSB)
    |fd
    |bk
    |________

    + malloc two chunks of size smallbin_size (NOT fastbin, >=0x80)
    + setup fake chunk
    + overflow in chunk1 header
    + now free chunk1, so that consolidate backward will unlink fake_chunk overwriting chunk0_ptr (now it points to fake_chunk->fd so &chunk0_ptr - 3*SIZE_SZ)
    + chunk0_ptr[3] = write_where
    + now chunk0_ptr points to write_where
    + chunk0_ptr[0] = write_what

#House of Spirit

* House of Spirit (free overwritten pointer)
    + require:
        + pointer to controlled memory
        + free on that pointer
        + malloc of fastbin size

    fake_chunk0
     ________
    |prev_size
    |size = fastbin_size (so next chunk is fake_chunk1), M and P bits must be zero
    |fd
    |bk
    |fd_nextsize
    |bk_nextsize
    |________

    fake_chunk1
     ________
    |prev_size
    |size = 0x2240 (above 2*SIZE_SZ, below av->system_mem (128 KB by default for the main arena))
    |fd
    |bk
    |fd_nextsize
    |bk_nextsize
    |________

    + malloc whatever to setup heap
    + make two fake chunks
    + overwrite some pointer with &fake_chunk0+2*SIZE_SZ
    + free it, overwritten pointer will be in fastbins
    + malloc of size fastbin_size (or something near) will return &fake_chunk0[2]

#House of Force

* House of Force (overwrite top chunk's size field)
    + require:
        + known address of top chunk
        + overwrite top chunk's size
        + malloc of arbitrary size
        
    + set top chunk's size to -1 (0xffffffff or something big)
    + compute evil_size = write_where - sizeof(char *)*4 - top_chunk_address (top_chunk_address == prev_size)
    + be careful with signed evil_size
    + malloc(evil_size) (it will return top_chunk_address+2*sizeof(char *) and set new top_chunk_address to write_where-2*sizeof(char *))
    + malloc(whatever) (it will return write_where)
    + profit

#Glibc for heap

cat makefile
GLIBC = /path/glibc_versions/2.25
CC = gcc

.PHONY: all
all: test

test: test.c
	${CC} \
	-Wl,-rpath=${GLIBC}:\
	${GLIBC}/math:\
	${GLIBC}/elf:\
	${GLIBC}/dlfcn:\
	${GLIBC}/nss:\
	${GLIBC}/nis:\
	${GLIBC}/rt:\
	${GLIBC}/resolv:\
	${GLIBC}/crypt:\
	${GLIBC}/nptl_db:\
	${GLIBC}/nptl:\
	-Wl,--dynamic-linker=${GLIBC}/elf/ld.so \
	-o test test.c

clean:
	rm -f test test.o
----------

# compile
gcc -Wl,-rpath=${GLIBC}:${GLIBC}/math:${GLIBC}/elf:${GLIBC}/dlfcn:${GLIBC}/nss:${GLIBC}/nis:${GLIBC}/rt:${GLIBC}/resolv:${GLIBC}/crypt:${GLIBC}/nptl:${GLIBC}/nptl_db -Wl,--dynamic-linker=${GLIBC}/elf/ld.so -o test test.c

# run
LD_PRELOAD="$GLIBC/libc.so:$GLIBC/elf/ld.so:$GLIBC/nptl/libpthread.so" ./test

# gdb ./test
# set environment LD_PRELOAD=/path/glibc_versions/2.25/libc.so:/path/glibc_versions/2.25/elf/ld.so:/path/glibc_versions/2.25/nptl/libpthread.so
# set auto-load safe-path /path/glibc_versions/2.25/nptl_db/
# set libthread-db-search-path /path/glibc_versions/2.25/nptl_db/

# target record-btrace -> race conditions debugging

# patch ld in binary
patchelf --set-interpreter `pwd`/ld.so.2 --set-rpath `pwd` binary

Windows

WinDbg <-> IDA addressing: