When you start analyzing a Linux memory dump with Volatility, the first problem you have to face is choosing the correct memory profile.

In my opinion, the best practice is to generate your own profile, using a machine with the same configuration as the target (when available) or, if possible, directly on the target machine (obviously after the forensic acquisition).

A Linux profile is essentially a zip file containing information about the kernel's data structures and its debug symbols, which Volatility uses to locate critical data in the dump and to know how to parse it once found.
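Since a profile is just that zip, an existing one can be sanity-checked with standard tools; for example (the file name here is only illustrative), the archive should list exactly a module.dwarf and a System.map:

$ unzip -l Debian_4.9.0-8-amd64_profile.zip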

Profile creation is a simple process and consists of a few steps:

Get an updated copy of Volatility:

$ git clone https://github.com/volatilityfoundation/volatility.git

Create module.dwarf (a dump of the kernel's data structures) using the tooling provided with the Volatility framework:

$ cd volatility/tools/linux 
$ make 
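Note that the build requires the dwarfdump utility and the headers of the running kernel. On a Debian-like system they can be installed with something along these lines (package names may differ on other distributions):

$ sudo apt-get install build-essential dwarfdump linux-headers-$(uname -r)

A successful build leaves a module.dwarf file in the current directory.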

Make a zip containing module.dwarf and the debug symbols (System.map) of the running kernel:

$ zip $(lsb_release -i -s)_$(uname -r)_profile.zip ./volatility/tools/linux/module.dwarf /boot/System.map-$(uname -r)

Finally, copy the zip file into the Volatility plugin path on your forensic workstation, usually volatility/volatility/plugins/overlays/linux:
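For instance, assuming Volatility was cloned into ~/volatility and using the zip name from the Debian example below (both are just placeholders for your own paths):

$ cp Debian_4.9.0-8-amd64_profile.zip ~/volatility/volatility/plugins/overlays/linux/

Alternatively, the directory containing the zip should also be picked up at runtime via the --plugins option. Either way, the new profile shows up in the output of --info: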

andrea@Lucille:~/tmp/volatility$ ./vol.py --info | grep Debian
 Volatility Foundation Volatility Framework 2.6.1
 LinuxDebian_4_9_0-8-amd64_profilex64 - A Profile for Linux Debian_4.9.0-8-amd64_profile x64
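At this point the profile can be passed to any of the Linux plugins; a quick sketch, where memory.lime is just a placeholder for the acquired memory dump:

$ ./vol.py -f memory.lime --profile=LinuxDebian_4_9_0-8-amd64_profilex64 linux_pslist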

Obviously, the whole process may be wrapped in a simple bash script:
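A minimal sketch, assuming git, make, zip, dwarfdump and the matching kernel headers are already installed, and that it is run from an empty working directory:

#!/bin/bash
# Build a Volatility profile for the currently running kernel.
set -e

# Step 1: get an updated copy of Volatility
git clone https://github.com/volatilityfoundation/volatility.git

# Step 2: build module.dwarf (the kernel's data structures)
cd volatility/tools/linux
make
cd ../../..

# Step 3: zip module.dwarf together with the kernel's System.map
zip "$(lsb_release -i -s)_$(uname -r)_profile.zip" \
    volatility/tools/linux/module.dwarf \
    "/boot/System.map-$(uname -r)"

echo "Profile created: $(lsb_release -i -s)_$(uname -r)_profile.zip"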