Whether you’re a novice user or a system administrator, iptables is mandatory knowledge!
Unix terminal is a powerful tool.
I think that many tasks (including my own forensic analysis workflows) can be accomplished more quickly in a “terminal only” environment.
Simplify Linux digital forensics!
To use LiMEaide, all you need to do is feed it a remote Linux client’s IP address, sit back, and consume your favorite caffeinated beverage.
How does it work?
- Make a remote connection with the specified client over SSH
- Transfer necessary build files to the remote machine
- Build the memory-scraping Loadable Kernel Module (LKM), LiME
- LKM will dump RAM
- Transfer RAM dump and RAM maps back to host
- Build a Volatility profile
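The steps above can be sketched as a dry-run script. The client address, paths, and root login are hypothetical examples, and LiMEaide’s actual implementation differs in the details; this only prints the commands it would roughly correspond to:

```shell
# Dry-run sketch of the manual LiME workflow that LiMEaide automates.
# CLIENT, paths, and the root login are hypothetical examples.
CLIENT=192.168.1.50
run() { echo "+ $*"; }   # print each command instead of executing it

run scp -r ./LiME "root@$CLIENT:/tmp/lime"                # transfer build files
run ssh "root@$CLIENT" "make -C /tmp/lime/src"            # build the LiME LKM
run ssh "root@$CLIENT" \
  "insmod /tmp/lime/src/lime.ko 'path=/tmp/ram.lime format=lime'"  # dump RAM
run scp "root@$CLIENT:/tmp/ram.lime" ./evidence/          # retrieve the dump
```

Swap the body of `run()` for real execution to run the steps against an actual client; `path=` and `format=lime` are LiME’s documented module parameters.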
Before using LiMEaide, you need to resolve some dependencies:
sudo apt-get install python3-paramiko python3-termcolor
sudo apt-get install dwarfdump
- Download LiME v1.7.8
- Extract into
- Rename folder to
More information and downloads
A shortlist of six distributions… guess my favorite!
During a digital forensics analysis, many different tools can be used, and it can be useful to adopt a dedicated Linux distribution with all the tools already installed and configured.
Here is a brief list of my choices.
CAINE offers a complete forensic environment that is organized to integrate existing software tools as software modules and to provide a friendly graphical interface: contains numerous tools that help investigators during their analysis, including forensic evidence collection
DEFT is a Linux distribution made for evidence collection that comes bundled with the Digital Advanced Response Toolkit (DART) for Windows.
A VMware-based appliance designed for small-to-medium-sized digital investigations and acquisitions, built entirely from public domain software such as Autopsy, the Sleuth Kit, the Digital Forensics Framework, log2timeline, Xplico, and Wireshark.
The system maintenance is provided by Webmin.
NST is a Linux distribution that includes a vast collection of best-of-breed open source network security applications useful to the network security professional:
The main intent of developing this toolkit was to provide the security professional and network administrator with a comprehensive set of Open Source Network Security Tools.
A Linux distribution customized to perform various forensics tasks such as password discovery, social media analysis, data carving, Windows registry analysis, malware analysis, log analysis, and more.
Security Onion is a special Linux distro aimed at network security monitoring featuring advanced analysis tools:
Security Onion is a Linux distro for intrusion detection, network security monitoring, and log management. It’s based on Ubuntu and contains Snort, Suricata, Bro, OSSEC, Sguil, Squert, ELSA, Xplico, NetworkMiner, and many other security tools.
The SIFT Workstation is a VMware appliance, preconfigured with the necessary tools to perform detailed digital forensic examination in a variety of settings.
The SIFT Workstation demonstrates that advanced incident response capabilities and in-depth digital forensic techniques for investigating intrusions can be accomplished using cutting-edge open-source tools that are freely available and frequently updated.
Can I manage my home server using Telegram?
A few days ago I wrote a post about the “Ultra-Geek” Linux Workstation developed by Joe Nelson.
Reading his post, I found many similarities with the current configuration of my laptop.
Using direct access to /sys/class/backlight
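As a taste of the sysfs interface in question, here is a minimal sketch. It uses a mock directory so it runs anywhere; on real hardware the path would be a device directory such as /sys/class/backlight/intel_backlight (the name varies by driver), and writes require root:

```shell
# Mock the sysfs backlight layout so the sketch runs without hardware.
BL=/tmp/mock_backlight            # real systems: /sys/class/backlight/<device>
mkdir -p "$BL"
echo 937 > "$BL/max_brightness"   # 937 is an arbitrary example maximum

max=$(cat "$BL/max_brightness")
echo $((max / 2)) > "$BL/brightness"   # set ~50%; real sysfs writes need root
cat "$BL/brightness"
```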
Simple, with 5 commands!
Finally, Debian 9 “Stretch” was released in the “stable” branch!
A lot of upgrades, especially in kernel, glibc and other base packages.
Some info from the official wiki (see also the official stretch release notes):
- Linux kernel series 4.9, GNU libc 2.24.
- Desktop environments: GNOME 3.22, KDE Plasma 5.8, MATE 1.16, Xfce 4.12 and others.
- Programming languages: GCC 6.3, Perl 5.24, Python 3.5, PHP 7.0 and others.
- nftables is available as a replacement for iptables. See this nftables blog post for details.
- The dmesg command requires superuser privileges.
- The X server is no longer setuid, and may be started without root privileges. If the startx command is run as a non-root user, the Xorg.0.log (or Xorg.*.log for alternative displays) file will be written to ~/.local/share/xorg/ instead of /var/log/.
- All MySQL packages have been superseded by equivalent MariaDB packages (e.g. mariadb-server-10.1). The mysql-server and default-mysql-server metapackages are transitional, and will bring in the MariaDB server.
- PHP 7.0 replaces PHP 5.6. There are new metapackages without a version number in them: php-fpm, php-cli, etc. You may use these for future compatibility.
- Most of the library packages with debugging symbols have been moved to a new repository. If you require these packages, you will need to add an entry to your sources.list or sources.list.d. Also note that the package names are different (ending with -dbgsym instead of -dbg).
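For the debug symbol repository, the entry to add to sources.list (or a file under sources.list.d) looks like the following; the mirror hostname is the one listed on the Debian wiki, so verify it against current documentation:

```
deb http://debug.mirrors.debian.org/debian-debug/ stretch-debug main
```

After an apt-get update, the -dbgsym packages become installable like any other package.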
How to upgrade?
Before you move on with the upgrade, be sure that your current Debian Jessie was fully updated:
# apt-get update
# apt-get upgrade
# apt-get dist-upgrade
and make a backup of your current sources.list:
cp /etc/apt/sources.list /etc/apt/sources.list_backup
Then, you can start with the upgrade.
First, update the package repository:
sed -i 's/jessie/stretch/g' /etc/apt/sources.list
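If you want to see exactly what the sed one-liner above does before touching the real file, here is a sketch against a scratch copy with typical Jessie entries (the path and contents are illustrative):

```shell
# Demonstrate the suite switch on a scratch file instead of /etc/apt/sources.list.
cat > /tmp/sources.list.demo <<'EOF'
deb http://deb.debian.org/debian jessie main
deb http://security.debian.org/ jessie/updates main
EOF
sed -i 's/jessie/stretch/g' /tmp/sources.list.demo
cat /tmp/sources.list.demo   # every 'jessie' is now 'stretch'
```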
Then, update package index
# apt-get update
and finally execute the below commands to start the upgrade process:
# apt-get upgrade
# apt-get dist-upgrade
Once the process completes, check your Debian version:
# cat /etc/issue
Debian GNU/Linux 9 \n \l
That’s all folks!
A really interesting series of articles on the SANS Digital Forensics Blog.
What is EXT4?
EXT4 is a journaling file system for Linux, developed as the successor to ext3: it modifies important data structures of the previous filesystem, such as those used to store file data. The result is a filesystem with an improved design, better performance, reliability, and features.
It was accepted as “stable” in the Linux 2.6.28 kernel in October 2008.
The publication of the ‘episodes’ has continued over the years, and recently Hal published part 6, focused on “Directories”.
Here is the list of all parts:
1. Extents
20 Dec 2010
EXT4 has moved to 48-bit block addresses. I’ll refer you to the paper cited above for the whys and wherefores of this decision and what it means as far as maximum file system size, etc. What’s really a departure for EXT4, however, is the use of extents rather than the old, inefficient indirect block mechanism used by earlier Unix file systems (e.g. EXT2/EXT3) for tracking file content. Extents are similar to cluster runs in the NTFS file system: essentially they specify an initial block address and the number of blocks that make up the extent. A file that is fragmented will have multiple extents, but EXT4 tries very hard to keep files contiguous.
2. Timestamps
14 Mar 2011
The EXT4 developers tried very hard to maintain backwards compatibility with the EXT2/EXT3 inode layout. 64-bit timestamps and a completely new file creation timestamp obviously complicate this goal. The EXT4 developers solved this problem by putting the extra stuff in the upper 128 bits of the new, larger 256-bit EXT4 inode.
3. Extent Trees
28 Mar 2011
[…] you can only have a maximum of 4 extent structures per inode. Furthermore, there are only 16 bits in each extent structure for representing the number of blocks in the extent, and in fact the upper bit is reserved (it’s used to mark the extent as “uninitialized”, part of EXT4’s pre-allocation feature). That means each extent can only contain a maximum of 2¹⁵ blocks, which is 128MB assuming 4K blocks.
Now 128MB is pretty big, but what happens when you have a file that’s bigger than half a gigabyte? Such a file would require more than 4 extents to fully index. Or what happens when you have a file that’s small but very fragmented? Again, you could need more than 4 extents to represent the collections of blocks that make up the file.
4. Demolition Derby
08 Apr 2011
I got curious about what would happen when I deleted my /var/log/messages file. How does the inode change? What happens to block 131090, which holds my extent tree structure? Well, there’s really only one way to find out: I deleted the file… carefully, so I didn’t lose any logging data. In fact, I didn’t just delete the file; I used “shred -u /var/log/messages” to overwrite the data blocks with nulls before unlinking the file. Once the file had been purged, I dumped out both the inode associated with the file and block 131090, and took a look at them in my hex editor.
5. Large Extents
22 Aug 2011
[…] you can only have 32K blocks in an extent. Assuming a typical 4K block size, that means you can only have 128MB of data in a single extent. A 4GB file is therefore going to require at least 32 extents, and even that assumes you can find 32 runs of 32K contiguous blocks to use. More likely we’ll have more than 32 extents, some of which don’t use the full 128MB length.
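A quick sanity check of the numbers quoted above (15 usable bits in the extent length field, 4K blocks):

```shell
# 15 usable bits in the 16-bit extent length field (the top bit is a flag)
blocks_per_extent=$((1 << 15))                # 32768 blocks
block_size=4096
extent_bytes=$((blocks_per_extent * block_size))
echo "max extent: $((extent_bytes / 1024 / 1024)) MB"     # 128 MB
file_bytes=$((4 * 1024 * 1024 * 1024))        # a 4GB file
echo "extents needed: $((file_bytes / extent_bytes))"     # 32
```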
6. Directories
07 Jun 2017
One item I never got around to was documenting how directories were structured in EXT. Some recent research has caused me to dive back into this topic, and given me an excuse to add additional detail to this EXT4 series.
If you go back and read earlier posts in this series, you will note that the EXT inode does not store file names. Directories are the only place in traditional Unix file systems where file name information is kept. In EXT, and the classic Unix file systems it is evolved from, directories are simply special files that associate file names with inode numbers.
Furthermore, in the simplest case, EXT directories are just sequential lists of file entries. The entries aren’t even sorted. For the most part, directory entries in EXT are simply added to the directory file in the order files are created in the directory.
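The name-to-inode mapping described above is easy to observe from the shell (scratch paths, purely illustrative):

```shell
# A directory entry is just (name, inode number); ls -i shows the pairs.
mkdir -p /tmp/extdemo
touch /tmp/extdemo/alpha
ln -f /tmp/extdemo/alpha /tmp/extdemo/beta  # hard link: second name, same inode
ls -i /tmp/extdemo
stat -c '%i' /tmp/extdemo/alpha /tmp/extdemo/beta  # identical inode numbers
```

Two directory entries pointing at one inode is exactly why the inode itself cannot store “the” file name.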
I wish you a good reading!
…in a free collaborative book!
The goal is simple — to share my modest knowledge about the insides of the linux kernel and help people who are interested in linux kernel insides, and other low-level subject matter.
The project is very detailed and already quite complete; here is a content summary:
- From bootloader to kernel
- First steps in the kernel setup code
- Video mode initialization and transition to protected mode
- Transition to 64-bit mode
- Kernel decompression
- First steps in the kernel
- Early interrupts handler
- Last preparations before the kernel entry point
- Kernel entry point
- Continue architecture-specific boot-time initializations
- Architecture-specific initializations, again…
- End of the architecture-specific initializations, almost…
- Scheduler initialization
- RCU initialization
- End of initialization
- Start to dive into interrupts
- Interrupt handlers
- Initialization of non-early interrupt gates
- Implementation of some exception handlers
- Handling Non-Maskable interrupts
- Dive into external hardware interrupts
- Initialization of external hardware interrupts structures
- Softirq, Tasklets and Workqueues
- Last part
- Introduction to system calls
- How the Linux kernel handles a system call
- vsyscall and vDSO
- How the Linux kernel runs a program
- Implementation of the open system call
- Clocksource framework
- The tick broadcast framework and dyntick
- Introduction to timers
- Clockevents framework
- x86 related clock sources
- Time related system calls
- Introduction to spinlocks
- Queued spinlocks
- Reader/Writer semaphores
- Initial ram disk
- How the kernel is compiled
- Linux kernel development
- Program startup process in userspace
- Write and Submit your first Linux kernel Patch
- Data types in the kernel