Forensic Timeline Creation: My Own Workflow

Every analyst, through day-to-day experience, refines their own workflow for timeline creation.

Today I propose mine.

Required tools

Sleuth Kit

Sleuth Kit is a collection of command-line tools that allow you to analyze disk images.

https://www.sleuthkit.org/sleuthkit/

Volatility

The well-known open source memory forensics framework for incident response and malware analysis.

http://www.volatilityfoundation.org/

log2timeline

A tool designed to extract timestamps from the various files found on a typical computer system and aggregate them.

https://github.com/log2timeline/plaso


Timeline creation

The traditional timeline is generated using data extracted from the filesystem, enriched with information gathered through volatile memory analysis.
The data is parsed and sorted so it can be analyzed: the end goal is a snapshot of the activity on the system, including the date, the artifact involved, the action, and the source.

Here are the steps, starting from an E01 dump and a volatile memory dump:

  1. Extract the filesystem bodyfile from the .E01 file (physical disk dump); the -m option takes a mount-point prefix (here C:) to prepend to the paths:
    fls -r -m C: Evidence1.E01 > Evidence1-bodyfile
  2. Run the Timeliner plugin against the volatile memory dump using Volatility, after identifying the image profile:
    vol.py -f Evidence1-memoryraw.001 --profile=Win7SP1x86 timeliner --output=body --outputfile=Evidence1-timeliner.body
  3. Combine the timeliner output with the filesystem bodyfile:
     cat Evidence1-timeliner.body >> Evidence1-bodyfile
  4. Extract the combined filesystem and memory timeline, restricted to the date range of interest:
    mactime -d -b Evidence1-bodyfile 2012-04-02..2012-04-07 > Evidence1-mactime-timeline.csv
  5. Optionally, filter the data using grep and a whitelist of known-good entries:
    grep -v -i -f whitelist.txt Evidence1-mactime-timeline.csv > Evidence1-mactime-timeline-final.csv
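The steps above can be collected into a small helper script. This is a minimal sketch using the example file names from the steps; the C: mount-point prefix for fls and the whitelist.txt file are illustrative assumptions. By default it only prints the pipeline, which is useful for review; pipe its output to sh -eu on a workstation where fls, vol.py and mactime are actually installed.

```shell
#!/bin/sh
# Sketch of the traditional timeline workflow described above.
# File names, profile and date range are the example values;
# C: (fls path prefix) and whitelist.txt are assumptions.
set -eu

IMAGE="Evidence1.E01"              # physical disk dump (E01)
MEM="Evidence1-memoryraw.001"      # volatile memory dump
PROFILE="Win7SP1x86"               # determined via image identification
BODY="Evidence1-bodyfile"
RANGE="2012-04-02..2012-04-07"     # mactime date-range filter

steps() {
    # Each line is one step of the workflow, in order.
    cat <<EOF
fls -r -m C: $IMAGE > $BODY
vol.py -f $MEM --profile=$PROFILE timeliner --output=body --outputfile=Evidence1-timeliner.body
cat Evidence1-timeliner.body >> $BODY
mactime -d -b $BODY $RANGE > Evidence1-mactime-timeline.csv
grep -v -i -f whitelist.txt Evidence1-mactime-timeline.csv > Evidence1-mactime-timeline-final.csv
EOF
}

# Print the pipeline; pipe to `sh -eu` to execute it for real.
steps
```

Keeping the pipeline as printable text makes it easy to review and archive alongside the case notes before anything touches the evidence files.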

 


Supertimeline creation

The supertimeline goes beyond the traditional filesystem timeline, which is based on metadata extracted from acquired images, by extending it with additional sources and artifacts that provide valuable information to the investigation.

The technique was published in June 2010 in the SANS Reading Room, in a paper by Kristinn Gudjonsson as part of his GCFA Gold certification.

Three simple steps, starting from an E01 dump:

  1. Gather timeline data into a Plaso storage file:
    log2timeline.py plaso.dump Evidence1.E01
  2. Filter the timeline using psort.py:
    psort.py -z "UTC" -o l2tcsv plaso.dump "date > '2012-04-03 00:00:00' AND date < '2012-04-07 00:00:00'" -w plaso.csv
  3. Optionally, filter the data using grep and a whitelist:
    grep -v -i -f whitelist.txt plaso.csv > supertimeline.csv
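As with the traditional timeline, the three steps can be sketched as a small script. The file names and date range are the example values from the steps above, and whitelist.txt is assumed to be the analyst's own list of known-good entries. By default it only prints the commands; pipe the output to sh -eu where log2timeline.py and psort.py are installed.

```shell
#!/bin/sh
# Sketch of the supertimeline workflow described above.
# Example file names and date range; whitelist.txt is an assumption.
set -eu

IMAGE="Evidence1.E01"
STORE="plaso.dump"                 # Plaso storage file
START="2012-04-03 00:00:00"
END="2012-04-07 00:00:00"

steps() {
    # Each line is one step of the workflow, in order.
    cat <<EOF
log2timeline.py $STORE $IMAGE
psort.py -z "UTC" -o l2tcsv $STORE "date > '$START' AND date < '$END'" -w plaso.csv
grep -v -i -f whitelist.txt plaso.csv > supertimeline.csv
EOF
}

# Print the pipeline; pipe to `sh -eu` to execute it for real.
steps
```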

 

In the next article I will propose my method for timeline analysis.

