Blog: DFIR
Digital Forensics acquisition
Introduction
Digital Forensics (DF) acquisition is changing rapidly, driven by guidance from SANS, NIST and the OS community. Ultimately, forensics needs to become faster and more effective.
Part of the answer comes from proper acquisition techniques and from knowing more efficient ways to extract the data that will yield the most value in our cases.
Gathering triage data is not new; it has been around for many years. However, most DF analysts are still stuck in a dead-box-only approach, which is fine for some rare cases and for hard drives no larger than 500 GB. Storage, though, is fast growing to 1 TB, 2 TB and beyond.
I’m also a fan of taking full images once triage information has been collected, as a backup source of data. Bear in mind that 96% of the data acquired in a disk image is not relevant to an investigation.
Host details
- Windows 10
- 2 GB RAM
- 125 GB HDD
Pre-2022 Forensics model
FTK Imager capture of RAM and compression: 45 minutes
FTK Imager capture of disk and compression: 2 hours
Arranging time with the client, with minimal disruption, to send us the data: 1 hour
Transfer of data: 3-5 hours
Triaging the required evidence and artifacts from the data: 3 hours
Analysing the data: X days
This model relies heavily on the client's participation and time in the forensic collection; the alternative is to wait X hours or days until an investigator arrives on site.
Our 2022 Forensics model
S3 buckets are spun up via a script, with API-only access. This is a hardened/secured setup with two buckets (one write bucket for collected artifacts, one read bucket for the tools), no public access and an expiry time set in days. (15m)
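As a rough illustration, the provisioning step might look something like the sketch below, driving the AWS CLI from PowerShell. The bucket names, region, case ID and seven-day expiry are placeholder values rather than our actual setup, and the IAM policy that scopes the write/read API keys to each bucket is not shown.

```powershell
# Sketch only: provision the two hardened buckets with the AWS CLI.
# Bucket names, region and the 7-day expiry are illustrative placeholders.
$Region   = "eu-west-2"
$CaseId   = "case-001"
$Evidence = "ptp-$CaseId-evidence"   # write bucket for collected artifacts
$Tools    = "ptp-$CaseId-tools"      # read bucket holding the toolkit

foreach ($Bucket in @($Evidence, $Tools)) {
    # Create the bucket and block all public access
    aws s3api create-bucket --bucket $Bucket --region $Region `
        --create-bucket-configuration "LocationConstraint=$Region"
    aws s3api put-public-access-block --bucket $Bucket `
        --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

    # Expire every object after a set number of days so the bucket self-cleans
    $Lifecycle = '{"Rules":[{"ID":"expiry","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":7}}]}'
    aws s3api put-bucket-lifecycle-configuration --bucket $Bucket --lifecycle-configuration $Lifecycle
}
```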
A PowerShell script is provided to the client, which can be left to run with no intervention needed. (30m)
The script is triggered and performs the actions below, completing in 33m (a rough sketch of the whole flow follows the list):
- Maps the S3 buckets via the API
- Downloads the PTP forensic toolkit, which includes:
  - Magnet RAM imaging tool
  - 7-Zip
  - KAPE CLI (first disk triage collection tool)
  - CyberTriage CLI (second disk triage collection tool, with AI and YARA built in)
- Runs the collections
- Zips the evidence
- Sends evidence back to S3
- Removes our files and tools from the host
- Writes an audit log back to S3
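Here is a heavily simplified sketch of that flow. It is not the production script: bucket names, paths, executable names and tool switches are illustrative assumptions (check the Magnet, KAPE and CyberTriage documentation for the exact CLI arguments), and credential handling for the API mapping step is omitted.

```powershell
# Sketch only: the shape of the client-side collection script.
# Bucket names, paths and tool switches are illustrative placeholders.
$Evidence = "ptp-case-001-evidence"   # write bucket for collected artifacts
$Tools    = "ptp-case-001-tools"      # read bucket holding the toolkit
$Work     = "C:\PTP"
New-Item -ItemType Directory -Path "$Work\out" -Force | Out-Null

# Pull the toolkit down from the read bucket via the API
aws s3 cp "s3://$Tools/toolkit.zip" "$Work\toolkit.zip"
Expand-Archive -Path "$Work\toolkit.zip" -DestinationPath "$Work\tools" -Force

# Run the collections: RAM image first, then the disk triage tools
& "$Work\tools\MagnetRAMCapture.exe" /accepteula /go /silent          # exe name and switches assumed; verify with Magnet docs
& "$Work\tools\kape.exe" --tsource C: --tdest "$Work\out\kape" --target KapeTriage
# The CyberTriage CLI collector would run here; its arguments depend on the licensed build

# Zip the evidence and send it back to the write bucket
& "$Work\tools\7z.exe" a "$Work\$env:COMPUTERNAME-evidence.7z" "$Work\out\*"
aws s3 cp "$Work\$env:COMPUTERNAME-evidence.7z" "s3://$Evidence/"

# Write an audit log back to S3, then clean our files and tools off the host
"Collection completed $(Get-Date -Format o) on $env:COMPUTERNAME" | Out-File "$Work\audit.log"
aws s3 cp "$Work\audit.log" "s3://$Evidence/audit/"
Remove-Item -Recurse -Force $Work
```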
Analysing the data: X days
Evidence as seen in the S3 bucket:
Here’s a video of the script running:
The idea for this came from a script developed by the talented @dwmetz.
The original script is 90 lines long. Our adaptations allow the upload to go to S3 instead of a network share and add our tool CyberTriage, meaning the script is now 138 lines of beautiful automated forensic work; we will develop it further as we build up our tool set.
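To illustrate the nature of that change (the UNC path and bucket name below are placeholders, not values from either script), the upload step shifts from a network-share copy to an S3 put:

```powershell
# Upload to a network share (the original approach, shown with a placeholder UNC path)
Copy-Item -Path "C:\PTP\evidence.7z" -Destination "\\dfir-server\collections\"

# Upload to the write S3 bucket instead (our adaptation; placeholder bucket name)
aws s3 cp "C:\PTP\evidence.7z" "s3://ptp-case-001-evidence/"
```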
The script and video should be shared as an open-source project to support others, one we will always contribute to and welcome contributions for (probably via Git).