
Investigating volatile data with advanced memory forensics tools – part 1

Luke Davis 24 Oct 2024

TL;DR

  • Memory forensics enhances investigations by analysing volatile data (in RAM) unavailable in disk forensics.
  • Key insights from memory include running processes, network connections, encryption keys, and user activity, vital for real-time investigations.
  • Smaller memory images (4-32 GB) offer faster analysis compared to large disk images (250+ GB).
  • Critical artifacts like malware, passwords, encryption keys, and user command history are often found in memory but only sometimes on disk.
  • Tools like Volatility 2 and Volatility 3 are crucial for parsing memory dumps, with Volatility 3 offering improved performance and accuracy.
  • Volatility 2 remains popular due to its extensive plugin ecosystem and established workflows, while Volatility 3 excels in modern OS support and symbol-based analysis.
  • Combining Volatility 2 and 3 ensures comprehensive and reliable memory forensics across different systems and datasets.
  • Automation can streamline investigations, reducing analysis time and improving client response.

Introduction

In this two-post series I want to highlight how memory forensics plays a crucial role in enhancing forensic investigations, specifically by providing access to volatile data that cannot be retrieved from storage devices like hard drives.

By analysing a system’s RAM, investigators can uncover crucial evidence such as running processes, network connections, encryption keys, and user activity that occurred in real-time before the system was powered off. This allows for the recovery of hidden or deleted data, detection of malware, and identification of unauthorized access, thereby complementing traditional forensic techniques and providing a more comprehensive understanding of cyber incidents and criminal activities.

Whilst this blog does not intend to go into detail on the most popular tools available to analyse memory, nor to provide a deep dive into analysis techniques, it is intended to give high-level information about some significant enhancements to memory forensics in the last few years and the differences in tooling. It also covers only three memory forensics tools; many others are available.

Why is memory forensics so important during an investigation?

Before we delve into the different ways to analyse memory using forensic techniques, it's a good idea to understand how memory aids forensic investigators.

Traditional disk forensic analysis is moving towards smaller and smaller triage collection images for rapid analysis, predominantly gathered before waiting for large disk images to be transferred and processed. Memory images are often far smaller still, typically between 4 and 32 GB, in comparison to the smallest hard disks in common use, which are around 250 GB in size.

Memory forensics can reveal several unique artifacts that are not typically found in disk forensics, or that carry additional data beyond what disks hold (such as timestamps), because it deals with volatile data that exists only in a system's RAM.

Here are some key artifacts that can be found in memory forensics:

Running processes and threads
Memory forensics allows investigators to analyse the list of currently running processes and threads, including hidden or malicious processes that may not be visible on the disk.

Active network connections
Information about active network connections, including IP addresses, open ports, and communication sessions, can be found in memory. This is crucial for understanding real-time network activity, such as ongoing attacks or unauthorised data exfiltration.

Encryption keys and decrypted data
Encryption keys and decrypted data, which may only exist in memory while a program is running, can be captured during a memory dump. This is essential for accessing encrypted files or communications that remain inaccessible through disk forensics.

In-memory malware and rootkits
Malware and rootkits that operate entirely in memory (fileless malware) may leave no traces on the disk. Memory forensics can detect these threats by analysing suspicious code, injected DLLs, or abnormal memory behaviour.

User activity and interaction
Memory can hold information about user activity, such as open files, clipboard contents, typed commands, or form inputs, which may not be saved to disk. This is helpful for reconstructing a timeline of user actions.

Passwords and authentication tokens
Plaintext passwords, authentication tokens, and session credentials used by applications and operating systems can often be extracted from memory, providing access to protected systems and accounts.

Volatile system artifacts
Temporary data such as the contents of system caches, volatile registry entries, and data structures like the clipboard and volatile environment variables are often only present in memory and not written to disk.

Loaded drivers and kernel modules
Memory forensics can reveal the currently loaded device drivers and kernel modules, including any malicious or unauthorised code that could be used for privilege escalation or system compromise.

Command history
Command-line history, especially for commands executed in live sessions (e.g., from the terminal or command prompt), can be found in memory even if it is not written to disk logs.

System and application state
The current state of the operating system and running applications, including unsaved documents, open network sessions, and in-progress tasks, can be captured, providing a snapshot of the system’s activity at a specific moment in time.

These artifacts, often transient and not saved to disk, make memory forensics an invaluable tool for uncovering evidence that disk forensics alone might miss.

Forensic analysis of RAM

For my forensic analysis lab, I have a Windows 10 VM with 8 GB of RAM. During my analysis I will be walking through some of the different tools used to process and analyse the memory, looking at some of their limitations and benefits.

I have executed a malicious file that I created some years ago, which has the hash value below. For this analysis I have changed a number of things, such as the IP address it makes a GET request to:

b505a07e2c29d2ac37dc5fe55c26ccd62e838ca9a12fdb26c7b35b9b3b30982d

VirusTotal – File – b505a07e2c29d2ac37dc5fe55c26ccd62e838ca9a12fdb26c7b35b9b3b30982d

Tool: Volatility

Volatility is a popular memory forensics framework used for analysing memory dumps. The release of Volatility 3 introduced several significant changes and improvements over Volatility 2. Here's a breakdown of the key differences between Volatility 2 and Volatility 3.

Architecture and Codebase:

  • Volatility 2 is written in Python 2.7, which reached its end-of-life in 2020. Its codebase is structured around plugins and relies on heuristics and signatures for parsing memory structures.
  • Volatility 3 is rewritten in Python 3, which provides better support for modern systems and ongoing updates. The new architecture is more modular and cleaner, focusing on a more object-oriented design. It uses a more robust and formalised approach to memory analysis, emphasising the use of symbols (debugging information) for accurate memory parsing.

Tool: Volatility 2

To obtain a high-level overview of the memory sample we are analysing, which is needed to run plugins correctly against the sample file, we use the imageinfo command. This command is primarily used to identify key details like the operating system version, service pack, and hardware architecture (32-bit or 64-bit).

Additionally, it provides other valuable information, such as the DTB (Directory Table Base) address and the timestamp indicating when the sample was captured. It may also be necessary to use additional scans to positively identify the correct profile and the correct KDBG address; however, we will stick with the output from imageinfo.
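As a rough sketch of the invocation (assuming a standalone Volatility 2 checkout with vol.py, and a sample file named memdump.raw; both names are hypothetical):

    # Identify the likely profile, DTB/KDBG addresses and capture time of the sample
    python2 vol.py -f memdump.raw imageinfo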

One thing to note is the time taken to perform this profile identification: on an 8 GB sample file, it took just over three hours to complete. However, this varies based on the processing computer and the size of the sample file.

Output from imageinfo vol2:

Now, prior to carrying out some analysis, it's a good idea to understand more about the memory we are processing. We can do this by analysing the memmap and the Virtual Address Descriptor, commonly known as the "VAD"; however, these will not be covered in this blog.

Now that we have the profile, we can start analysing the memory. Here are some examples of processing memory with vol2:
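For instance, two of the most common plugins might be run as follows. The Win10x64_19041 profile here is hypothetical; substitute whichever profile your imageinfo output suggested:

    # List running processes from the kernel's process list
    python2 vol.py -f memdump.raw --profile=Win10x64_19041 pslist

    # Scan memory for network connections and sockets
    python2 vol.py -f memdump.raw --profile=Win10x64_19041 netscan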

Output from netscan vol2:

Registry hives

Volatility has the ability to carve the Windows registry data. However, the processing can take some time, as you need to find the physical addresses of the CMHIVEs. To locate the virtual addresses of registry hives in memory, and the full paths to the corresponding hives on disk, we use the hivelist command. Once you have traversed to the key of interest, you can then print its contents.
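A hedged sketch of that workflow, reusing the hypothetical dump name and profile from earlier (the Run key is just an illustrative target):

    # List registry hives with their virtual offsets and on-disk paths
    python2 vol.py -f memdump.raw --profile=Win10x64_19041 hivelist

    # Print the contents of a key of interest with the printkey plugin
    python2 vol.py -f memdump.raw --profile=Win10x64_19041 printkey -K "Software\Microsoft\Windows\CurrentVersion\Run"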

Output from hivelist vol2:

It should also be noted that several plugins are available to support finding information in memory and dumping artifacts.

From looking at Volatility's help page we can see a number of plugins, such as netscan, that can be used to help us process the memory further; however, we will leave Volatility 2 here.

Output from help page vol2:

Tool: Volatility 3

When we take a look at the plugins available for Volatility 3, we can see they follow a different structure:
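Plugins are now namespaced by operating system, so vol2's netscan becomes windows.netscan, with equivalents such as linux.pslist for other platforms. Listing them is as simple as asking for help (again assuming a vol.py entry point):

    # Volatility 3 lists the available plugins as part of its help output
    python3 vol.py --help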

Output from help page vol3:

Furthermore, you may have noticed that we can start the analysis straight away, without waiting for the dreaded imageinfo plugin to complete as we had to with Volatility 2.

In Volatility 2, the imageinfo command is necessary because it helps identify critical details about the memory sample, such as the operating system version, service pack, and hardware architecture (32-bit or 64-bit). One of the key functions of imageinfo is to locate and extract the KDBG (Kernel Debugger Block), a crucial structure in Windows memory that helps Volatility 2 parse and analyse the memory dump accurately. Without this information, many plugins in Volatility 2 would not know how to correctly interpret the memory layout, leading to potential errors or incomplete analysis.

In Volatility 3, however, the need to manually search for the KDBG structure has been largely eliminated thanks to the framework's symbol-based analysis approach. Volatility 3 retrieves and uses debugging symbols to understand the memory layout and operating system structures more accurately.

This allows it to adapt more flexibly to different OS versions and configurations without requiring manual identification of key structures like KDBG. The reliance on symbols instead of static signatures means that Volatility 3 can automatically parse the necessary information, streamlining the analysis process and reducing the need for preliminary steps like running imageinfo.

Back to the analysis…

Once we trigger a plugin of interest against the sample file, we are notified of the progress state as shown below:
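The invocation itself is a one-liner; note there is no profile flag, and the dump name is again hypothetical:

    # Scan for network connections; progress is reported while the scan runs
    python3 vol.py -f memdump.raw windows.netscan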

Progress output from netscan vol3:

The output is very much the same as with Volatility 2.

Output from netscan vol3:

Now, whilst processing the sample file and scanning for files inside, I was able to identify some suspicious-looking svchost files, or at least a file attempting to masquerade as svchost.exe while running from an unexpected directory. Svchost.exe is a Windows process that hosts services from DLL files. As a system program, svchost.exe is located in the system folder Windows\System32. This is a protected folder that cannot be accessed by users who do not have administrator privileges.
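As a sketch, that hunt can be as simple as filtering the filescan output for the filename in question:

    # List file objects found in memory and filter for svchost entries
    python3 vol.py -f memdump.raw windows.filescan | grep -i svchost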

Output from filescan vol3:

From here we can continue to analyse the memory, or pivot to disk to analyse the user's shellbags and the MFT for further analysis. In the following blog post in this series, we will move on to another tool which has been pivotal in my forensic investigations over the last couple of years.

Before we move on to said tool, a final thought on the Volatility framework. Take this with a pinch of salt: Volatility 3 is superior to its predecessor, Volatility 2, due to its modern architecture, written in Python 3, which offers better performance and future-proofing. It introduces symbol-based analysis, allowing for more accurate and reliable parsing of memory structures, particularly in newer and more complex operating systems.

Volatility 3 also provides enhanced support for modern OS versions, faster processing, and a more modular design, making it easier to extend and maintain. Additionally, its user-friendly interface and improved output consistency contribute to a better overall user experience, making it a more powerful tool for memory forensics.

However, where analysis can still be supported by Volatility 2, investigators, myself included, continue to use a combination of Volatility 2 and 3. Here are some reasons why:

  • Mature Plugin Ecosystem: Volatility 2 has a larger and more mature plugin ecosystem. Many forensic analysts rely on the wide range of plugins available in Volatility 2, which may not yet be fully ported or available in Volatility 3.
  • Familiarity and Established Workflow: Volatility 2 has been around for a long time, and many professionals have developed workflows, scripts, and automation around it. Transitioning to Volatility 3 may require time to adapt and reconfigure existing processes.
  • Community Support and Documentation: Volatility 2 has extensive community support, documentation, and resources available online. While Volatility 3 is growing, the wealth of tutorials, guides, and community knowledge around Volatility 2 can make it easier for some users to stick with the older version.
  • Specific Use Cases: Some use cases or older memory dumps may be better supported in Volatility 2 due to the specific plugins or compatibility that hasn’t been fully integrated into Volatility 3 yet.

In summary, for the Volatility framework, I generally use both tools in practice, for different objectives and to corroborate my findings.

Using both versions of Volatility in practice can provide several benefits:

Volatility 2 has a wider range of plugins, some of which may not yet be available or fully developed in Volatility 3. By using both versions, analysts can access the full suite of tools for different types of analysis, ensuring they don’t miss out on any specific functionality.

Volatility 2 and Volatility 3 handle different operating systems and memory structures with varying degrees of success. Volatility 3 excels with newer OS versions and complex structures due to its symbol-based analysis, while Volatility 2 might perform better with older systems or legacy formats. Using both ensures broader compatibility across different memory dumps.

Running analysis on both versions allows for cross-verification of findings. If one version struggles with a particular dataset or produces uncertain results, the other can provide a second perspective, increasing the reliability of the investigation.

In some cases, Volatility 2 might offer faster or more straightforward analysis for simpler tasks, while Volatility 3 is better suited for more detailed or modern memory structures. Having both versions available offers flexibility, allowing analysts to choose the most efficient tool for the specific task at hand.

For teams transitioning from Volatility 2 to Volatility 3, using both versions helps ease the learning curve. Analysts can continue using familiar workflows in Volatility 2 while gradually adopting Volatility 3’s new features and capabilities, reducing disruptions in the investigation process.

By combining both versions, forensic investigators can maximise their analytical capabilities, ensuring thorough and accurate memory analysis across a wide range of scenarios.

Summary

Using Volatility 2 and Volatility 3 together in investigations can enhance the depth and accuracy of memory forensics. This way, we can leverage the extensive plugin library of Volatility 2 and the modern, symbol-based analysis of Volatility 3. This combined approach ensures comprehensive coverage across different operating systems and memory structures, allowing you to cross-verify findings and achieve more robust forensic results.

At PTP we automate as much analysis as possible; significant automation can be built around these tools to streamline investigations, speed up results, and reduce client wait times. By scripting common tasks, such as running multiple plugins, generating reports, and automatically parsing key memory structures, analysts can automate repetitive processes, reducing manual effort.

Integration with orchestration frameworks or custom-built scripts can further automate memory dump collection, analysis, and reporting, enabling faster turnaround times while maintaining accuracy. This approach not only enhances productivity but also ensures consistency in forensic analysis, ultimately delivering quicker and more reliable results for our clients.
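As a minimal sketch of that idea (all names are hypothetical and this is not our production tooling), a small wrapper script can run a set of Volatility 3 plugins against a dump and write each plugin's output to a file for review:

    #!/bin/bash
    # run_vol3.sh - run a set of Volatility 3 plugins against a memory dump
    # Usage: ./run_vol3.sh memdump.raw results_dir
    DUMP="$1"
    OUTDIR="$2"
    mkdir -p "$OUTDIR"

    for plugin in windows.info windows.pslist windows.netscan windows.filescan windows.cmdline; do
        echo "[*] Running ${plugin} against ${DUMP}"
        python3 vol.py -f "$DUMP" "$plugin" > "${OUTDIR}/${plugin}.txt"
    done

    echo "[*] Finished: results written to ${OUTDIR}"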

In the next post we will delve into MemProcFS – “Mounting Memory? – This changes everything!”