I recently shared a command on Twitter and asked folks if they thought this was something fishy. I want to take this opportunity to walk you through the steps that a threat hunter takes in day-to-day operations. This includes formulating a hypothesis, developing a query, and conducting an investigation.
Below is the poll I shared on Twitter and the final results, which show that the majority of respondents thought this activity was malicious.
"cmd.exe" /d /c "C:\Users\<user>\AppData\Roaming\cmk.exe /d /c whoami"
In the comments, most people shared that the command and the surrounding context could be malicious and warranted a closer look. Although I wrote up a detailed response highlighting the steps I took to investigate this, I decided to document everything through a blog post so everyone could use it as a reference, as not everyone has a Twitter account.
My Investigative Process
In my case, it all started with a threat hunt in which I wanted to look for the execution of renamed Windows binaries from abnormal locations. My query looked for command-line interpreters and other binaries running outside the default C:\Windows system directories. I also hypothesized that the binary would be renamed. The query looked something like this:
index=myindex Process_Name NOT IN ("cmd.exe", "powershell.exe", "pwsh.exe")
AND Original_Filename IN ("cmd.exe", "powershell.exe", "pwsh.exe")
| regex cmdline="c:\\users\\[^\\]+\\[^\\]+\.exe"
Note that you might find a lot of false positives with this query, and you might need to adjust it for your own environment. You might also need to apply some grouping or other data analysis techniques to the results. This is just an example based on SPL for demonstration purposes only.
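For readers who don't use Splunk, the same hunting logic can be sketched in plain Python. The event field names below (process_name, original_file_name, process_path) are assumptions modeled on common EDR telemetry, not a real schema:

```python
import re

# Interpreters we expect to see in the PE "OriginalFilename" version field.
INTERPRETERS = {"cmd.exe", "powershell.exe", "pwsh.exe"}
# Executables launched from a user profile directory.
USER_PATH = re.compile(r"(?i)^c:\\users\\[^\\]+\\.*\.exe$")

def is_suspicious(event):
    """Flag a renamed command-line interpreter running from a user directory."""
    name = event["process_name"].lower()
    original = event["original_file_name"].lower()
    path_ok = USER_PATH.match(event["process_path"]) is not None
    # Internally an interpreter, but its on-disk name was changed,
    # and it runs from a user-writable path.
    return original in INTERPRETERS and name != original and path_ok

events = [
    {"process_name": "cmk.exe", "original_file_name": "Cmd.Exe",
     "process_path": r"C:\Users\alice\AppData\Roaming\cmk.exe"},
    {"process_name": "cmd.exe", "original_file_name": "Cmd.Exe",
     "process_path": r"C:\Windows\System32\cmd.exe"},
]
hits = [e["process_name"] for e in events if is_suspicious(e)]
print(hits)  # ['cmk.exe']
```

As with the SPL version, expect false positives from legitimate software that ships renamed interpreters; the point of the hunt is to surface candidates for triage.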
I came across this command during the above threat hunt. I will now review my steps to investigate and confirm or disprove my hypothesis.
Step 0: What We Know
The command line displays a renamed cmd.exe binary running the whoami command.
cmk.exe: This is the renamed cmd.exe. Threat actors often rename system binaries and copy them out of their designated system directory to evade detection by security tools that look specifically for processes named cmd.exe.
/d: This option disables the execution of AutoRun commands from the registry. Usually, when cmd.exe is started, it executes the commands specified under the AutoRun registry values, which can affect the behaviour of the command prompt or disrupt the execution of scripts. Using /d ensures a cleaner, more predictable environment by preventing these AutoRun commands from executing.
/c: This option carries out the command specified by the string and then terminates. In this context, it tells cmd.exe to execute the following command and then exit.
whoami: This is a standard Windows command-line command that displays the current user's name. This is useful for confirming the identity of the user context under which the command line is running.
We also know that the renamed binary is not signed. This is expected: many Windows system binaries, including cmd.exe, are catalog-signed rather than carrying an embedded signature, so tools that only check for an embedded signature may report a renamed copy as unsigned.
Step 1: Gathering Context
First, I needed to understand the environment in which this command was operating. This meant looking at a broader time frame to see if this event was a standalone or part of a pattern. I examined the system activities 5 minutes before and after the command execution. Think of it like looking at security camera footage — you want to see what happened before and after the main event to get the whole story.
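This windowing step is easy to automate. Below is a minimal sketch, assuming events carry a parsed timestamp; the entries here are illustrative, not real telemetry:

```python
from datetime import datetime, timedelta

def window(events, pivot_time, minutes=5):
    """Return events within +/- `minutes` of the pivot timestamp."""
    delta = timedelta(minutes=minutes)
    return [e for e in events if abs(e["time"] - pivot_time) <= delta]

# Pivot on the suspicious command's execution time.
pivot = datetime(2024, 1, 10, 12, 0, 0)
events = [
    {"time": datetime(2024, 1, 10, 11, 57), "cmd": "WmiPrvSE32.exe"},
    {"time": datetime(2024, 1, 10, 12, 0),  "cmd": "cmk.exe /d /c whoami"},
    {"time": datetime(2024, 1, 10, 12, 30), "cmd": "unrelated.exe"},
]
print([e["cmd"] for e in window(events, pivot)])
# ['WmiPrvSE32.exe', 'cmk.exe /d /c whoami']
```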
Step 2: Understanding Relationships
Then, I checked out the family tree of this process. What started cmk.exe? Looking into the parent process and its command line, we discover that the process was started by the WmiPrvSE32.exe binary upon system startup.
Another execution entry in this chain shows runonce.exe triggering the execution of the WmiPrvSE32.exe binary. This chain of events is typical for programs set to run when a computer boots up. Assumptions can speed up root cause identification, but always validate them. Below is the Run Keys registry entry.
Pay attention to the path of this execution. The binary carries a name resembling the known WmiPrvSE.exe Windows binary; however, it is located in an abnormal location: C:\Program Files (x86)\Karmasis\Infraskope Agent\WmiPrvSE32.exe, a directory used for installed applications on the host. Unlike the legitimate WmiPrvSE.exe, this binary also has "32" in its name. Such lookalike naming could trick an analyst into mis-triaging a similar alert.
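Lookalike names can also be surfaced programmatically by fuzzy-matching process names against a list of well-known Windows binaries. A rough sketch using Python's difflib; the binary list and the 0.8 threshold are arbitrary choices for illustration:

```python
from difflib import SequenceMatcher

# Well-known Windows binary names that attackers commonly imitate.
KNOWN = ["wmiprvse.exe", "svchost.exe", "lsass.exe", "explorer.exe"]

def lookalike(name, threshold=0.8):
    """Return the known binary this name appears to imitate, or None."""
    n = name.lower()
    if n in KNOWN:
        return None  # exact match: the real name, not a lookalike
    for known in KNOWN:
        if SequenceMatcher(None, n, known).ratio() >= threshold:
            return known
    return None

print(lookalike("WmiPrvSE32.exe"))  # wmiprvse.exe
```

A check like this only surfaces candidates; as this investigation shows, a lookalike name can still belong to legitimate software.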
Step 3: Verifying the Files
In step 2, we identified the binary responsible for this execution of cmk.exe. Examining the path's directories, it is evident that Karmasis is the root directory of the running application, WmiPrvSE32.exe. A quick Google search reveals that this is likely a legitimate program.
To verify this, I checked the hashes of these binaries against those of the legitimate software. The WmiPrvSE32.exe binary was indeed being used as intended and is part of legitimate software. Although I might not understand the ins and outs of this application, I now have a better understanding of the legitimacy of the files involved in the execution.
While the software in question is legitimate, we must ensure it is authorized to run on the host. To do so, I searched the entire environment for comparable behaviour from the same software on other hosts. I also cross-checked the list of authorized applications to confirm that this software is permitted on the network.
Some quick questions I set out to answer were:
- Is the Infraskope Agent legitimate? — Yes
- Who installed it, and when? — Unknown/Irrelevant at this point
- Is it approved for use in our environment? — Yes
- Is it running on other machines? — Yes
- Is there any odd network activity connected to it? — No
Step 4: Documentation
After identifying and verifying the files in question, the next crucial step is documentation. This process is essential in threat hunting: it not only creates a record of the investigation but also captures insights into the environment that prevent identical investigations in the future.
Creating Comprehensive Documentation
- Detail the Findings: Start by outlining the initial suspicion or anomaly that triggered the investigation. This should include the specific command or process that was under scrutiny. In our case, it was the "cmk.exe" process.
- Chronology of Events: Provide a timeline of events. This should cover the discovery of the suspicious activity, the steps taken during the investigation, and the conclusion reached.
- Analysis and Interpretation: Document your analysis process. Explain how you interpreted the data, the tools used, and the rationale behind your conclusions. In our scenario, this would involve detailing how the query was developed, the relationship between different processes, and verifying the legitimacy of the files.
- Visual Aids: Include screenshots, flow charts, or other visual aids to help understand the sequence of events and the analytical process.
Ensuring Clarity and Accessibility
- Use Accessible Language: While your audience might be familiar with technical terms, ensure the report is written in a manner that is easy to understand. Avoid excessive jargon. This document may be viewed by others in your team, so keep it clear and concise.
- Highlight Key Points: Use bullet points or other formatting tools to emphasize important information, such as the indicators of compromise (IoCs) or any unique tactics, techniques, and procedures (TTPs) identified.
Archiving and Sharing
- Share with Relevant Teams: Distribute the report to relevant teams within your organization. This could include the security operations center (SOC), incident response teams, and IT management.
- Feedback Loop: Encourage feedback from other teams. Their insights might provide additional context or help refine future threat hunting processes.
- Update Organizational Knowledge Base: Add your findings to the organizational knowledge base. This can be a valuable resource for future investigations and aid in training new team members.
- Lessons Learned: Reflect on what was learned during the investigation. This should include any gaps in security identified and recommendations for addressing them.
- Update Policies and Procedures: Based on the investigation's outcomes, update your organization's policies, procedures, and security controls as necessary.
- Training and Awareness: Use the findings to inform training programs. This can help raise awareness about new threats and reinforce best practices among staff.
- Creating a Detection Rule: The first step in developing a detection rule is to refine the broad hunting query by pinning down the specific tactics, techniques, and procedures (TTPs) that matched the initial hypothesis. The aim of this exercise is to turn the initial query into a reliable detection rule.
Consider incorporating contextual information such as execution location, file paths, and process parent-child relationships.
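Putting those signals together, the refined rule logic might look like the sketch below: alert on a renamed interpreter launched from a user-writable path, excluding parent processes validated during the investigation. Field names and the allowlist are assumptions; in practice this would be expressed in your SIEM's query language:

```python
import re

INTERPRETERS = {"cmd.exe", "powershell.exe", "pwsh.exe"}
USER_WRITABLE = re.compile(r"(?i)^c:\\users\\")

# Parents confirmed benign during past investigations (tune per environment).
ALLOWLISTED_PARENTS = {
    r"c:\program files (x86)\karmasis\infraskope agent\wmiprvse32.exe",
}

def detect(event):
    """Alert on a renamed interpreter in a user path, unless its parent is allowlisted."""
    if event["parent_path"].lower() in ALLOWLISTED_PARENTS:
        return False  # context from this hunt: known benign launcher
    return (
        event["original_file_name"].lower() in INTERPRETERS
        and event["process_name"].lower() != event["original_file_name"].lower()
        and USER_WRITABLE.match(event["process_path"]) is not None
    )

benign = {"process_name": "cmk.exe", "original_file_name": "Cmd.Exe",
          "process_path": r"C:\Users\alice\AppData\Roaming\cmk.exe",
          "parent_path": r"C:\Program Files (x86)\Karmasis\Infraskope Agent\WmiPrvSE32.exe"}
suspect = dict(benign, parent_path=r"C:\Users\alice\AppData\Roaming\loader.exe")
print(detect(benign), detect(suspect))  # False True
```

Allowlisting by full parent path is one of several possible tuning choices; hashes or signer information would be more robust where available.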
In this blog post, we closely examined a threat hunter's investigation of a suspicious command encountered during an ordinary threat hunt. The comprehensive investigation, which included dissecting the command, understanding its origin, and examining its behaviour, highlights how crucial it is not to rush to conclusions.
One of the critical lessons from this experience is the significance of context. Something that seems out of place might not necessarily be malicious; it could simply be unfamiliar. This little investigation serves as a great example of this principle. It's a reminder that in the security realm, a thorough understanding of the context is essential before judgment. This is one reason detection engineering is challenging, especially when developing rules for edge cases.
This blog aims to highlight the detailed and proactive nature of threat hunting and the challenges and pitfalls you may encounter daily. I hope you find it helpful, and keep an eye out for the next one by following me on Medium and Twitter as I aim to publish more insights like this one.