The Indirect Realism of Threat Research
Metaphors for interpreting malicious activity in cyberspace
When investigating malicious cyber activity - whether in the context of threat intelligence, threat hunting, or detection engineering - we must keep in mind that we’re not observing said activity directly, but rather interpreting its effects on reality through a particular lens composed of two parts:
1. The quality and quantity of our data - the breadth and depth of telemetry to which we have access (network traffic logs, runtime detection alerts, file samples, cloud control plane logs, SaaS platform logs, etc.), our sources of metadata enrichment (such as VirusTotal or Shodan), and our corpus of past incidents.
2. Our conceptual framework - our assumptions about how attackers operate, their motivations, their goals, their tooling, and how their behavior might express itself in our data.
In other words, our perception of any given malicious activity is always indirect and subjective, as it depends on the parameters of our working environment: our collection capabilities, our analysis tools, our methodology; our clientele, their sector, their geography, their platforms, their hardware, their software; the actors targeting them, and many other variables.
Every vendor, government agency, research group, and independent researcher might view the same malicious activities through their own porthole, analyzing a unique cross-section of cyberspace. In the words of Kurt Vonnegut:
[Billy was] strapped to a steel lattice which was bolted to a flatcar on rails, and there was no way he could turn his head or touch the pipe. The far end of the pipe rested on a bi-pod which was also bolted to the flatcar. All Billy could see was the little dot at the end of the pipe. He didn’t know he was on a flatcar, didn’t even know there was anything peculiar about his situation. The flatcar sometimes crept, sometimes went extremely fast, often stopped - went uphill, downhill, around curves, along straightaways. Whatever poor Billy saw through the pipe, he had no choice but to say to himself, “That’s life”.
As for threat actors, we must always recall that they’re real people - albeit working within constraints defined by organizations, bureaucracies and social norms - meaning they exist across all aspects of reality. As they traverse the various media of cyberspace, they leave traces that we might detect in the course of our monitoring and investigation.
A hacker pressing a key on their keyboard in a cubicle somewhere in Moscow or Beijing may eventually and indirectly cause a new row to be recorded in one of our telemetry databases, but that can hardly be said to be the full picture. Our task is to interpret such signals as more than mere shadow puppetry, and to deduce what other signals may exist beyond our line of sight.

Our ability to observe any given type of evidence depends on the sophistication and configuration of our instrumentation, and collecting that evidence requires pointing our devices at different “layers” of our subject matter. This is analogous to using a combination of infrared, X-ray and radio telescopes in astronomy, while also operating robotic probes to retrieve exotic materials from asteroids and employing chemical analysis techniques such as gas chromatography to determine their composition.
At our weakest and most solitary, threat researchers are akin to flatlanders: two-dimensional creatures striving to make sense of the chaotic goings-on of multi-dimensional space. Moreover, we find ourselves contending with malevolent higher beings that appear to span the unknowable infinitude of cyberspace.

For example, a resourceful attacker targeting our organization as a whole is likely to elude our comprehension if we’re overly focused on network traffic analysis while ignoring their activity on endpoint and mobile devices or across cloud and SaaS platforms. All the more so if we wrongly assume that network traffic is all there is.
But all is not lost - we can invest in intelligence to guide our efforts (and product roadmap), join forces with others working towards the same goal (each bringing their own unique perspective), expand our telemetry collection apparatus, and point it in the most promising directions. So long as we know more or less what we’re looking for and have a well-trained nose for novelty, we can jointly overcome our individual sensorial limitations and more accurately perceive the motion of our adversaries on the wire.

I’ve found that methodically monitoring for apparent “plot holes” is an effective way of revealing missing signals (false negatives) - indications of hidden aspects of malicious activity that we’re not fully perceiving, taking place in higher dimensions, so to speak.
For instance, if we detect a machine in our environment communicating with a known C2 server, yet we see no file detections that would otherwise indicate the presence of malware, we can immediately deduce that we’re missing something.
Similarly, a compromised server with no prior risk detections (assuming we’re scanning for such things) might indicate a potential 0-day vulnerability affecting the software installed on it. This technique can be automated, and is applicable to both threat hunting and detection engineering.
The next step in such cases is to determine whether we need to adjust existing detection rules, add new ones, or perhaps collect a brand new type of telemetry, essentially expanding the scope of our research into yet another dimension of cyberspace (fun!).
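To make the automation idea concrete, here is a minimal sketch of a "plot hole" check: cross-referencing network-level C2 alerts against endpoint file detections and flagging hosts where the former exists without the latter. The record schemas, field names, and sample values are all hypothetical illustrations, not any particular product's telemetry format.

```python
# Hypothetical, simplified telemetry records for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkAlert:
    host: str      # internal host that initiated the connection
    dest_ip: str   # destination matched against known C2 infrastructure

@dataclass(frozen=True)
class FileDetection:
    host: str      # host on which a suspicious file was observed
    sha256: str    # file hash flagged by endpoint tooling

def find_plot_holes(network_alerts, file_detections):
    """Return hosts that contacted known C2 servers but produced no
    corresponding file detections - likely false negatives pointing
    at gaps in telemetry or detection coverage."""
    hosts_with_files = {d.host for d in file_detections}
    return sorted({a.host for a in network_alerts} - hosts_with_files)

alerts = [NetworkAlert("ws-042", "203.0.113.7"),
          NetworkAlert("srv-db-1", "198.51.100.9")]
files = [FileDetection("ws-042", "ab" * 32)]

print(find_plot_holes(alerts, files))  # ['srv-db-1']
```

A real implementation would run as a scheduled query joining the relevant telemetry tables, but the essence is the same: each hit is a host whose story doesn’t add up, and therefore a lead worth investigating.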

