How Much Can Companies Know About Who’s Behind Cyber Threats?

Deciding when and how to publicly link suspicious activity to a specific organization, government, or individual is a challenge that governments and many companies face. Last year, we said the Russia-based Internet Research Agency (IRA) was behind much of the abuse we found around the 2016 election. But today we’re shutting down 32 Pages and accounts engaged in coordinated inauthentic behavior without saying that a specific group or country is responsible.

The process of attributing observed activity to particular threat actors has been much debated by academics and within the intelligence community. All modern intelligence agencies use their own internal guidelines to help them consistently communicate their findings to policymakers and the public. Companies, by comparison, operate with relatively limited information from outside sources — though as we get more involved in detecting and investigating this kind of misuse, we also need clear and consistent ways to confront and communicate these issues head on.

Determining Who Is Behind an Action

The first challenge is figuring out the type of entity to which we are attributing responsibility. This is harder than it might sound. It is standard for both traditional security attacks and information operations to be conducted using commercial infrastructure or computers belonging to innocent people that have been compromised. As a result, simple techniques like blaming the owner of an IP address that was used to register a malicious account usually aren’t sufficient to accurately determine who’s responsible.

Instead, we try to:

  • Link suspicious activity to the individual or group with primary operational responsibility for the malicious action. We can then potentially associate multiple campaigns to one set of actors, study how they abuse our systems, and take appropriate countermeasures.
  • Tie a specific actor to a real-world sponsor. This could include a political organization, a nation-state, or a non-political entity.

The relationship between malicious actors and real-world sponsors can be difficult to determine in practice, especially for activity sponsored by nation-states. In his seminal paper on the topic, Jason Healey described a spectrum to measure the degree of state responsibility for cyber attacks. This included 10 discrete steps ranging from “state-prohibited,” where a state actively stops attacks originating from their territory, to “state-integrated,” where the attackers serve as fully integrated resources of the national government.
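As an illustration only, Healey’s spectrum can be written as an ordered scale. The minimal Python sketch below does this with an enum; the four labels quoted in this post (“state-prohibited,” “state-encouraged,” “state-ordered,” “state-integrated”) come from the text above, while the intermediate names follow Healey’s published ten-step spectrum and are included purely as an aid, not as part of our assessment.

    from enum import IntEnum


    class StateResponsibility(IntEnum):
        """Healey's spectrum of state responsibility for cyber attacks (illustrative).

        Labels other than STATE_PROHIBITED, STATE_ENCOURAGED, STATE_ORDERED and
        STATE_INTEGRATED are taken from Healey's paper rather than this post.
        """
        STATE_PROHIBITED = 1                  # the state actively stops attacks from its territory
        STATE_PROHIBITED_BUT_INADEQUATE = 2
        STATE_IGNORED = 3
        STATE_ENCOURAGED = 4
        STATE_SHAPED = 5
        STATE_COORDINATED = 6
        STATE_ORDERED = 7
        STATE_ROGUE_CONDUCTED = 8
        STATE_EXECUTED = 9
        STATE_INTEGRATED = 10                 # attackers act as fully integrated state resources


    # The GRU-linked hacking described below sits at the top of the scale, while
    # public discussion generally places the IRA activity between these two levels:
    ira_range = (StateResponsibility.STATE_ENCOURAGED, StateResponsibility.STATE_ORDERED)
    gru_level = StateResponsibility.STATE_INTEGRATED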

This framework is helpful when looking at the two major organized attempts to interfere in the 2016 US election on Facebook that we have found to date. One set of actors used hacking techniques to steal information from email accounts — and then contacted journalists using social media to encourage them to publish stories about the stolen data. Based on our investigation and information provided by the US government, we concluded that this work was the responsibility of groups tied to the GRU, or Russian military intelligence. The recent Special Counsel indictment of GRU officers supports our assessment in this case, and we would consider these actions to be “state-integrated” on Healey’s spectrum.

The other major organized effort did not include traditional cyber attacks but was instead designed to sow division using social media. Based on our own investigations, we assessed with high confidence that this group was part of the IRA. There has been a public debate about the relationship between the IRA and the Russian government — though most seem to conclude this activity is between “state-encouraged” and “state-ordered” using Healey’s definitions.

Four Methods of Attribution

Academics have written about a variety of methods for attributing activity to cyber actors, but for our purposes we simplify them into an attribution model with four general categories. While all four are appropriate for government organizations, we do not believe companies should use some of them (a sketch of how these categories can be combined follows the list):

  • Political Motivations: In this model, inferred political motivations are measured against the known political goals of a nation-state. Providing public attribution based on political evidence is especially challenging for companies because we don’t have the information needed to make this kind of evaluation. For example, we lack the analytical capabilities, signals intelligence, and human sources available to the intelligence community. As a result, we don’t believe it is appropriate for Facebook to give public comment on the political motivations of nation-states.
  • Coordination: Sometimes we will observe signs of coordination between threat actors even when the evidence indicates that they are operating separate technical infrastructure. We have to be careful, though, because coincidences can happen. Collaboration that requires sharing of secrets, such as the possession of stolen data before it has been publicly disclosed, should be treated as much stronger evidence than open interactions in public forums.
  • Tools, Techniques and Procedures (TTPs): By looking at how a threat group performs their actions to achieve a goal — including reconnaissance, planning, exploitation, command and control, and exfiltration or distribution of information — it is often possible to infer a linkage between a specific incident and a known threat actor. We believe there is value in providing our assessment of how TTPs compare with previous events, but we don’t plan to rely solely upon TTPs to provide any direct attribution.
  • Technical Forensics: By studying the specific indicators of compromise (IOCs) left behind in an incident, it’s sometimes possible to trace activity back to a known or new organized actor. Sometimes these IOCs point to a specific group using shared software or infrastructure, or to a specific geographic location. In situations where we have high confidence in our technical forensics, we provide our best attribution publicly and report the specific information to the appropriate government authorities. This is especially true when these forensics are compatible with independently gathered information from one of our private or public partners.
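To make the relationship between these categories concrete, here is a hypothetical Python sketch of how the four kinds of evidence might be recorded and combined. The field and function names are illustrative, not a description of our internal tooling; the decision logic simply restates the policy above: political motivation is never used, coordination and TTP overlap strengthen an assessment but are not sufficient on their own, and only high-confidence technical forensics supports a public attribution.

    from dataclasses import dataclass
    from enum import Enum


    class Confidence(Enum):
        NONE = 0
        LOW = 1
        MODERATE = 2
        HIGH = 3


    @dataclass
    class AttributionSignals:
        """Hypothetical record of the four categories of attribution evidence."""
        political_motivation: Confidence       # never used for public attribution by a company
        coordination: Confidence               # e.g. possession of stolen data before public disclosure
        ttp_overlap: Confidence                # similarity to a known actor's tools, techniques and procedures
        technical_forensics: Confidence        # IOCs tying activity to shared software, infrastructure, geography
        corroborated_by_partner: bool = False  # independent confirmation from a private or public partner


    def public_attribution_warranted(signals: AttributionSignals) -> bool:
        """Illustrative restatement of the policy described above.

        Political motivation is ignored entirely; coordination and TTP overlap
        inform the assessment but do not suffice on their own; high-confidence
        technical forensics (ideally corroborated by a partner) is what supports
        a public attribution.
        """
        return signals.technical_forensics is Confidence.HIGH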

Applying the Framework to Our New Discovery

Here is how we apply this framework to the attribution of the accounts and Pages we removed today (an illustrative reading of this assessment, using the sketch above, follows the list):

  • As mentioned, we will not provide an assessment of the political motivations of the group behind this activity.
  • We have found evidence of connections between these accounts and previously identified IRA accounts. For example, in one instance a known IRA account was an administrator on a Facebook Page controlled by this group. These are important details, but on their own insufficient to support a firm determination, as we have also seen examples of authentic political groups interacting with IRA content in the past.
  • Some of the tools, techniques and procedures of this actor are consistent with those we saw from the IRA in 2016 and 2017. But we don’t believe this evidence is strong enough to provide public attribution to the IRA. The TTPs of the IRA have been widely discussed and disseminated, including by Facebook, and it’s possible that a separate actor could be copying their techniques.
  • Our technical forensics are insufficient to provide high confidence attribution at this time. We have proactively reported our technical findings to US law enforcement because they have much more information than we do, and may in time be in a position to provide public attribution.
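Continuing the hypothetical sketch above, the assessment for today’s removals might be recorded roughly as follows. The confidence values are an informal reading of the bullets above, not precise internal scores.

    # Assumes the Confidence, AttributionSignals and public_attribution_warranted
    # definitions from the earlier sketch.
    todays_removals = AttributionSignals(
        political_motivation=Confidence.NONE,  # deliberately not assessed
        coordination=Confidence.MODERATE,      # a known IRA account administered one of these Pages
        ttp_overlap=Confidence.MODERATE,       # consistent with IRA behavior in 2016-2017, but copyable
        technical_forensics=Confidence.LOW,    # insufficient for high-confidence attribution
        corroborated_by_partner=False,
    )

    assert public_attribution_warranted(todays_removals) is False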

Given all this, we are not going to attribute this activity to any one group right now. These actors have better operational security and do more to conceal their identities than the IRA did around the 2016 election, which is to be expected. We were able to tie previous abuse to the IRA partly because of several unique aspects of their behavior that allowed us to connect a large number of seemingly unrelated accounts. After we named the IRA, we expected the organization to evolve. The actors we see now might be the IRA with improved capabilities, or they might be a separate group. This is one of the fundamental limitations of attribution: offensive organizations improve their techniques once they have been uncovered, and it is wishful thinking to believe that we will always be able to identify persistent actors with high confidence.

The lack of firm attribution in this case or others does not suggest a lack of action. We have invested heavily in people and technology to detect inauthentic attempts to influence political discourse, and enforcing our policies doesn’t require us to confidently identify who is violating them or to establish their potential links to foreign actors. We recognize the importance of sharing our best assessment of attribution with the public, and despite the challenges we intend to continue our work to find and stop this behavior, and to publish our results responsibly.

Source: Facebook

About the author

No Web Agency Staff

No Web Agency is a site specializing in the publication and distribution of press releases and news, edited by Sébastien Mugnier.

Our goal is simple: to support companies in developing their image, whether by distributing their news or by relaying their marketing strategy (product launches, trade shows, etc.).
