Hacker OPSEC

STFU is the best policy.

The Paddy Factor

The Paddy Factor was a disparaging term used by the British security forces to refer to poor OPSEC practices by the Provisional IRA (PIRA) in the early 1970s. Much of this terrible counterintelligence posture was due to a limited number of easily avoidable activities that combined to compromise many Provos:

  • Self-incrimination

    • PIRA members would congregate in pubs and sing IRA songs.
    • They would boast about their IRA operations while drunk in pubs.
    • They would reply with a nod and a wink to friendly inquiries about their activities, making it easy for informants to identify them.
    • They would march in pro-IRA rallies.

    Problem: The adversary was able to easily identify (some) PIRA members. Once the adversary identifies members of an organisation, they will investigate and monitor them to uncover other members.

  • Contamination

    PIRA members would associate with each other when not on operations. In intel parlance this is called “pre-operational contact”, and it is to be avoided. The reason is that any surveillance on one member will reveal the other members of the group. This is a form of contamination.

In short, some (many?) members of the Provisional IRA made their affiliation publicly known by bragging about their operations in public places. This made them known to the adversary (the British security forces), who were then able to monitor those known PIRA members. Later, at political events such as rallies, these known PIRA members would hang out and chat with their unknown underground brethren. This made the underground members known to the adversary, with the obvious negative consequences.

Link Analysis and You

Knowing only a single node in a network, e.g. one member of an organisation, and monitoring which other nodes it contacts gives insight into the membership of the graph. The police, for example, use a variety of monitoring techniques to build up phone trees which map out organisational relationships.

This form of analysis, mapping associations between nodes in a network (e.g. membership in an organisation) is called link analysis. It can be used against communication end points (e.g. mobile phones, email addresses), which are then associated with individuals. For example, link analysis of mobile phone numbers and contact address books of drug dealers is used to determine hierarchical information about their distribution networks. Link analysis is a very powerful method of understanding relationships and being able to link “chatter” between nodes as activity related to an organisation.
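The mechanics are simple enough to sketch. A minimal Python illustration, using hypothetical call records (all names invented for this example), shows how a single known node exposes everyone linked to it:

```python
# Sketch: link analysis over hypothetical call records.
# Each record is (caller, callee); starting from one known member,
# a breadth-first walk over the contact graph maps out the network.
from collections import defaultdict, deque

calls = [  # invented monitoring data
    ("known_member", "alice"), ("known_member", "bob"),
    ("alice", "carol"), ("bob", "carol"), ("carol", "dave"),
    ("stranger", "unrelated"),
]

graph = defaultdict(set)
for a, b in calls:
    graph[a].add(b)
    graph[b].add(a)

def linked(start):
    """Return every node reachable from `start` -- the suspect set."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for peer in graph[node] - seen:
            seen.add(peer)
            queue.append(peer)
    return seen

print(sorted(linked("known_member")))
# the whole cell is exposed; "stranger" is never linked to it
```

One compromised node is enough to pull in the entire connected component; that is the whole power of the technique.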

How to unlink

One solution that makes link analysis harder and less useful is to create a unique node for each connection. Done successfully, this yields a link graph of only two nodes and one edge. In practice, this means that every connection between peers should be unique to that connection, i.e. create a new jabber identity for each associate you have. Do not share these jabber IDs between different friends. The rule is simple: 1 friend, 1 jabber ID.

These node-to-node links should be changed regularly as well. The old nodes must never contact the new nodes; that would contaminate them, creating a link that associates them together. Make a clean break each time.
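The effect of the rule can be sketched in Python (identities and friend names are invented for illustration): with one shared identity, the whole contact list forms a single linked component; with per-peer identities, each link stands alone as a 2-node graph.

```python
# Sketch: why "1 friend, 1 jabber ID" defeats link analysis.
# A shared identity means one compromised contact exposes the whole
# contact list; per-peer identities leave only isolated pairs.

# Shared identity: all peers talk to the same address (invented names).
shared = [("me@jabber", p) for p in ("friend1", "friend2", "friend3")]

# Unlinked: a fresh identity per peer.
unlinked = [("id-%d@jabber" % i, p)
            for i, p in enumerate(("friend1", "friend2", "friend3"))]

def components(edges):
    """Count connected components with a simple union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(n) for n in parent})

print(components(shared))    # one component: every peer linked via me@jabber
print(components(unlinked))  # three components: each edge is its own pair
```

The adversary who compromises one edge of the unlinked graph learns exactly one relationship and nothing else.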

Conclusion

It is possible to defeat link analysis, but it takes discipline and is hard to do successfully. Every single communications end point must be unique and dedicated to only one other end point. These end points must never contaminate each other by interacting or mentioning other end points. This will inhibit creating a phone tree, or link analysis chart of organisation membership.

Warning: unlinking will not prevent traffic flow analysis, fingerprinting, or many other techniques from linking comms end points. But it is square one.

Anonymity Is Hard

Anonymity in the real world is very hard

In late 2011 Hezbollah rolled up a CIA spy ring in Lebanon. This provides an interesting lesson in CIA tradecraft and real world counterintelligence. Close examination of the techniques used to track down the agents will reveal some serious problems with many systems designed to provide security for anti-government groups.

This post is partially in a response to Matt Green’s post about encryption apps. The secrecy provided by encryption applications, primarily privacy of communication content, is not sufficient to protect against even minimal monitoring. Any anti-government activity in a modern environment, e.g. one involving mobile phones and the internet, needs to include anonymity first and foremost.

The Sources

This article provides some of the details about the tradecraft of the spy network, and how it failed. The focus is on how the agents were contacted by their handlers and how this was used to uncover the whole network. Another site provides a large collection of related articles which fills in some additional details.

The Tradecraft (Probably, maybe)

NOTE: This information is based on newspaper articles, so it is of limited accuracy. However, it seems like reasonable tradecraft practices that even amateurs would devise and is thus presented for analysis here.

  • Dedicated mobile phone The agents had a mobile phone used specifically for communication with their handler. This phone was kept in a static location waiting for contact, and possibly spent a lot of time switched off.

  • Pre-arranged meeting place The agents had a meeting location (allegedly at a Pizza Hut) where they met their handlers. This location was (allegedly) reused for multiple agents and multiple meetings.

  • Signalling code word Contact by the handler to the agent was via a code word (allegedly: “PIZZA!”), which was meaningless by itself but also contextually anomalous.

Well, that's one way to do it

The adversary, Hezbollah, used access to the telephone company logs (they have those), and searched for atypical mobile phone usage patterns:

  • phones that only receive a few calls / messages over long periods of time
  • mobile phones that are never mobile
  • weird / unusual messages (PIZZA!!)

That is, they were looking for phones that were kept at home, turned on occasionally, and only received calls/sms infrequently. The exact usage pattern one would expect for a mobile that is used exclusively for a handler to contact an agent.
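A minimal sketch of this kind of filter, assuming an invented record format (the actual telco data and thresholds are of course unknown):

```python
# Sketch: the sort of query Hezbollah could run over telco logs.
# Record format is hypothetical: phone id, cell towers seen,
# message contents, total call count over the monitored period.
records = [
    {"phone": "A", "towers": {"T1"}, "msgs": ["PIZZA!", "PIZZA!"], "calls": 3},
    {"phone": "B", "towers": {"T1", "T4", "T9"}, "msgs": ["on my way", "ok"], "calls": 212},
]

def suspicious(rec):
    rarely_used = rec["calls"] < 10             # few calls over a long period
    never_moves = len(rec["towers"]) == 1       # a "mobile" that never moves
    odd_content = any(m.isupper() for m in rec["msgs"])  # anomalous messages
    return rarely_used and never_moves and odd_content

flagged = [r["phone"] for r in records if suspicious(r)]
print(flagged)  # only the handler-contact phone matches the pattern
```

Phone "A" is the dedicated handler-contact phone; "B" is a normal subscriber. The filter needs no access to message content beyond what the carrier already logs.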

This data gave Hezbollah a general location (down to the apartment complex) of where the agents were located. Next, the adversary correlated the location data with the home addresses of members who had access to secret information. They conducted surveillance on those members and discovered they were using a Pizza Hut to meet with their handlers.

Speculation: Finally, the adversary was able to continually monitor the meeting location and detect other members of the spy network meeting with their handlers. The CIA is known for overusing the same locations for meetings [1].

[1] The C.I. Desk

Computer says No

The problems with these tradecraft practices are pretty obvious from a counterintelligence analysis. They are anomalous (atypical mobile phone use) and they are rigidly predictable (reusing the same meeting location).

For encryption applications used by anti-government forces this provides a clear blueprint for action: look normal. Even basic monitoring of traffic will reveal anomalous activity, which can be used to identify who needs to be watched more closely.

Conclusion

Hiding anomalous activity is hard, but vitally important. The problem with many security systems based purely on secrecy is that their usage is itself anomalous. It singles out and attracts attention to the users. If the adversary doesn’t know who those users are initially, they can cross correlate real world data with the suspicious activity and narrow their focus to real people. Those people can, and will, end up dead.

Reservoir Dogs: Lessons in OPSEC

Introduction

The cult movie classic Reservoir Dogs distills and imparts a number of important operational security (OPSEC) lessons. Although a work of fiction, the counterintelligence measures enacted by the gang were real standard operating procedure (SOP) for terrorist groups such as Fatah and the Black September Organisation (BSO). These OPSEC methods provide effective protection against informants participating in the operation. The weakness of this SOP is informants at a higher level who have oversight of the operation.

The Reservoir Dogs OPSEC SOP

Procedure 1: Assigned Operational Aliases

  • Operational aliases for the duration of the op assigned by the organisation

    Using random aliases unique to the operation reduces the information available to informants who are involved in the op.

Procedure 2: Rapidly assembled cherry picked team

  • Just In Time team formation

    Creating the team just when it is needed reduces the time available for informants to find out about an operation and report it back to their handlers.

Procedure 3: Dedicated operational support teams

  • Dedicated Independent Operational Teams

    Dedicated teams conducting operational support roles ensures that each team, and its members, knows only their own small portion of the plan. For example, the pre-operational intelligence and surveillance are conducted by dedicated teams, separate from the team that conducts the operation.

Strengths:

This SOP provides a number of important protections against monitoring and infiltration by security forces.

Secret Agents

The agents are kept undercover until they are required to fulfill mission objectives. This both protects them against discovery by security forces and limits the quantity and quality of information available to any informants. For maximum effectiveness, the team is formed immediately prior to pre-operational training and then kept isolated until after the operation is complete.

Mr Pink

Using assigned aliases limits the information that an informant can gather during an operation. Because the aliases are assigned rather than chosen, it is not possible for an agent to develop a preference for a particular alias and thus create an identity.

Weaknesses:

The Reservoir Dogs OPSEC SOP has a number of inherent weaknesses which can limit its effectiveness, expose large numbers of agents to capture, and even directly lead to mission failure.

Inefficient Teams

Ad hoc, hastily assembled teams are less efficient, and possibly less effective, than long-standing teams. The team lifecycle of Forming-Storming-Norming-Performing is compacted into a reduced timeframe, which inhibits achieving the higher levels of efficiency.

High Value Targets

The talent pool is exposed to high-level members. Knowledge of the group’s membership is heavily concentrated in a few individuals, rather than dispersed amongst the rank-and-file. These select individuals become high value targets, in a position to cause significant damage to the group if compromised.

Single Point of Failure

The operational team captain is the only member of the team who knows the complete operation plan. The individual team members are unable to carry on the mission should the captain be eliminated.

Conclusion:

The Reservoir Dogs OPSEC SOP is an effective collection of techniques to protect a large group of agents against internal informants. The threat of a compromised internal member is very likely the single greatest threat facing an underground organisation. This is demonstrated by the extreme lengths the PIRA went to in hunting down informants, the dismantling of LulzSec via a highly placed penetration, the extreme violence visited upon criminal informants (“snitches” and “rats”), etc. The Reservoir Dogs SOP provides a methodology to mitigate all but the highest level penetrations.