Hacker OPSEC

STFU is the best policy.

Jihadist Fan Club CryptoCrap

Think of Mujahideen Secrets as a branded promotional tool, sort of like if Manchester United released a branded fan chat app.

Although there has been a lot of FUD written about the encrypted messaging systems developed and promoted by jihadi groups, very little has focused on how they are actually used. I wrote some notes about this earlier, but wanted to expand on the subject in more depth.

Web Warriors: Security Practices on Jihadi Web Forums

There are a number of internet web forums used by supporters of the various jihadi groups fighting in the Middle East. These sites are primarily cheerleading and “in grouping” social networks, rather than operational message boards.

An important point to understand about these online forums is that they are about group dynamics. They provide a mechanism for people to feel like they are part of the struggle, with a graded scale of commitment. They don’t actually need to worry about getting their hands dirty or risking their lives (technically, they might be risking their lives and freedom).

The sites all attempt to educate their users on security best practices: for example, the Islamic State (née ISIS) web forum heavily promotes the use of TAILS, and AQAP advocates for Tor usage in a nine page guide. Despite this, few users actually bother with security precautions. Indeed, many continue to use Facebook and Skype as their primary communications channels with fellow online jihadists.

The encryption tools are branded software for self identifying jihadis who want to feel like they belong. Indeed, other than the media outlets that emphasise the use of the tools (branding and messaging), actual web jihadis have a hard time with them, complaining of usability problems that prevent them from using the tools at all.

The media outlets for the different groups (IS, Nusra, AQ) all make sure that their followers know about their own branded encrypted messenger. Indeed, this is the primary clue to how these apps are actually used. They are branding tools that promote in-group sentiment: “I’m using the AQ encrypted messenger, so I am basically AQ.” These tools identify the user as a jihadi associate, not by accident or due to bad security practice, but as a deliberate part of their value proposition. “Use our encrypted messaging app and you will securely let the world know that you are with us!”

Mujahideen Secrets

All of the major apps are simply branded wrappers around industry standard libraries, ciphers, and protocols. There is nothing particularly Islamic or jihadist about them except the branding. That is because the branding is actually the point. These are just social signals. Using AQAP’s messaging tool is the rough equivalent of wearing a sports jersey: it signals group identity to others. (Of course, given the outlaw nature of these groups, it seems like an extremely poor life decision.)

These apps are not designed for actual clandestine operational use. They are for making a social statement: signaling membership in a peer group. Even for this simple purpose, uptake remains remarkably low amongst the online jihadist set, who primarily rely on Facebook and Skype for comms.

So if almost no one is using the encryption apps, and those that do are using them to signal membership in a broader organisation, what are the real jihadis using operationally? Facebook.

Jihadi Operational Covert Communications:

There was a Facebook account, “sniper outside the law”, that was posting clear text, but coded, messages believed to be related to jihadi operations in Tunisia. The account has since been taken down and the man running it was arrested.

Here are some examples of what he was posting (taken from here):

Eagle 1 group please change route to k :?via trees !.ch

Refiling will be through the loaded mule same place of refiling thank you

(Yesterday’s posts, before today’s attack)

To all "units" please change direction towards .?k1 after 500m (meters?).
Info came from scout about invaluable avant-post

Expecting news in the coming days we promise heavy news(important),
For those fighting Islam? wake up before it is too late you traitors
and snitches you will regret your tyranny

Jihadi Encryption Is Overrated

The key takeaway is that the encrypted messaging apps from ISIS or AQAP are as operationally relevant as an encrypted messaging app from Man U or Liverpool. It might be exciting for some hardcore fans who want to show their support, but the real players don’t touch the stuff.

Real jihadis use secure codes and couriers, not some “My First Crypto Chat” Android toy.

Must Read: An article by Kryt3ia (published minutes before me, the swine!)

When in Doubt, It’s a Tout


Robust Operational Security Practices Aren’t Enough

A British man, Lauri Love, has been indicted for hacking. The indictment is thin on details, but does have some interesting OPSEC insights that can be teased out by the patient reader.

The indictment of Lauri Love doesn’t reveal much about how he was identified. There is, however, some interesting info about the operational security measures taken by his crew, and they appear robust. The lack of information on how Mr Love was caught, combined with the revelation of good security practices, suggests one thing: an informant.

This post will only highlight the good operational security practices of the hacker group, since we don’t know what the mistakes were.

Indictment Critical Analysis

The indictment lists four members of the crew:

  1. Lauri Love, “nsh”, “peace”, “route”
  2. CC-1 “in New South Wales, Australia”
  3. CC-2 “in Australia”
  4. CC-3 “in Sweden”

If I were to venture a guess, I’d reckon that CC-1 was caught first and became the informant used to take down the crew. I think this because CC-1 has the most specific geographic information, while the others are more vague in their location. As if a lot of effort was invested in locating CC-1, and then the investigation focused in on Mr Love.

Timeline

  • October, 2012: Start of the conspiracy
  • October 2, 2012: Army Network Enterprise Technology Command (“NETCOM”) hack
  • October 6, 2012: log of nsh on IRC discussing NETCOM hack with CC-1, later w/ CC-2
  • October 7-8, 2012: Army Contracting Command’s Army Materiel Command (“ACC”) SQLI hack
  • October 10, 2012: LOVE discusses ACC hack on IRC

  • October, 2013: End of the conspiracy

Hacking 101

The crew used scanners to locate vulnerable servers to exploit, and they shared the findings via their IRC.

peace: so can pivot and scan for other vulns [vulnerabilities] 
peace: we might be able to get at real confidential shit

The crew used SQLI and ColdFusion exploits.

The crew used proxies and Tor to mask the origins of their attacks.

conceal their attacks by disguising, through the use of Proxy Servers, the IP addresses from which their attacks originated. Defendant LOVE and the other Co-Conspirators further used the Tor network, which was an anonymizing proxy service, to hide their activities.
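As an illustrative aside (my sketch, not something from the indictment): “using Tor as an anonymizing proxy” mostly means pointing your tooling at the local SOCKS port exposed by the Tor client. A minimal Python sketch, assuming a Tor client running on the default port 9050 and the requests package installed with SOCKS support (pip install requests[socks]):

import requests

# Route both HTTP and HTTPS through the local Tor SOCKS proxy.
# "socks5h" (vs "socks5") resolves DNS inside Tor as well, so name
# lookups don't leak to the local resolver.
proxies = {
    "http":  "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# The destination server sees a Tor exit node's address, not yours.
resp = requests.get("https://check.torproject.org/", proxies=proxies)
print("Congratulations" in resp.text)  # True if the request went via Tor

Note, though, that the local network still sees a connection to Tor, a detail that proves fatal in the Harvard case later in this collection.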

Operational Security Measures

Migration

The crew moved comms to new systems and changed their identities when they did so. This is a very good practice. Unfortunately, it appears that at least one member was logging the comms traffic, which created a security problem that the authorities could exploit.

route: consideration 1 : behaviour profile should not change 
route: public side i mean 
route: so whatever "normal", activities we do 
route: should continue 
route: but we move from this irc to better system 
route: also 
route: these nicks should change 
route: i think 
route: when we get on new communications 
route: all new names

OPSEC Violation: No logs, no crime. Do not keep any unnecessary logs. If there is operationally critical information, make a record of that information only: cut and paste it into a file, and keep that file encrypted.
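To make that rule concrete, here is a minimal sketch of encrypting a notes file at rest, assuming Python’s third-party cryptography package (my choice of illustration; full-disk encryption or GPG are equally valid ways to follow the rule):

import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Derive a 32-byte key from the passphrase; Fernet wants it base64 encoded.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=1_200_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)
key = derive_key(b"a long random passphrase", salt)

token = Fernet(key).encrypt(open("notes.txt", "rb").read())
with open("notes.txt.enc", "wb") as f:
    f.write(salt + token)   # store the salt alongside the ciphertext
os.remove("notes.txt")      # the plaintext must not linger on disk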

OPSEC Lesson: Migrating communications infrastructure and changing identities regularly is a good idea. It creates chronologically compartmented silos of info that limit the impact of any single compromise, and it can provide plausible deniability. Do not contaminate between the compartments. And, of course, ensure that each commo channel is secure.

Logistical Compartmentation

For at least some operations (all?) the crew spun up a new, dedicated support server. This compartmented server was then discarded after use to minimize the connection to the group and to any other operations. This is very effective OPSEC.

CC#2: but server must have no link to you or us
peace: :)
CC#2: when done we kill it
CC#2: for this plan
CC#2: we can reopen another one for other ongoing stuff
CC#2: but once this plan done we need to make sure they cannot all trace it back to us

OPSEC Lesson: Compartment as much as possible for each operation to avoid linking separate ops together. This also helps contain the damage if an operation is compromised and an investigation launched. Dedicated logistical infrastructure is best. Don’t forget to sanitize it, both at the beginning and the end of the op.

Conclusion

Even a group with robust operational security practices is vulnerable to the oldest trick in the book: the informant. The takeaway lessons are slightly more interesting:

  • Migrate comms and identity on a regular basis
  • Never store incriminating logs
  • Compartment heavily, and sanitize frequently

So it is sad news for Mr Lauri Love, who is facing hacking charges, but at least there are some valuable OPSEC lessons for the rest of us. Remember: no logs, no crime.

Episode 17

[This email was in response to a thread which started as a distress call over the unusually poor quality of CFP proposals. It is the start of some thoughts on how to “fix” the InfoSec conference problem.]

X-Mailer: iPhone Mail (9A405)
From: the grugq <thegrugq gmail com>
Subject: Re: [redacted: name + title of the guilty talk]
Date: Thu, 5 Jan 2012 11:05:12 +0700
To: [conference committee list]

>> I have a different take on it [redacted-name]. I feel there is a lot of new
>> security research and work being done out there but it is being hidden
>> by the flood of introductory/survey/low-value talks. With a record
>> 1,791 infosec talks at cons in 2010 (source: http://cc.thinkst.com/statistics/)
>> as an industry we've fucked ourselves and have elevated the role of a
>> speaking spot at a conference to something mythical and special when in
>> reality it has been watered down to the level that we've seen thus far
>> with the submissions to [this conference]


I agree to a large extent with this analysis, but I think there is
another facet that hasn't been brought up yet, which I call the "Episode
17a Ensign #3" problem.

(I'll be incendiary first, so if you're impatient you can stop reading
now and start flaming.)

Essentially, (most) security cons are comic / Star Trek conventions, but with less cosplay and even fewer girls. The conference talk might be styled (somewhat) on the academic lecture, but realistically the audience would rather have a Steve Jobs style product unveiling than a lecture. They want some background info to ground themselves and align expectations, then they want the big product reveal at about 40 minutes in; and for a real treat, a "one more thing". (For the product unveiling, see the demo; and don't forget the tool release: "available right now, you can download this today,... and hack the shit out of something".)

This is entertainment; it is not knowledge transfer.

• most regional cons would be vastly improved as informal, peer-training focused events, like the LUGs and Python groups and so on: regular meetings to actively do something, with a few "event centric" talks thrown in as part of the evening's entertainment, but also to guide the discussions and activities along. That's how you get people learning shit: have them actually do it. Novel concept, eh? ;)

• the big cons get big names because they have a symbiotic relationship. And it doesn't require any backhanded arrangements; as a researcher with a new topic to present, you're faced with two choices: blow your wad at NoNameRegional Con, or save it for MassiveMediaExposure Con in 4 months. Guess which one will work more towards getting you laid?

This is why the big cons get the hit singles and the small cons get
supporting acts and "best of greatest hits" talks. It's part of why I
think conferences aren't helping the community very much.

• other problems include the high value that original research frequently has, far in excess of the price of a ticket and hotel... This makes independent researchers inclined to maximize value on the market directly, rather than indirectly through conference driven reputation building. Employees are in a similar situation, except their employers want to minimize liability and maximize ROI on their big name researcher. So they aren't keen to release anything super awesome, for free, at a con (i.e. someone else's branded event).

So that leaves a reduced set of potential speakers, combined with an incentive to present something sufficiently interesting to provide entertainment, but not so useful that it decreases in value. Note: I say these are incentivized behaviors, not what everyone (or anyone) does or wants to do.

• as a conference that isn't swamped with submissions, you have to be proactive. For SyScan Taiwan 2011, we made a hit list of topics we wanted, and another list of people who were either subject matter experts on a target topic, or whom we wanted to meet up with. We then spent about 6 weeks chasing every single speaker down personally and inviting them to speak. In the end, if you look at our line up, I think it is fair to say this is an effective strategy for getting an all-star line up.

Obviously this isn't effective at finding new talent, because you can't chase down someone you don't know exists.

That's why we, as a community, need breeder events that help make the existing conferences stronger by finding the new talent, encouraging them to develop their technical skills and their presentation skills (they've got to learn to entertain an audience for an hour). Presenting a bit of research at the local security meetup is a good start to a career of talking about typing on a keyboard...

Oh right, back to how we're all just at a cosplay-free comic con.

So the one hour talk format isn't good for knowledge transfer; it rewards entertainers more than pure researchers. This leads to a few super rockstars who deliver(ed) the goods, and know how to do a product unveil 42 minutes into their slot. This ends with a few Shatneresque rockstars and loads of "ensign #3 from episode 17a, the one where Shatner massaged the heap for an hour and then dropped shells all over everything, it was the first time he did a multiple root in public. So cool!!!"

The 1 hour presentation format is completely shit for knowledge transfer. I hold by the barcon-inspired theory that your new research is either simple enough that you can explain it over a beer (i.e. 5 minutes of content), or something so complex that I want the white paper version to work through at my own pace. There is genuine frustration at the (frequently) horrible Product Unveil style talks which take an hour to reveal 5 minutes of content.

On the other side is the frustration at talks which are made up of
potentially interesting info, but the slide deck is all lolcats, the
code is never released, and the presenter never writes up the white paper.

New York’s Finest OPSEC

NYPD Social Media Investigation OPSEC

The NYPD created an operational procedure for conducting undercover investigations on social media. The procedural document reveals the operational security used in these investigations. The security is founded on the use of an “online alias” (the officer’s undercover account) and strict compartmentation. Given the capabilities of the adversaries that the NYPD faces, this is probably sufficient security.

It is a fascinating glimpse into the operational process of an investigation. Definitely worth reading to get a sense of what the police face when conducting an online investigation (hint: paperwork).

Core NYPD OPSEC

Fundamentally, this is basic operational security grounded in compartmentation. The use of dedicated hardware and pseudonymous internet access allows the officer to create and operate an online undercover account without any links to the NYPD. The basic security precautions are designed to protect the officer’s laptop from being compromised; a compromised laptop could enable the adversary to conduct a counterintelligence investigation.

  • Compartmentation:
    • Use dedicated hardware and a pseudonymous internet connection (laptop + “aircard”)
    • Avoid accounts, usernames, passwords associated with NYPD
    • Avoid personal accounts and internet access
  • Basic Computer Security:
    • Delete “spam”
    • Don’t open attachments
    • Exercise caution when clicking on links

This is very basic stuff, but should be more than sufficient against the adversaries that the NYPD pursues. These adversaries should not have access to any of the records of the phone company supplying the internet access.

Primary Document

Here is the information that is required to create the undercover account:

  1. Username (online alias)
  2. Identifiers and pedigree to be utilized for the online alias, such as email address, username and date of birth.
  3. Do not include password(s) for online alias and ensure password(s) are secured at all times.
  4. Indicate whether there is a need to requisition a Department laptop with aircard.
  5. Review photograph to be used in conjunction with online alias, if applicable.
  6. Consider the purpose for which the photograph is being used and the source of the photograph.

Here is the full section dealing with operational security:

Operational Considerations

When a member of the service accesses any social media site using a Department network connection, there is a risk that the Department can be identified as the user of the social media. Given this possibility of identification during an investigation, members of the service should be aware that Department issued laptops with aircards have been configured to avoid detection and are available from the Management Information Systems Division (MISD). A confidential Internet connection (e.g., Department laptop with aircard) will aid in maintaining confidentiality during an investigation. Members who require a laptop with aircard to complete the investigation shall contact MISD Help Desk, upon APPROVAL of investigation, and provide required information.

In addition to using a Department laptop with aircard, members of the service are urged to take the following precautionary measures:

  1. Avoid the use of a username or password that can be traced back to the member of the service or the Department;
  2. Exercise caution when clicking on links in tweets, posts, and online advertisements;
  3. Delete “spam” email without opening the email; and
  4. Never open attachments to email unless the sender is known to the member of the service.

Furthermore, recognizing the ease with which information can be gathered from minimal effort from an Internet search, the Department advises members against the use of personal, family, or other non-Department Internet accounts or ISP access for Department business. Such access creates the possibility that the member’s identity may be exposed to others through simple search and counter-surveillance techniques.

Conclusions

Undercover operations online rely on very basic operational security: primarily compartmentation, plus reviews to ensure that the account isn’t going to be associated with the NYPD.

A Fistful of Surveillance

The publication of this piece at The Intercept about NSA targeting via mobile phones prompted me to release this collection of notes. Some quotes and statements in the article wrongly promote the idea that the SIM card is the only unique identifier in a mobile phone. I’ve enumerated the identifiers that exist, and they go far beyond the SIM card. At a minimum, the physical identifiers of a mobile phone are the IMSI and the IMEI: the SIM card and the mobile phone hardware itself.

This is a short collection of notes I’ve put together on how you can be identified via your mobile phone. If you want to securely use a mobile phone, you’ll need to use a burner. This is non-trivial. Here’s a good guide.

Clandestine Mobile Phone Use

Mobile phones should primarily be used for signalling, rather than for actually communicating operational information. Remember the golden rule of telephone conversations:

  • keep it short
  • keep it simple
  • stick to your cover

Identifiers

  • Location
    • Specific location (home, place of work, etc.)
    • Mobility pattern (from home, via commuter route, to work) – highly identifying; four location points will identify 90% of users
    • Paired mobility pattern with a known device (known as “mirroring”: two or more devices travelling together; see the sketch after this list)
  • Network
    • numbers dialed (who you call)
    • calls received (who calls you)
    • calling pattern (numbers dialed, for how long, how frequently)
  • Physical
    • IMEI (mobile phone device ID)
    • IMSI (mobile phone telco subscriber ID)
  • Content
    • Identifiers, e.g. names, locations
    • Voice fingerprinting
    • Keywords
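As promised above, here is a sketch of how “mirroring” falls out of location data (illustrative only; the devices and observations are invented). Two devices repeatedly seen on the same tower in the same time window are very probably travelling together:

from collections import defaultdict

# (device_id, tower_id, hour) observations, e.g. from carrier records.
observations = [
    ("phone_A", "tower_1", 9), ("phone_B", "tower_1", 9),
    ("phone_A", "tower_7", 10), ("phone_B", "tower_7", 10),
    ("phone_C", "tower_3", 9),
]

seen = defaultdict(set)  # (tower, hour) -> devices present there
for device, tower, hour in observations:
    seen[(tower, hour)].add(device)

pairs = defaultdict(int)  # co-occurrence count per device pair
for devices in seen.values():
    for a in devices:
        for b in devices:
            if a < b:
                pairs[(a, b)] += 1

for (a, b), n in pairs.items():
    if n >= 2:  # repeated co-travel, not a one-off chance encounter
        print(a, b, "move together:", n, "shared sightings")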

Mitigations

Turn it OFF, for real.

Know how to turn the phone to a completely off state. This means removing the battery, taking out the SIM card, and placing them in a shielded bag (if possible). This fully off state is how you store and transport the phone when not in use.

A note on storage: it should not be at your house or anywhere that is directly linked to you.

Take a hike, buster

Where you use the phone is itself very important. Never use it at locations which are associated with you; that means never at home, never at the office/work, never at a friend’s house. Never have the phone in an ON state at locations that are associated with you, or with your immediate social network. Never.

Do not turn the phone on in the same location as a phone associated with you. Make sure that your real phone is somewhere else, and not in an OFF state if possible. You don’t want the disappearance of one phone from the network to coincide with the appearance of another. Paired events are indicators of relation, and you want to avoid those as much as possible. You also want your regular phone to appear with a typical usage pattern, which means keeping it on as you normally would.

Contamination, avoid it

Never use different phones from the same location.

Never carry phones for different compartments together (keep them turned off, batteries out).

Never carry phones turned on over the same routes you normally take. Avoid patterns and predictability.

Codes, What Are They Good For?

What is a Secure Communication?

The goals of secure communications are the following (some of them are surprisingly difficult to achieve):

  1. Make the content of a message unreadable to parties other than the intended one(s)
  2. Make the meaning of a message inaccessible to parties other than the intended one(s)
  3. Avoid traffic analysis — don’t let other parties know that a connection exists between the communicating parties
  4. Avoid knowledge of the communication — don’t let other parties know the communication channel or pathway exists

The first and second objectives can be accomplished using some combination of cryptography and coding. Unfortunately, this is the easy part. The more complicated and difficult component of a secure communications infrastructure is achieving the third and fourth objectives. For now, however, I will focus only on the first two issues: protecting content and meaning.

First, let’s define our terms so we can discuss the subject with clarity:

  • Cryptography: systems that use transformation processes to turn signal into noise by obscuring the symbols used for communication
  • Coding: systems that substitute or alter meaning, and thus hide the real message
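A toy contrast between the two, with rot13 standing in for real cryptography (illustration only):

import codecs

message = "attack at dawn"

# Cipher: transforms the symbols themselves; the output is obvious noise.
print(codecs.encode(message, "rot13"))   # "nggnpx ng qnja"

# Code: substitutes the meaning; the output reads as innocent plaintext.
codebook = {"attack at dawn": "Jeanne sends her greetings"}
print(codebook[message])                 # passes as normal speech

The ciphertext announces that a secret exists; the coded message does not. That difference drives everything that follows.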

The Eagle Has Landed

Codes are extremely useful mechanisms for sending small messages, although as they are plain text their hidden meaning can be revealed once the key is cracked. Another issue with codes is that they are inflexible compared to a cipher system. Coding requires pre-arranged mappings of meanings (what symbols or words translate to what), or at least pre-arranged mechanisms to derive the mappings (e.g., book codes).

To be effective, a code must maintain proper grammar, be consistent, and fit a plausible pretext. If it fits these requirements, and is used appropriately (briefly, consistently, with cover for action), then a code system is an excellent choice for simple signalling purposes.

Doing It Right

During World War II the BBC cooperated with the intelligence services to send open code signals to operatives in the occupied territories. These signals were prearranged with the operatives, and then sent out at two scheduled times. This signalling channel was used exclusively for indicating whether an operation was going to take place.

The BBC would broadcast the signal for the first time at 1930, and then confirm the signal at 2115. If the operation had been canceled before the second scheduled signal window, the code phrase would not be repeated.

During the early phase of the war, the code system was slightly more complex: there would be a positive code and a negative code. For example, “Jeanne sends her greetings” might be the “go code”, and “Jeanne says hello” might be the “abort code”. Later this was simplified to just the positive code (a tradition that, apparently, the CIA still follows).
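The confirmation logic itself is trivial; all of the security lives in the pre-arrangement and the one-way broadcast channel. A toy model (mine, not a historical artifact):

# Signals heard in each scheduled BBC broadcast window.
first_broadcast = {"Jeanne sends her greetings"}    # 1930 transmission
second_broadcast = {"Jeanne sends her greetings"}   # 2115 confirmation

go_code = "Jeanne sends her greetings"

# The operation proceeds only if the go code is heard at BOTH times;
# silence in the second window means the operation was cancelled.
is_go = go_code in first_broadcast and go_code in second_broadcast
print("GO" if is_go else "ABORT")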

Doing It Wrong

There are problems when codes are used inconsistently. For example, some mafia codes used oblique references to the boss as “aunt”, or “Aunt Julia”. This was very ineffective when a mafioso suffered pronoun slippage and called his “aunt” “he”.

  • “Ah, Aunt Julia said he wanted to help me out, too.”

Codes Gone Wild

I’ve collected some examples of real al Qaida codes that were actively used prior to the 9/11 attacks. Another type of basic open code is the “business code”, also used by some criminal groups, where the actors are referred to as business interests or rivals, and criminal activities are described as “projects” or other innocuous business terms.

A simple code that was used by two KGB operatives was the phrase “I think we should go fishing now”, which indicated that they should discuss business.

KGB Says What?

During the early stages of the KGB’s handling of their FBI penetration Hanssen, they had a mishap locating and loading the dead drop for his payment. To correct this error, they had to contact Hanssen by phone and use a code that was not pre-arranged (there was no contingency in place for “what happens if we can’t find the dead drop”). The dead drop location was underneath a footbridge, and the KGB operative had placed his load underneath the wrong corner.

Since they had used the pretext of purchasing a used car for their initial contact, the KGB continued to use that pretext for their “oops!” communique. The KGB operative prepared his telephone conversation thoroughly beforehand so that it would sound natural and plausible:

KGB: The car is still available for you as we have agreed last time, I prepared all the papers and left them on the same table. You didn’t find them because I put them in another corner of the table.

Hanssen: I see

KGB: You shouldn’t worry, everything is okay. The papers are with me now.

Hanssen: Good

KGB: I believe under these circumstances, it’s not necessary to make any changes concerning the place and time. Our company is reliable, and we are ready to give you a substantial discount which will be enclosed in the papers. Now, about the date of our meeting. I suggest that our meeting will take place without delay on February 13, one, three, 1:00 PM. Okay? February 13.

Hanssen: …. Okay.

The conversation is clearly stilted and strange, but not so strange as to draw attention to itself. It also doesn’t reveal anything of the meaning that is being relayed.

Signaling Codes

When creating a signaling code, it is important that the pretext for the signal be broad and widely applicable. Generally it is better that the code be a specific subject, rather than a specific phrase. Phrases are easy to mix up, forget, or otherwise confuse. They are also more rigid and harder to work into a conversation. A subject, on the other hand, is very easy to raise and discuss in a plausible fashion without seeming forced or unnatural.

A final short code example. This is a signaling code adapted from a novel; however, it accurately conveys how simple these codes can be. This is a phone call between two colleagues, where Alice has to signal that an emergency has occurred:

Alice: Hi, sorry to call so late

Bob: No problem

Alice: Is our meeting scheduled for tomorrow at 8:30, or at 9?

Bob: It is 8:30, bright and early.

Alice: Ok, right. Just checking. Thanks, bye

Open Codes Fail Open

When using a code to refer to a classified subject, even though unclassified terms are used, the subject is still classified, and discussing it is a breach of security. See the US Army COMSEC handbook’s section dealing with ATTEMPTS TO DISGUISE INFORMATION (Section 8.4).

“Talking around” is a technique in which you try to get the information across to the recipient in a manner you believe will protect it. However, no matter how much you try to change words about a classified or sensitive subject, it is still classified or sensitive.

“Self-made reference system”. This is an attempt to encipher your conversation by using your own system. This system rarely works because few people are clever enough to refer to an item of information without actually revealing names, subjects, or other pertinent information that would reveal the classified or sensitive meaning.

These are concerns to keep in mind when developing a code system for discussing sensitive information.

Final Thoughts

Codes: keep them generic, keep them consistent, limit their use to simple signalling.

In Search of OPSEC Magic Sauce

Of Bomb Threats and Tor

Recently (December 16th, 2013) there was a bomb threat at Harvard University, during finals week. The threat was a hoax, and the FBI got their man that very night. The affidavit is here.

This post will look at the tools and techniques the operative used to attempt to hide his actions, why he failed, and what he should’ve done to improve his OPSEC. As a hint: I provided an outline of what he should’ve done six months ago in “Ignorance is Strength”.

Disclaimer: This post is to outline why OPSEC is so difficult to get right, even for people who go to Harvard. I am not encouraging any illegal behavior; I am analyzing why OPSEC precautions are so difficult to get right. Don’t send bomb threats.

Key Takeaways

  1. The phases of an operation
  2. Counterintelligence (“know your enemy”) as a factor in operational design
  3. Avoid reducing the set of suspects
    • If all students are suspects, all one needs to do is avoid narrowing the pool of potential suspects

Strategic Objectives: Avoid Final Exam

Strategically, the principal behind this operation (Eldo Kim) was attempting to avoid taking a final exam scheduled for the morning of December 16th. To accomplish his objective, he designed an operation that would cause an evacuation of the building where he was to take his final. Rather than recruit an agent and delegate the execution of the operation, the principal decided to do it himself.

This was not an enlightened decision.

The Structure of All Things (for values of Things = “Operations”)

All offensive operations share a similar core structure. This structure has been known for a long time in the military, but is rarely applied in other fields. Operations have distinct phases that they move through as they progress from vague idea, to concrete plan, through execution and, finally, onto the escape.

The outline framework for an operation, all of the phases, is the following:

  1. Target Selection
  2. Planning (and Surveillance)
  3. Deployment
  4. Execution
  5. Escape and Evasion

This framework is frequently used when dissecting a terrorist attack post mortem, allowing the security forces to identify the agents involved in each phase. Ideally, the security forces want to remove the people involved in the Target Selection and Planning stages. These people tend to be the principals, and are more valuable than the agents who actually perpetrate the attack.

For hacker groups, the operational phases are rarely acknowledged, and are followed only in an ad hoc manner, primarily because few hackers are aware of them. It would be beneficial for hackers to understand the structure of preparing an operation thoroughly, but that is an issue we’ll address another day.

As an aside, it is worth noting that these operational phases apply to a consultancy making a sale, providing a service, dropping a deliverable, and then vanishing. ;)

College Kids are Inexperienced, News at 11.

All real criminals know that the most important part of an operation is the getaway, the git (as it used to be called). Of course, real criminals don’t go to Harvard University (although there’s an argument to be made that some graduate from there), and so poor Eldo Kim had no one to teach him the criticality of the final stage of an operation: Escape and Evasion.

Operation “Doomed to Failure”

The operative used an ad hoc approach to his operational design, and as a result he made a fatal error. Here is his operational plan:

  • Obtain Tor Browser Bundle
  • Select target email addresses “randomly” [see para 11]
  • Compose email
  • For each target email address
    • Create new GuerrillaMail “account”
    • Send email (using this)

For security, the operative chose to rely on a pseudonymous email tool and the Tor anonymity network. He used the Tor Browser Bundle on OS X rather than the TAILS distribution (see: para 11). Provided he closed all tabs and exited the application between sessions, there should be no forensic evidence left on the laptop.

NOTE: When using Tor Browser Bundle close all the tabs and exit the application when you are done. The TBB will clean up thoroughly after itself, but only on exit! When you are done, shut it down. Runa’s paper explores this in detail.

Phase 1: Target Selection

The strategic target was the hall hosting the final exam. Tactically, the principal selected “email addresses at random” to receive a bomb threat intended to force an evacuation of the hall, along with a number of other cover locations.

Phase 2: Planning

This step appears to have been focused solely on the technical requirements of masking the origin of the threatening emails. However, insufficient resources were devoted to this phase, and therefore it was fundamentally flawed.

Here is the email he sent:

shrapnel bombs placed in: 

science center 
sever hall 
emerson hall 
thayer hall 

2/4. guess correctly. 

be quick for they will go off soon

Clearly he intended to provide cover locations, and he attempted to prolong the bomb search by suggesting that some locations were legitimately bomb free. It is standard operating procedure for bomb threats to be investigated thoroughly and in parallel.

Phase 3: Deployment

The operative chose to use GuerrillaMail to send the emails, and because GuerrillaMail reveals the source IP of the sender, he also chose Tor to mask his IP address. However, he used a monitored network to access Tor, which severely limited the anonymity Tor provided. This error was to prove fatal.

Phase 4: Execution

Kim used the Harvard University wifi network. To gain access, he had to log in with his username and password. The university monitors and logs all network activity. This was the fatal error. He authenticated to the network, his IP was used to access Tor, and this information was logged.

When the incident was investigated the FBI was able to pull the logs and determine not just whether anyone had accessed Tor, but exactly who had accessed Tor.
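A sketch of what that log analysis amounts to (illustrative only: the names and addresses below are invented, and in practice the relay list comes from the public Tor consensus):

# Every Tor relay is publicly listed, so "who talked to Tor?" reduces to
# a set intersection between the network logs and the relay list.
tor_relays = {"171.25.193.9", "86.59.21.38"}        # hypothetical relay IPs

# Campus wifi logs: (authenticated username, destination IP) pairs.
wifi_log = [
    ("student_a", "17.253.144.10"),
    ("student_b", "171.25.193.9"),                  # hit on a Tor relay
]

suspects = {user for user, dst in wifi_log if dst in tor_relays}
print(suspects)   # the pool collapses from "any student" to this tiny set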

Phase 5: Escape and Evasion

There was nothing at all done for this phase. It is worth noting that there is little he could have done to prepare for an interview by seasoned professional FBI interrogators. As an amateur, he stood approximately zero chance of surviving.

Counterintelligence: Know your Adversary

A study of the investigation methods used by the law enforcement officials engaged to investigate bomb threats would have been beneficial for Mr Kim. He would have realized that they would target the likely suspects, attempt to narrow the suspect pool down to the minimum set, and then start interviewing. The stronger the evidence pointing to a set of suspects, the more aggressive the interviews will be: from “do you know anything about…” to “we have all the evidence we need, why don’t you make it easy for yourself?”

Initially the suspects for the case would have been any student scheduled to take an exam at one of the targeted halls. This is doubtless a large number, and without any specific information to go on, the chance of investigators interviewing all of them is slim. If, however, the FBI did interview all of them, the questioning would be general and undirected, rather than specific and probing. An amateur like Kim, who kept his cool and simply denied any knowledge of the hoax, would have had a reasonable chance of evading suspicion.

Knowing the investigative techniques of his adversary would have allowed Kim to design an operation that provided for a reliable escape and evasion phase. He would have used an unmonitored network, in an unmonitored location near the school, to send his threats. This would have left the suspect pool extremely large: “everyone”.

When planning an operation, know how the adversary will respond. This will allow you to factor that response into your planning. If you do not know how your adversary will respond, then their response will be a surprise. Do not allow the reactive force to surprise you.

There is no OPSEC magic sauce

The content and context of the threat make it clear that the originator of the emails was a student (or possibly a professor/TA trying to avoid grading exams). The important thing to hide is which student, not that it was a student. Therefore, simply using a nearby cafe with free wifi should have been sufficient to mask the specific identity of the operative. Assuming:

  • there are cafes that do not know the operative by sight,
  • there are cafes that are not monitored by CCTV (wear a hat, don’t look up),
  • that he wore a simple disguise to reduce the recall of the witnesses (look generic), and
  • that a college kid in a cafe at 8am during Finals week is not unusual

Using Tor from the college campus was a fatal error. The pool of suspects was immediately reduced to “everyone who used Tor during the time the bomb threats were sent”. Since Silk Road v1 had been shut down, that was obviously going to be a small number.

Let’s call it half a win

Strategically, the operation was successful: Eldo Kim will not have to take his final exam, or, indeed, any other final exams he might not be prepared for. However, it is hard to imagine this is the outcome he was hoping for.

Suggested Reading: Runa’s analysis of the Harvard Bomb Hoax

Yardbird’s Effective Usenet Tradecraft

Survival in an Extremely Adversarial Environment

If your secure communications platform isn’t being used by terrorists and pedophiles, you’re probably doing it wrong. – [REDACTED]

A few years ago a group of child pornographers was infiltrated by police, who were able to monitor, interact with, and aggressively investigate the members. Despite a 15 month undercover operation, only one in three of the pedophiles was successfully apprehended. The majority, including the now infamous leader Yardbird, escaped capture. The dismal success rate of the law enforcement officials was due entirely to the strict security rules followed by the group.

This post will examine those rules, the reasons for their success, and the problems the group faced which necessitated those rules.

(An examination of the group’s security from a slightly different perspective was conducted by Baal and is available here)

Covert Organizations, Seen One, Seen ‘em All

All covert organizations face a similar set of problems as they attempt to execute on their fundamental mission: to continue to exist. A covert organization in an adversarial environment faces a number of organizational challenges and constraints. Fundamentally, how it handles the trade-off between operational security and efficiency mandates how group members perform their operational activities. Strong OPSEC means low efficiency, while high efficiency necessitates weak OPSEC. The strength of the oppositional forces dictates the minimum security requirements of the covert organization.

Examining the operational activities – those actions the organization must engage in to self perpetuate – allows us to evaluate their operational security decisions within their environmental context.

Operational Activities:

The Yardbird child abuse content group (hereafter also called the enterprise) had a number of core goals that had to be addressed to continue operating: it needed to distribute child abuse content to members, communicate between members, raise funds to acquire new content, and recruit new members (presumably for access to additional child abuse content).

Explicitly stated, this is an enumerated list of the operational activities that the group had to engage in to self perpetuate.

  1. Distribution of Child Abuse Content
  2. Communication and Coordinate Action
  3. Fund raising
  4. Recruitment and Vetting

Except for the first item (strategically significant only to this group), these are pretty typical activities for a clandestine organization. Beyond its defining operational activity, a group needs a communications channel, a fund raising capability, and membership management processes.

Opposition Success: The Penetration

The law enforcement authorities caught a pedophile distributing child abuse content. He was a member of the Yardbird group and offered up complete access to the group, along with archival logs, in exchange for leniency.

All of the information about this group comes from the Castleman Affidavit, the Baal analysis, and some Baal follow-ups.

A Frustrating Infiltration

The law enforcement authorities were able to completely penetrate the enterprise for a 15 month period, from 2006-08-31 through 2007-12-15. During that time the group posted 400,000 images and 1,100 videos. The enterprise had approximately 45 active members, although independent observers have claimed this is low, with the real membership anywhere from 48 to 61.

The total number of arrests was 14, or somewhere around one third. A fully staffed, highly motivated, well trained adversarial force with complete penetration of a large, complacent group was only able to achieve a one in three success rate. The majority of those successes were achieved due to group members being insufficiently cautious and violating the enterprise’s security rules. Evidently, these security rules are extremely resilient against adversarial assault.

The members who were caught were those who violated the security SOP of the group:

  • Accessing a newsgroup server without using Tor (e.g. VPN, or directly)
  • Revealing personal details about themselves
  • Contacting each other outside the group’s secure comms channel

Operational Activity: Distribution

The enterprise was careful to ensure that the encrypted files containing child abuse images were posted to a different newsgroup from the communications newsgroup. One possible reason is to unlink the obvious encrypted group discussion from the larger encrypted content posts. That is, they compartmented their commo from their file sharing. As an additional, although superfluous, step, the enterprise would apparently alter the sequence numbers of the split binary uploads so that reassembly would be hampered. What this cumbersome step added beyond the existing PGP encryption is unclear (if your adversary can break PGP, they can probably figure out the order of some files).

Operational Activity: Communications

The enterprise would use the primary newsgroup (at the start of the investigation, alt.anonymous.messages) to announce the location of a media cache for group members. The communications newsgroup was always reserved strictly for communications. The announcements regarding new downloads provided detailed instructions as to the location of the child abuse content, plus how to download, assemble and decrypt it.

The group used a single shared PGP key for all members. On the one hand, this completely negates the security provided by PGP if the key falls into the wrong hands. It also limits the group’s ability to expel a member who transgresses the rules and needs to be punished. On the other hand, the use of a shared key makes key management significantly easier, which is a serious concern when you need to rekey every few months. Additionally, using only one key reduces the ability of the adversary to determine group size by examining the PGP packets. It also removes the potential for a group member to reuse a key that is linked to their real identity. See this excellent presentation for more details on those attacks.
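To illustrate the group-size leak (my sketch; it assumes a gpg binary on the PATH, and message.asc is a hypothetical intercepted message): a message encrypted to N individual keys carries one public-key encryption packet per recipient, and gpg will happily enumerate them.

import subprocess

out = subprocess.run(
    ["gpg", "--list-packets", "message.asc"],
    capture_output=True, text=True,
).stdout

# gpg prints one ":pubkey enc packet:" line per recipient key.
recipients = out.count(":pubkey enc packet:")
print("encrypted to approximately", recipients, "keys")

# With a single shared key, as the enterprise used, this count is always 1,
# no matter how many members can actually read the message.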

Operational Activity: Recruitment

The enterprise expanded by allowing new members to join. There were clear guidelines, procedures and rules for expansion. First, there was a background check to ensure that the prospective member was an established and active participant in the wider community of child abuse image traders. Then an existing member had to invite the prospect to the group. Finally, to demonstrate deep involvement in the activity and to prove they were not an undercover cop, the prospect had to pass a timed written test on the minutiae of various child abuse victims and media.

Vetting

  • Demonstrate active participation in the “trading scene”
  • Invited by existing member
  • Must exhibit deep domain specific knowledge via timed written test

Security Rules that Work

  • Never reveal true identity to another member of the group
  • Never communicate with another member of the group outside the usenet channel
  • Group membership remains strictly within the confines of the Internet
    • No member can positively identify another
  • Members do not reveal personally identifying information
  • Primary communications newsgroup is migrated regularly
    • If a member violates a security rule, e.g. fails to encrypt a message
    • Periodically to reduce chance of law enforcement discovery
  • On each newsgroup migration
    • Create new PGP key pair, unlinking from previous messages
    • Each member creates a new nickname
      • Nickname theme selected by Yardbird

Root of Success

The reason the majority of the group was able to avoid capture was in some small way the technology they were using (Tor), but primarily it was adherence to the security rules of the group. They had very good OPSEC and they followed it consistently. Fundamentally, they had complete compartmentation within the group: they did not reveal information to each other. The law enforcement authorities were able to get logs of all the communications traffic, plus logs of the IP addresses used for posting. Everyone who used Tor (as per Yardbird’s recommendation) was anonymous at the IP layer. This protected them from a subpoena revealing their identity. As long as they had revealed no additional information about themselves in their messages, they were secure against the opposition.

The use of PGP was essentially a no-op in this case. It excluded the general public from accessing the content of the communications traffic (and the child abuse videos and images), but it did not protect the traffic against analysis by the opposition (who had successfully infiltrated the group). The encryption was not a factor in the successful evasion. Rather, it was the content of the messages, controlled and dictated by the security rules, which protected their secrets.

Lessons Learned

Guarding secrets involves not sharing them. Encryption can only ever protect the content of a communique. Real security must start with the content itself, and then use encryption as an additional layer.

Note from the Editor

(Feel free to skip this part if you don’t think studying how child pornographers avoid capture is relevant)

When analyzing the activities of groups operating in an adversarial environment to learn what works, what doesn’t, and why, the pool of covert organisations is (unfortunately) somewhat limited: intelligence agencies, terrorist groups, hacker crews, narcos, insurgents, child pornographers… Few other groups face such a hostile operating environment that their security measures are really “tested”.

The group examined in this post had an incredibly effective set of security practices. They imposed strict compartmentation, regularly migrated identities and locations, required consistent Tor and PGP use, etc. They had legitimate punishments for people who transgressed the rules (expulsion), and they survived a massive investigation effort. Clearly, they were doing something right (actually, a number of things). Just as clearly, they are reprehensible people who engage in activity that is immoral and unethical by any measure. (Paying for child pornography to be produced is flat out wrong, regardless of where you stand on the spectrum of opinions regarding child porn laws.)

The thing is, there are basically no nice people who provide case studies of OPSEC practices. Most are engaged in violence, serious drug trafficking (at the “kill people for interfering” level), theft and manipulation of human beings, etc. That’s the nature of the beast.

People with well funded, trained and motivated adversaries have the strongest incentives to practice the highest level of security. They’re the ones to learn from.

How to Win at Kung Fu and Hacking

Everybody Was Hack Foo Fighting

I’m going to discuss a serious problem with the organisational structure and social dynamics of the hacker community, and why this puts hackers at risk. Hackers operate essentially the same way as the henchmen in a kung fu movie: they attack the adversary one by one by one… always losing. This is a terrible way of developing a robust core of knowledge about which OPSEC techniques work, which techniques fail, and why.

Organisational Learning for Dummies

There are two types of knowledge: individual and organisational. Hackers are very individualistic, and the knowledge they acquire tends to be very practical and experience based. There are few hacker organisations that seek to collect, retain, test and spread knowledge. The organisations that do crop up are either zines, which are knowledge artefacts that transmit techne, or hacker groups, which share tool chains and experience. However, these hacker groups have very short lifespans (measured in months and single digit years, not decades). They are compartmented in that there is some effort made to retain the group’s proprietary information, but internally they usually have a very poor security posture. They are social groups in many ways, so they are heavily compromised. As we say in infosec: “crunchy on the outside, chewy in the middle”.

Their opposition, the intelligence agencies and law enforcement departments, have decades of organisational history and knowledge. Individual members may display wide ranges of skill and competence, but the resources and core knowledge of the organisation dwarf what any individual hacker has available. Many of the skills that a hacker needs to learn, his clandestine tradecraft and OPSEC, are the sort of skills that organisations are excellent at developing and disseminating. These are not good skillsets for an individual to learn through trial and error, because those errors have significant negative consequences. An organisation can afford to lose people as it learns how to deal with the adversary; an individual cannot afford to make a similar sacrifice. After all, who would benefit from your negative example?

Challenges? More Like Opportunities!

Hackers are facing some very serious challenges now:

  • they lack organisations for collecting intelligence and knowledge about their adversary;
  • they face off against the adversary one at a time;
  • they learn very poorly from prior mistakes;
  • they don’t even know what skills they need;
  • and, perhaps most dangerously, they aren’t even aware they’re in the game.

It is amusing how many people think that interrogations involve violence and torture. Successful elicitation far more frequently involves whiskey, flattery, playing dumb, and being doubtful (“really? I didn’t know it was possible to do that. You must be pretty damn smart to have figured it out…”).

Winning at Secrets

There needs to be more information available on the techniques used during investigations, as well as before they begin. There needs to be documentation on how to evade those techniques, and why those evasions are successful. That knowledge needs to be captured and disseminated to those who can use it.

Required Reading

This is a short list of articles and papers that you absolutely must read if you want to understand OPSEC.

  • Terrorist Group Counterintelligence :: This is the thesis which later became the book Terrorism and Counterintelligence. Read at least one of them (the thesis is free).

  • Allen Dulles’s 73 Rules of Spycraft :: This is the handbook of how to live and operate securely. It is 50 years old and it has aged remarkably well. Read it. Study it. This will be on the test.

  • Clandestine Cellular Networks :: This paper deals primarily with the lessons learned from fighting insurgents, but it is extremely valuable as a handbook on tradecraft. I previously posted just the tradecraft chapter for people who don’t want to slog through all of it. I suggest reading all of it.

  • The Terrorists Challenge: Security, Efficiency, Control :: This paper examines the primary trade-offs that need to be made when operating a covert organisation. If you have multiple people working in secret, managing them and their work requires making trade-offs between security, efficiency and control. This paper will help you to understand those trade-offs.

Optional