Although there has been a lot of FUD written about the encrypted messaging systems developed and promoted by jihadi groups, very little of it has focused on how they are actually used. I wrote some notes about this earlier but wanted to expand on the subject in more depth.
There are a number of internet web forums used by supporters of the various jihadi groups fighting in the Middle East. These sites are primarily cheerleading and “in grouping” social networks, rather than operational message boards.
An important point to understand about these online forums is that they are about group dynamics. They provide a mechanism for people to feel like they are part of the struggle, with a graded scale of commitment. They don’t actually need to worry about getting their hands dirty (though technically they may be risking their freedom, and even their lives).
The sites all attempt to educate their users on security best practices. For example, the Islamic State (née ISIS) web forum heavily promotes the use of TAILS, while AQAP advocates for Tor usage in a 9 page guide. Despite this, few users actually bother with security precautions. Indeed, many continue to use Facebook and Skype as their primary communication channels with fellow online jihadists.
The encryption tools are branded software for self-identifying jihadis to feel like they belong. Indeed, other than the media outlets, which emphasise the use of the tools for branding and messaging, actual jihadis have a hard time using them: web jihadis complain of usability problems that prevent them from using the tools at all.
The media outlets for the different groups: IS, Nusra, AQ, all make sure that their followers know about their own branded encrypted messenger. Indeed, this is the primary clue to how these apps are actually used. They are branding tools that promote in-group sentiment. “I’m using the AQ encrypted messenger, so I am basically AQ”. These tools deliberately identify the user as a jihadi associate, not by accident or due to bad security practice, but rather as a deliberate part of their value proposition. “Use our encrypted messaging app and you will securely let the world know that you are with us!”
All of the major apps are simply branded wrappers around industry standard libraries, ciphers, and protocols. There is nothing particularly Islamic or jihadist about them except the branding. That is because the branding is actually the point. These are just social signals. Using AQAP’s messaging tool is the rough equivalent of wearing a sports jersey: it signals group identity to others. (Of course, given the outlaw nature of these groups, it seems like an extremely poor life decision.)
These apps are not designed for actual clandestine operational use. They are for making a social statement, signaling membership in a peer group. Even for this simple purpose, uptake remains remarkably low amongst the online jihadist set, who still primarily rely on Facebook and Skype for comms.
So if almost no one is using the encryption apps, and those that do are using them to signal membership in a broader organisation, what are the real jihadis using operationally? Facebook.
There was a Facebook account “sniper outside the law” that was posting clear text, but coded, messages believed to be related to jihadi operations in Tunisia. The account has been taken down and the guy running it was arrested.
Here are some examples of what he was posting (taken from here):
[screenshots of the coded Facebook posts]
The key take away is that the encrypted messaging apps from ISIS or AQAP are as operationally relevant as an encrypted messaging app from Man U or Liverpool. It might be exciting for some hardcore fans who want to show their support, but the real players don’t touch the stuff.
Real jihadis use secure codes and couriers, not some “My First Crypto Chat” Android toy.
Must Read: An article by Krypt3ia (published minutes before mine, the swine!)
A British man, Lauri Love, has been indicted for hacking. The indictment is thin on details, but it does have some interesting OPSEC insights that can be teased out by the patient reader.
The indictment of Lauri Love doesn’t reveal much about how he was identified. There is, however, some interesting info about the operational security measures taken by his crew, and they appear robust. The lack of information on how Mr Love was caught, combined with the revelation of good security practices, suggests one thing: an informant.
This post will only highlight the good operational security practices of the hacker group, since we don’t know what the mistakes were.
The indictment lists four members of the crew: Lauri LOVE (a/k/a “nsh”, a/k/a “route”, a/k/a “peace”) and three co-conspirators, identified only as CC-1, CC-2, and CC-3.
If I were to venture a guess, I’d reckon that CC-1 was caught first and became the informant used to take down the crew. I think this because CC-1 has the most specific geographic information, while the others are more vague in their locations. It is as if a lot of effort was invested in locating CC-1, and then the investigation focussed in on Mr Love.
A rough timeline from the indictment:

- LOVE on IRC discussing the NETCOM hack with CC-1, later w/ CC-2
- October 10, 2012: LOVE discusses ACC hack on IRC
- October, 2013: End of the conspiracy
The crew used scanners to locate vulnerable servers to exploit, and they shared the findings via their IRC.
peace: so can pivot and scan for other vulns [vulnerabilities]
peace: we might be able to get at real confidential shit
The crew used SQLI and ColdFusion exploits.
The crew used proxies and Tor to mask the origins of their attacks. From the indictment:
conceal their attacks by disguising, through the use of Proxy Servers, the IP addresses from which their attacks originated. Defendant LOVE and the other Co-Conspirators further used the Tor network, which was an anonymizing proxy service, to hide their activities.
The crew moved comms to new systems and changed their identities when they did so. This is a very good practice. Unfortunately, it appears that at least one member was logging the comms traffic. This created a security problem that could be exploited by the authorities.
route: consideration 1 : behaviour profile should not change
route: public side i mean
route: so whatever "normal", activities we do
route: should continue
route: but we move from this irc to better system
route: also
route: these nicks should change
route: i think
route: when we get on new communications
route: all new names
OPSEC Violation: No logs, no crime. Do not keep any unnecessary logs. If there is operationally critical information, make a record of only that information. Practically, this means: cut and paste it into a file, and keep that file encrypted.
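That advice is mechanical enough to sketch. Below is a minimal example of the “paste it into a file; keep that file encrypted” workflow, using Python’s cryptography library with a passphrase-derived key. The file names, KDF parameters and helper names are my own illustrative choices, not a prescription:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Stretch the passphrase into a 32-byte key; Fernet expects it base64-encoded.
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def append_note(path: str, passphrase: bytes, note: str) -> None:
    # Each note is one independently encrypted record; plaintext never
    # touches the disk (assuming your editor/clipboard don't leak it).
    salt_path = path + ".salt"
    if not os.path.exists(salt_path):
        with open(salt_path, "wb") as f:
            f.write(os.urandom(16))
    with open(salt_path, "rb") as f:
        salt = f.read()
    token = Fernet(derive_key(passphrase, salt)).encrypt(note.encode())
    with open(path, "ab") as f:
        f.write(token + b"\n")

append_note("ops.enc", b"correct horse battery staple", "dead drop serviced 0930")
```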
OPSEC Lesson: Migrating communications infrastructure and changing identities regularly is a good idea. It creates chronologically compartmented silos of info that limit the impact of a compromise. It can provide plausible deniability, and it can reduce the severity of a compromise. Do not contaminate between the compartments. And, of course, ensure that each commo channel is secure.
For at least some operations (all?) the crew spun up a new dedicated support server. This compartmented server was then discarded after use to minimize the connection to the group and any other operations. This is very effective OPSEC.
CC#2: but server must have no link to you or us
peace: :)
CC#2: when done we kill it
CC#2: for this plan
CC#2: we can reopen another one for other ongoing stuff
CC#2: but once this plan done we need to make sure they cannot all trace it back to us
OPSEC Lesson: Compartment as much as possible for each operation to avoid linking separate ops together. This also helps contain the damage if an operation is compromised and an investigation is launched. Dedicated logistical infrastructure is best. Don’t forget to sanitize it, both at the beginning and the end of the op.
Even a group with robust operational security practices is vulnerable to the oldest trick in the book: the informant. The take away lessons, covered above, are slightly more interesting.
So it is sad news for Mr Lauri Love, facing hacking charges, but at least there are some valuable OPSEC lessons for the rest of us. Remember: No logs, no crime.
X-Mailer: iPhone Mail (9A405)
From: the grugq <thegrugq gmail com>
Subject: Re: [redacted: name + title of the guilty talk]
Date: Thu, 5 Jan 2012 11:05:12 +0700
To: [conference committee list]
>> I have a different take on it [redacted-name]. I feel there is a lot of new
>> security research and work being done out there but it is being hidden
>> by the flood of introductory/survey/low-value talks. With a record 1,791
>> infosec talks at cons in 2010 (source: http://cc.thinkst.com/statistics/)
>> as an industry we've fucked ourselves and have elevated the role of a
>> speaking spot at a conference to something mythical and special when in
>> reality it has been watered down to the level that we've seen thus far
>> with the submissions to [this conference]
I agree to a large extent with this analysis, but I think there is
another facet that hasn't been brought up yet, which I call the "Episode
17a Ensign #3" problem.
(I'll be incendiary first, so if you're impatient you can stop reading
now and start flaming.)
Essentially (most) security cons are comic / star trek conventions, but
with less cosplay and even fewer girls. The conference talk might be
styled (somewhat) on the academic lecture, but realistically the
audience would rather have a Steve Jobs style product unveiling than a
lecture. They want some background info to ground themselves and align
expectations, then they want the big product reveal at about 40 minutes
in; and for a real treat, a "one more thing". (for product unveiling
see demo; and don't forget the tool release: "available right now, you
can download this today,... and hack the shit out of something")
This is entertainment, it is not knowledge transfer.
• most regional cons would be vastly improved as informal, activity-focused peer training events. Like the LUGs and Python groups and so on. Regular meetings to actively do something, with a few “event centric” talks thrown in as part of the evening’s entertainment, but also to guide the discussions and activities along. That’s how you get people learning shit: have them actually do it. Novel concept, eh? ;)
• the big cons get big names cause they have a symbiotic relationship.
And it doesn't require any backhanded arrangements; as a researcher with
a new topic to present, you're faced with two choices: blow your wad at
NoNameRegional Con, or save it for MassiveMediaExposure con in 4 months.
Guess which one will work more towards getting you laid?
This is why the big cons get the hit singles and the small cons get
supporting acts and "best of greatest hits" talks. It's part of why I
think conferences aren't helping the community very much.
• other problems include the high value that original research
frequently has, far in excess of the price of a ticket and
hotel... This makes independent researchers inclined to maximize value
on the market directly, rather than indirectly through conference driven
reputation building. For employees, they're in a similar situation
except their employers want to minimize liability and maximize ROI on
their big name researcher. So they aren't keen to release anything super
awesome, for free, at a con (i.e. someone else's branded event).
So that leaves a reduced set of potential speakers, combined with an incentive to present something sufficiently interesting to provide entertainment, but not so useful that it decreases in value. Note: I say these are incentivized behaviors, not what everyone (or anyone) does or wants to do.
• as a conference that isn't swamped with submissions, you have to be proactive. For SyScan Taiwan 2011, we made a hit list of
topics we wanted, and another list of people who were either subject
matter experts on a target topic, or whom we wanted to meet up with. We
then spent about 6 weeks chasing every single speaker down personally
and inviting them to speak. In the end, if you see our line up, I think
it is fair to say this is an effective strategy for getting an AllStar
line up.
Obviously this isn't effective at finding new talent, because you can't chase down someone you don't know exists.
That's why we, as a community, need breeder events that help to make the existing conferences stronger by finding the new talent, encouraging them to develop their technical skills and their presentation skills (they've got to learn to entertain an audience for an hour). Presenting a bit of research at the local security meetup is a good start to a career of talking about typing on a keyboard...
Oh right, back to how we're all just at a cosplay-free comic con.
So the one hour talk format isn't good for knowledge transfer; it rewards entertainers more than pure researchers. This leads to a few
super rockstars who deliver(ed) the goods, and know how to do a product
unveil at 42 minutes into their slot. This ends with a few Shatneresque
rockstars and loads of "ensign #3 from episode 17a, the one where
Shatner massaged the heap for an hour and then dropped shells all over
everything, it was the first time he did a multiple root in public. So
cool!!!"
The 1 hour presentation format is completely shit for knowledge transfer. I hold to the barcon-inspired theory that your new research is either simple enough that you can explain it over a beer (i.e. 5 minutes of content), or something so complex that I want the white paper version to work through at my own pace. There is genuine frustration at the (frequently) horrible Product Unveil style talks which take an hour to reveal 5 minutes of content.
On the other side is the frustration at talks which are made up of
potentially interesting info, but the slide deck is all lolcats, the
code is never released, and the presenter never writes up the white paper.
The NYPD created an operational procedure for conducting undercover investigations on social media. The procedural document reveals the operational security measures for these investigations. The security is founded on the use of an “online alias” (the officer’s undercover account) and strict compartmentation. Given the capabilities of the adversaries that the NYPD faces, this is probably sufficient security.
It is a fascinating glimpse into the operational process of an investigation. Definitely worth reading to get a sense of what the police face when conducting an online investigation (hint: paperwork).
Fundamentally this is basic operational security grounded on compartmentation. The use of dedicated hardware and pseudonymous internet access allows the officer to create and operate an online undercover account without any links to the NYPD. The basic security precautions are designed to protect the officer’s laptop from being compromised. A compromised laptop could enable the adversary to conduct a counterintelligence investigation.
This is very basic stuff, but should be more than sufficient against the adversaries that the NYPD pursues. These adversaries should not have access to any of the records of the phone company supplying the internet access.
Here is the information that is required to create the undercover account:
- Username (online alias)
- Identifiers and pedigree to be utilized for the online alias, such as email address, username and date of birth.
- Do not include password(s) for online alias and ensure password(s) are secured at all times.
- Indicate whether there is a need to requisition a Department laptop with aircard.
- Review photograph to be used in conjunction with online alias, if applicable.
- Consider the purpose for which the photograph is being used and the source of the photograph.
Here is the full section dealing with operational security:
Operational Considerations
When a member of the service accesses any social media site using a Department network connection, there is a risk that the Department can be identified as the user of the social media. Given this possibility of identification during an investigation, members of the service should be aware that Department issued laptops with aircards have been configured to avoid detection and are available from the Management Information Systems Division (MISD). A confidential Internet connection (e.g., Department laptop with aircard) will aid in maintaining confidentiality during an investigation. Members who require a laptop with aircard to complete the investigation shall contact MISD Help Desk, upon APPROVAL of investigation, and provide required information.
In addition to using a Department laptop with aircard, members of the service are urged to take the following precautionary measures:
- Avoid the use of a username or password that can be traced back to the member of the service or the Department;
- Exercise caution when clicking on links in tweets, posts, and online advertisements;
- Delete “spam” email without opening the email; and
- Never open attachments to email unless the sender is known to the member of the service.
Furthermore, recognizing the ease with which information can be gathered from minimal effort from an Internet search, the Department advises members against the use of personal, family, or other non-Department Internet accounts or ISP access for Department business. Such access creates the possibility that the member’s identity may be exposed to others through simple search and counter-surveillance techniques.
Undercover operations online rely on very basic operational security. Primarily compartmentation and reviews to ensure that the account isn’t going to be associated with the NYPD.
This is a short collection of notes I’ve put together on how you can be identified via your mobile phone. If you want to securely use a mobile phone, you’ll need to use a burner. This is non-trivial. Here’s a good guide.
Mobile phones should primarily be used for signalling, rather than for actually communicating operational information. Remember the golden rule of telephone conversations:
Know how to turn the phone to a completely off state. This means removing the battery, taking out the SIM card, and placing it in a shielded bag (if possible). This “really off” state is how you store and transport the phone when not in use.
A note on storage: it should not be at your house or anywhere that is directly linked to you.
Where you use the phone is itself very important. Never use it at locations which are associated with you, that means never at home, never at the office/work, never at a friend’s house. Never have the phone in an ON state at locations that are associated with you, or your immediate social network. Never.
Do not turn the phone on in the same location as a phone associated with you. Make sure that your real phone is somewhere else and, if possible, not in an OFF state. You don’t want the disappearance of one phone from the network to coincide with the appearance of another. Paired events are indicators of relation, and you want to avoid those as much as possible (the sketch after the list below shows how cheaply such pairs are detected). You also want your regular phone to appear with a typical usage pattern, which means keeping it on as you normally would.
Never use different phones from the same location.
Never carry phones for different compartments together (keep them turned off, batteries out).
Never carry phones turned on over the same routes you normally take. Avoid patterns and predictability.
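To see why this discipline matters, here is a sketch of the paired-event analysis an adversary with network registration logs could run. The log format, event names, and the 30 minute window are assumptions for illustration:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)

def paired_events(log):
    # A detach (one phone goes dark) followed closely by an attach (another
    # phone appears) suggests both identities travel with the same person.
    detaches = [(who, ts) for who, ev, ts in log if ev == "detach"]
    attaches = [(who, ts) for who, ev, ts in log if ev == "attach"]
    return [(a, b, b_ts - a_ts)
            for a, a_ts in detaches
            for b, b_ts in attaches
            if a != b and timedelta(0) <= b_ts - a_ts <= WINDOW]

# Hypothetical tower log: personal phone vanishes, burner appears 13 minutes later.
log = [
    ("personal-phone", "detach", datetime(2014, 1, 6, 19, 58)),
    ("burner", "attach", datetime(2014, 1, 6, 20, 11)),
]
print(paired_events(log))  # [('personal-phone', 'burner', datetime.timedelta(...))]
```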
The goals of secure communications are the following; some of these are surprisingly difficult to achieve:

- protect the content of the message
- protect the meaning of the message
- conceal that any communication is taking place at all
- conceal the identities of the parties communicating
The first and second objectives can be accomplished using some combination of cryptography and coding. Unfortunately, this is the easy part. The more complicated and difficult component of a secure communications infrastructure is achieving the third and fourth objectives. For now, however, I will focus only on the first two issues: protecting content, and meaning.
First, let’s define our terms so we can discuss the subject with clarity. A code substitutes pre-arranged meanings: innocuous words or phrases stand in for sensitive ones. A cipher transforms the text itself, letter by letter or bit by bit, without regard to its meaning.
Codes are extremely useful mechanisms for sending small messages, although since they are plain text their hidden meaning can be revealed once the key is cracked. Another issue with codes is that they are inflexible compared to a cipher system. Coding requires pre-arranged mappings of meanings (what symbols or words translate to what), or at least pre-arranged mechanisms to derive the mappings (e.g., book codes).
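As a concrete illustration of such a pre-arranged mechanism, here is a toy book code: both parties hold the same text and exchange (line, word) coordinates instead of words. The shared text and function names are invented for the example:

```python
# A toy book code: both parties share the same edition of a text and
# refer to words by (line, word) position instead of sending them.
SHARED_TEXT = """the quick brown fox jumps over the lazy dog
pack my box with five dozen liquor jugs"""

def build_codebook(text):
    # Map each word to the (line, word) coordinates of its first occurrence.
    book = {}
    for ln, line in enumerate(text.splitlines()):
        for wn, word in enumerate(line.split()):
            book.setdefault(word, (ln, wn))
    return book

def encode(message, book):
    return [book[w] for w in message.split()]  # KeyError if word not in book

book = build_codebook(SHARED_TEXT)
print(encode("five brown jugs", book))  # [(1, 4), (0, 2), (1, 7)]
```

The coordinates are meaningless without the shared text, which plays the role of the key.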
To be effective, a code must maintain proper grammar, be consistent, and fit a plausible pretext. If it fits these requirements, and is used appropriately (briefly, consistently, with cover for action) then a code system is an excellent choice for simple signalling purposes.
During World War II the BBC cooperated with the intelligence services to send open code signals to operatives in the occupied territories. These signals were prearranged with the operatives, and then sent out at two scheduled times. This signalling channel was used exclusively for indicating whether an operation was going to take place.
The BBC would broadcast the signal for the first time at 1930, and then confirm the signal at 2115. If the operation had been canceled before the second scheduled signal window, the code phrase would not be repeated.
During the early phase of the war, the code system was slightly more complex. There would be a positive code, and a negative code, for example: “Jeanne sends her greetings” might be a “go code”, and “Jeanne says hello” might be the “abort code”. Later this was simplified to just the positive code (a tradition that, apparently, the CIA still follows).
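The two-broadcast scheme reduces to a simple decision rule: act only if the go phrase airs at both scheduled times. A sketch of that logic, using the times from above and one of the example phrases (the function name is mine):

```python
def operation_is_go(broadcasts, code_phrase):
    """broadcasts: list of (time, phrase) heard on the scheduled channel.

    The operation proceeds only if the go phrase airs at the first slot
    (1930) AND is confirmed at the second slot (2115). A missing second
    broadcast means the operation was cancelled in the interim.
    """
    heard = {t for t, phrase in broadcasts if phrase == code_phrase}
    return "1930" in heard and "2115" in heard

# The 1930 signal went out, but the confirmation did not: abort.
print(operation_is_go([("1930", "Jeanne sends her greetings")],
                      "Jeanne sends her greetings"))   # False

# Both broadcasts heard: go.
print(operation_is_go([("1930", "Jeanne sends her greetings"),
                       ("2115", "Jeanne sends her greetings")],
                      "Jeanne sends her greetings"))   # True
```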
There are problems when codes are used inconsistently. For example, some mafia codes used oblique references to the boss as “aunt”, or “Aunt Julia”. This was very ineffective when the mafioso suffered pronoun slippage and called their “aunt” “he”.
I’ve collected some examples of real al Qaida codes that were actively used prior to the 9/11 attacks. Another type of basic open code is “business code”, which is also used by some criminal groups, where the actors are referred to as business interests or rivals, and criminal activities are described as “projects” or other innocuous business terms.
A simple code that was used by two KGB operatives was the phrase “I think we should go fishing now”, which indicated that they should discuss business.
During the early stages of the KGB handling of their FBI penetration Hanssen, they had a mishap with locating and loading the dead drop for his payment. To correct this error, they had to contact Hanssen by phone and use a code that was not pre-arranged (there was no contingency in place for “what happens if we can’t find the dead drop”). The dead drop location was underneath a footbridge, and the KGB operative had placed his load underneath the wrong corner.
Since they had used a pretext of purchasing a used car for their initial contact, the KGB continued to use that pretext for their “oops!” communique. The KGB operative prepared his telephone conversation thoroughly beforehand so that it would sound natural and plausible:
KGB: The car is still available for you as we have agreed last time, I prepared all the papers and left them on the same table. You didn’t find them because I put them in another corner of the table.
Hanssen: I see
KGB: You shouldn’t worry, everything is okay. The papers are with me now.
Hanssen: Good
KGB: I believe under these circumstances, it’s not necessary to make any changes concerning the place and time. Our company is reliable, and we are ready to give you a substantial discount which will be enclosed in the papers. Now, about the date of our meeting. I suggest that our meeting will take place without delay on February 13, one, three, 1:00 PM. Okay? February 13.
Hanssen: …. Okay.
The conversation is clearly stilted and strange, but not so strange as to draw attention to itself. It also doesn’t reveal anything of the meaning that is being relayed.
When creating a signaling code, it is important that the pretext for the signal be broad and widely applicable. Generally it is better that the code be a specific subject, rather than a specific phrase. Phrases are easy to mix up, forget, or otherwise confuse. They are also more rigid and harder to work into a conversation. A subject, on the other hand, is very easy to raise and discuss in a plausible fashion without seeming forced or unnatural.
A final short code example. This is a signaling code, adapted from a novel; however, it accurately conveys how simple these codes can be. This is a phone call between two colleagues, where Alice has to signal that an emergency has occurred:
Alice: Hi, sorry to call so late
Bob: No problem
Alice: Is our meeting scheduled for tomorrow at 8:30, or at 9?
Bob: It is 8:30, bright and early.
Alice: Ok, right. Just checking. Thanks, bye
When using a code to refer to a classified subject, even though unclassified terms are used, the subject is still classified. This is a breach of security. See the US Army handbook on COMSEC section dealing with ATTEMPTS TO DISGUISE INFORMATION (Section 8.4).
“Talking around” is a technique in which you try to get the information across to the recipient in a manner you believe will protect it. However, no matter how much you try to change words about a classified or sensitive subject, it is still classified or sensitive.
Self-made reference system. This is an attempt to encipher your conversation by using your own system. This system rarely works because few people are clever enough to refer to an item of information without actually revealing names, subjects, or other pertinent information that would reveal the classified or sensitive meaning.
These are concerns to keep in mind when developing a code system for discussing sensitive information.
Codes: keep them generic, keep them consistent, limit their use to simple signalling.
Recently (December 16th, 2013) there was a bomb threat at Harvard University, during finals week. The threat was a hoax, and the FBI got their man that very night. The affidavit is here.
This post will look at the tools and techniques the operative used to attempt to hide his actions, why he failed, and what he should’ve done to improve his OPSEC. As a hint: I provided an outline of what he should’ve done 6 months ago in “ignorance is strength”.
Disclaimer: This post is to outline why OPSEC is so difficult to get right, even for people who go to Harvard. I am not encouraging any illegal behavior, but instead analyzing how OPSEC precautions can be so difficult to get right. Don’t send bomb threats.
Strategically, the principal behind this operation (Eldo Kim) was attempting to avoid taking a Final Exam scheduled for the morning of December 16th. To accomplish his objectives he designed an operation that would cause an evacuation of the building where he was to take his final. Rather than recruit an agent and delegate the execution of the operation, the principal decided to do it himself.
This was not an enlightened decision.
All offensive operations share a similar core structure. This structure has been known for a long time in the military, but is rarely applied in other fields. Operations have distinct phases that they move through as they progress from vague idea, to concrete plan, through execution and, finally, onto the escape.
The outline framework for an operation, all of the phases, is the following: target selection, planning, execution, and escape and evasion.
This framework is frequently used when dissecting a terrorist attack post mortem, allowing the security forces to identify the agents involved in each phase. Ideally, the security forces want to remove the people involved in the Target Selection and Planning stages. These people tend to be the principals, and are more valuable than the agents who actually perpetrate the attack.
For hacker groups, the operational phases are rarely acknowledged, and are followed only in an ad hoc manner, primarily because few hackers are aware of them. It would be beneficial for hackers to understand the structure of a thoroughly prepared operation, but that is an issue we’ll address another day.
As an aside, it is worth noting that these operational phases apply to a consultancy making a sale, providing a service, dropping a deliverable, and then vanishing. ;)
All real criminals know that the most important part of an operation is the get away, the git (as it used to be called). Of course, real criminals don’t go to Harvard University (although there’s an argument to be made that some graduate from there), and so poor Eldo Kim had no one to teach him the criticality of the final stage of an operation: Escape and Evasion.
The operative used an ad hoc approach to his operational design, and as a result he made a fatal error. His operational plan covered the tooling, the target, and the delivery of the threat, but gave no thought to the escape.
For security, the operative chose to rely on a pseudonymous email tool and the Tor anonymity network. He used the Tor Browser Bundle on OSX rather than the TAILS distribution (see: para 11). Provided he closed the tab between each session, there should be no forensic evidence left on the laptop.
NOTE: When using Tor Browser Bundle close all the tabs and exit the application when you are done. The TBB will clean up thoroughly after itself, but only on exit! When you are done, shut it down. Runa’s paper explores this in detail.
The strategic target was the hall hosting the final exam. Tactically, the principal selected “email addresses at random” to receive a bomb threat intended to force an evacuation of the hall, along with a number of other cover locations.
The planning phase appears to have been focused solely on the technical requirements of masking the origination of the threatening emails. However, insufficient resources were devoted to this phase, and therefore it was fundamentally flawed.
Here is the email he sent:
shrapnel bombs placed in:
science center
sever hall
emerson hall
thayer hall
2/4. guess correctly.
be quick for they will go off soon
Clearly he intended to provide cover locations, and he attempted to prolong the bomb search by suggesting that some locations were legitimately bomb free. It is standard operating procedure for bomb threats to be investigated thoroughly and in parallel.
The operative chose to use GuerrillaMail to send the emails, and because GuerrillaMail reveals the source IP of the sender, he also chose Tor to mask his IP address. However, he used a monitored network to access Tor, which severely limits the anonymity provided by Tor. This error was to prove fatal.
Kim used the Harvard University wifi network. To gain access, he had to login with his username and password. The university monitors and logs all network activity. This was the fatal error. He authenticated to the network, his IP was used to access Tor, and this information was logged.
When the incident was investigated the FBI was able to pull the logs and determine not just whether anyone had accessed Tor, but exactly who had accessed Tor.
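That query is worth spelling out, because it shows how little anonymity Tor buys on an authenticated, monitored network. This is a reconstruction of the kind of log intersection the investigators could run, not their actual tooling; the data structures and names are assumed:

```python
def tor_users(flow_log, tor_relay_ips):
    # Anyone whose authenticated session connected to a known Tor relay
    # during the threat window becomes a suspect: the anonymity set
    # shrinks from "all students" to "campus Tor users that morning".
    return {user for user, dest_ip in flow_log if dest_ip in tor_relay_ips}

# Hypothetical data: relay addresses come from the public Tor consensus,
# flow_log from the university's authenticated wifi logs.
relays = {"198.51.100.7", "203.0.113.42"}
flows = [("student-17", "198.51.100.7"), ("student-52", "192.0.2.10")]
print(tor_users(flows, relays))  # {'student-17'}
```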
There was nothing at all done for the escape and evasion phase. It is worth noting that there is little he could have done to prepare for an interview by seasoned professional FBI interrogators. As an amateur, he stood approximately zero chance of surviving.
A study of the investigation methods used by the law enforcement officials engaged to investigate bomb threats would have been beneficial for Mr Kim. He would have realized that they would target the likely suspects, attempt to narrow the suspect pool down to the minimum set, then start interviewing. The more strongly the evidence points to a set of suspects, the more aggressive the interviews will be. From “do you know anything about…” to “We have all the evidence we need, why don’t you make it easy for yourself?”
Initially the suspects for the case would have been any student scheduled to take an exam at one of the targeted halls. This is doubtless a large number, and without any specific information to go on, the chance of interviewing all of them is slim. If, however, the FBI did interview all of them, the questioning would be general and undirected, rather than specific and probing. An amateur, like Kim, who kept his cool and simply denied any knowledge of the hoax would have had a reasonable chance of evading suspicion.
Knowing the investigative techniques of his adversary would have allowed Kim to design an operation that provided for a reliable escape and evasion phase. He would have used an unmonitored network, in an unmonitored location near by the school, to send his threats. This would have left the suspect pool extremely large – “everyone”.
When planning an operation, know how the adversary will respond. This will allow you to factor that response into your planning. If you do not know how your adversary will respond, then their response will be a surprise. Do not allow the reactive force to surprise you.
The content and context of the threat make it clear that the originator of the emails was a student (or possibly a professor/TA trying to avoid grading exams). The important thing to hide is which student, not that it was a student. Therefore simply using a nearby cafe with free wifi should have been sufficient to mask the specific identity of the operative, assuming he left no other traces (for example, camera footage, a logged MAC address, or a login to a personal account).
Using Tor from the college campus was a fatal error. The pool of suspects was immediately reduced to “everyone that used Tor during the time the bomb threats were sent”. Since Silk Road v1 has been shut down, that is obviously going to be a small number.
Strategically, the operation was successful. Eldo Kim will not have to take his final exam. Or, indeed, other final exams he might not be prepared for. However, it is hard to imagine this is the outcome he was hoping for.
Suggested Reading: Runa’s analysis of the Harvard Bomb Hoax
If your secure communications platform isn’t being used by terrorists and pedophiles, you’re probably doing it wrong. – [REDACTED]
A few years ago a group of child pornographers was infiltrated by police, who were able to monitor, interact with, and aggressively investigate the members. Despite a 15 month undercover operation, only one in three of the pedophiles was successfully apprehended. The majority, including the now infamous leader Yardbird, escaped capture. The dismal success rate of the law enforcement officials was due entirely to the strict security rules followed by the group.
This post will examine those rules, the reasons for their success, and the problems the group faced which necessitated those rules.
(An examination of the group’s security from a slightly different perspective was conducted by Baal and is available here.)
All covert organizations face a similar set of problems as they attempt to execute on their fundamental mission: to continue to exist. A covert organization in an adversarial environment faces a number of organizational challenges and constraints. Fundamentally, how it handles trade-offs between operational security and efficiency mandates how group members perform their operational activities. Strong OPSEC means low efficiency, while high efficiency necessitates weak OPSEC. The strength of the oppositional forces dictates the minimum security requirements of the covert organization.
Examining the operational activities – those actions the organization must engage in to self perpetuate – allows us to evaluate their operational security decisions within their environmental context.
The Yardbird child abuse content group (hereafter also called the enterprise) had a number of core goals that had to be addressed to continue operation: they needed to distribute their child abuse content to members; communicate between members; raise funds to acquire new content; recruit new members (presumably for access to additional child abuse content).
Explicitly stated, this is an enumerated list of the operational activities that the group had to engage in to self perpetuate.
Except for the first issue (strategically significant only to this group), these are pretty typical activities for a clandestine organization. Besides their defining operational activity, they need a communications channel, fund raising capability, and membership management processes.
The law enforcement authorities caught a pedophile distributing child abuse content. He was a member of the Yardbird group and offered up complete access to the group, along with archival logs, in exchange for leniency.
All of the information about this group comes from the Castleman Affidavit, the Baal analysis, and some Baal follow ups.
The law enforcement authorities were able to completely penetrate the enterprise for a 15 month period, from 2006-08-31 through 2007-12-15. During that time the group posted 400,000 images and 1,100 videos. The enterprise had approximately 45 active members, although independent observers have claimed this is low, with the real membership anywhere from 48 to 61.
The total number of arrests was 14, or somewhere around one third. A fully staffed, highly motivated, well trained adversarial force with complete penetration of a large complacent group was only able to achieve a one in three success rate. The majority of those successes were achieved due to group members being insufficiently cautious and violating the enterprise security rules. Evidently, these security rules are extremely resilient against adversarial assault.
The members who were caught were those who violated the security SOP of the group.
The enterprise was careful to ensure that the encrypted files containing child abuse images were located in a different newsgroup from the communications newsgroup. One possible reason is to unlink the obvious encrypted group discussion from the larger encrypted content posts. That is, they compartmented their commo from their file sharing. As an additional, although superfluous, step, the enterprise would apparently alter the sequence numbers of the split binary uploads so that reassembly would be hampered. What this cumbersome step added beyond the existing PGP encryption is unclear (if your adversary can break PGP, they can probably figure out the order of some files).
The enterprise would use the primary newsgroup, at the start of the investigation alt.anonymous.messages, to announce the location of a media cache for group members. The communications newsgroup was always reserved strictly for communications. The announcements regarding new downloads provided detailed instructions as to the location of the child abuse content, plus how to download, assemble and decrypt it.
The group used a single shared PGP key for all members. On the one hand, this would completely negate the security provided by PGP if the key fell into the wrong hands. It also limits the group’s ability to expel a member who transgresses the rules and needs to be punished. On the other hand, the use of a shared key makes key management significantly easier, which is a serious concern when you need to rekey every few months. Additionally, using only one key reduces the ability of the adversary to determine group size by examining the PGP packets. It also removes the potential for a group member to reuse a key that is linked to their real identity. See this excellent presentation for more details on those attacks.
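The group-size attack is simple to demonstrate: a message encrypted to N recipient keys carries N public-key encrypted session key (PKESK) packets, which `gpg --list-packets` will enumerate. A sketch of counting them (the file name is hypothetical; gpg may exit non-zero when it cannot decrypt the payload, but it prints the packet headers first):

```python
import subprocess

def count_recipients(pgp_file: str) -> int:
    # One ":pubkey enc packet:" per key the message was encrypted to.
    # A group using individual keys leaks its size here; a single shared
    # key always shows exactly one packet, whatever the membership.
    proc = subprocess.run(["gpg", "--list-packets", pgp_file],
                          capture_output=True, text=True)
    return sum(1 for line in proc.stdout.splitlines()
               if "pubkey enc packet" in line)

print(count_recipients("post.pgp"))
```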
The enterprise expanded by allowing new members to join. There were clear guidelines, procedures and rules for expansion. First there was a background check to ensure that the prospective member was an established and active participant in the wider community of child abuse image traders. Then an existing member had to invite the prospect to the group. Finally, to demonstrate their deep involvement in the activity and to prove they were not an undercover cop, the prospect had to pass a timed written test on the minutiae of various child abuse victims and media.
The majority of the group avoided capture partly due to the technology they were using (Tor), but primarily due to their adherence to the security rules of the group. They had very good OPSEC and they followed it consistently. Fundamentally, they had complete compartmentation within the group: they did not reveal information to each other. The law enforcement authorities were able to get logs of all their communications traffic, plus logs of the IP addresses they used for posting. Everyone that used Tor (as per the recommendation of Yardbird) was anonymous at the IP layer. This protected them from a subpoena revealing their identity. As long as there was no additional information that they had revealed about themselves in their messages, they were secure against the opposition.
The use of PGP was essentially a No-OP in this case. It excluded the general public from accessing the content of the communications traffic (and the child abuse videos and images). It did not protect the traffic against analysis by the opposition (who had successfully infiltrated the group). The encryption was not a factor in their successful evasion. Rather, it was the content of the messages, controlled and dictated by the security rules, which protected their secrets.
Guarding secrets involves not sharing them. Encryption can only ever protect the content of a communique. Real security must start with the content itself, and then use encryption as an additional layer.
(Feel free to skip this part if you don’t think studying how child pornographers avoid capture is relevant)
When analyzing the activities of groups operating in an adversarial environment to learn what works, what doesn’t, and why, (unfortunately) the pool of covert organisations is somewhat limited: intelligence agencies; terrorist groups; hacker crews; narcos; insurgents; child pornographers… Few other groups face such a hostile operating environment that their security measures are really “tested”.
The group examined in this post had an incredibly effective set of security practices. They imposed strict compartmentation, regularly migrated identities and locations, required consistent Tor and PGP use, etc. They had legitimate punishments for people who transgressed the rules (expulsion) and they survived a massive investigation effort. Clearly, they were doing something right (actually a number of things). Just as clearly, they are reprehensible people who engage in activity that is immoral and unethical, by any measure. (Paying for child pornography to be produced is flat out wrong, regardless of where you stand on the spectrum of opinions regarding child porn laws.)
The thing is, there are basically no nice people who provide case studies of OPSEC practices. Most are engaged in violence, serious drug trafficking (at the “kill people for interfering” level), theft and manipulation of human beings, etc. That’s the nature of the beast.
People with well funded, trained and motivated adversaries have the strongest incentives to practice the highest level of security. They’re the ones to learn from.
I’m going to discuss a serious problem with the organisational structure and social dynamics of the hacker community, and why this puts hackers at risk. Hackers operate essentially the same way as the henchmen in a kung fu movie: they attack the adversary one by one by one… always losing. This is a terrible way of developing a robust core of knowledge about which OPSEC techniques work, which techniques fail, and why.
There are two types of knowledge: individual and organisational. Hackers are very individualistic, and the knowledge they acquire tends to be very practical, experience based. There are few hacker organisations that seek to collect, retain, test and spread knowledge. The organisations that do crop up are either zines, which are knowledge artefacts that transmit techne, or hacker groups, which share tool chains and experience. However, these hacker groups have very short lifespans (measured in months and single digit years, not decades). They are compartmented in that there is some effort made to retain the group’s proprietary information, but internally they usually have a very poor security posture. They are social groups in many ways, so they are heavily compromised. As we say in infosec: “crunchy on the outside, chewy in the middle”.
Their opposition, the intelligence agencies and law enforcement departments, have decades of organisational history and knowledge. The individual members can display wide ranges of skill and competence, but the resources and core knowledge of the organisation dwarf what any individual hacker has available. Many of the skills that a hacker needs to learn, his clandestine tradecraft and OPSEC, are the sort of skills that organisations are excellent at developing and disseminating. These are not good skillsets for an individual to learn through trial and error, because those errors have significant negative consequences. An organisation can afford to lose people as it learns how to deal with the adversary; an individual cannot afford to make a similar sacrifice. After all, who would benefit from your negative example?
Hackers are facing some very serious challenges now.
It is amusing how many people think that interrogations involve violence and torture. Successful elicitation far more frequently involves whiskey, flattery, playing dumb, and being doubtful (“really? I didn’t know it was possible to do that. You must be pretty damn smart to have figured it out…”).
There needs to be more information available on the techniques used during investigations, as well as before they begin. There needs to be documentation on how to evade those techniques, and why those evasions are successful. That knowledge needs to be captured and disseminated out to those who can use it.
Terrorist Group Counterintelligence :: This is the thesis which later became the book Terrorism and Counterintelligence. Read at least one of them (the thesis is free).
Allen Dulles’s 73 Rules of Spycraft :: This is the handbook of how to live and operate securely. It is 50 years old and it has aged remarkably well. Read it. Study it. This will be on the test.
Clandestine Cellular Networks :: This paper deals primarily with the lessons learned from fighting insurgents, but it is extremely valuable as a handbook on tradecraft. I previously posted just the tradecraft chapter for people who don’t want to slog through all of it. I suggest reading all of it.
The Terrorist’s Challenge: Security, Efficiency, Control :: This paper examines the primary trade offs that need to be made when operating a covert organisation. If you have multiple people working in secret, managing them and their work requires making tradeoffs between security, efficiency and control. This paper will help you to understand those tradeoffs.
Reading this interview with the prosecutor of Robert Morris Jr. about the Morris Worm, there are a few cool OPSEC lessons we can learn.
One way was with computer forensics. Tracing back the source of the worm. The second way was one of Morris’s friends talking to The New York Times: in response to some articles that John Markoff was writing, he inadvertently gave his initials.
There were a couple of ways that he was discovered. The first was the forensic analysis of the worm itself, and tracing that back to the original infection point. This sort of evidence shows where to look (the original infection), but it does not provide enough information to successfully prosecute. It is circumstantial so far, and given some careful sanitisation of the original box, it would be a very hard case to prove.
The far more damaging way that Morris was caught was via an OSINT case officer doing HUMINT collection (a reporter interviewing people about the worm). The journo managed to elicit information about the worm’s author (his initials). This is the sort of extremely damaging information leakage that happens when there is poor OPSEC. There was no anti-interrogation training provided to the members of the Morris cell (i.e. all his friends who knew about the development of the worm).
he did testify that he wrote the worm. He came in and testified, “I did it, and I’m sorry.” I turned to my co-counsel and asked, “Should I prove he didn’t do it or he’s not sorry?”
When the prosecution has to prove that you committed a felonious act, it is a lot easier for them when you confess on the stand. I can’t second guess the decisions of Morris’ legal counsel, but unless you are instructed to do so by your lawyer: STFU.
We talked to his friends. His friends were witnesses for us. They didn’t have a choice. There was a core group. …one of the meetings where Robert Morris was discussing the worm occurred at a Legal Seafood in Kendall Square… He talked about how it was developed, how it worked, what vulnerabilities it exploited. At one point he was at a meeting back at Harvard, he got so excited that he literally jumped up on a table pacing back and forth on the table explaining how it worked…
The close friends of Robert Morris, the Morris Cell, were fully briefed on all aspects of the worm: its capabilities, its functionality, and its author’s real identity. Yet none of the other members of the cell were actively exposed to the risks of the operation; they had no “need to know”.
This failure to STFU, to properly compartment the design and development of the worm, was a key factor leading to his capture and prosecution. Fortunately, things worked out well for him, in the long run.
The rule of thumb is: if someone is actively sharing the risk, they have a need to know. This need to know is, of course, restricted to only those aspects of the operation in which they are actively involved.
The goal of OPSEC is to control information about your capabilities and intentions to keep them from being exploited by your adversary.
In typical hacker fashion, the term OPSEC has come to mean more than just information about capabilities and intentions; it also covers personal information about yourself.
A common source for the idea that “security through obscurity is bad” is Kerckhoffs’s principle, which states that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge. OPSEC as a system of security is sometimes confused with “security through obscurity”. This is not the case. Such thinking reflects a confusion of both the problem with opaque security systems and the foundations of OPSEC.
The way to clear this confusion, I believe, is to point out that OPSEC is a security system, not any one specific practice. The system itself is open source, in that we know how and why the various techniques and practices work. For example, the tradecraft technique of a dead drop is public knowledge. The security of a dead drop rests not on no one knowing how dead drops work, but rather on the adversary not knowing where a specific dead drop is located, nor when that dead drop is being serviced (loaded or unloaded). That information, primarily the location of the dead drop, is the secret key to the dead drop security system. This information is what must remain secret for the dead drop to remain secure.
So OPSEC as a system of security does not violate Kerckhoffs’s principle, and is not “security through obscurity”. The specifics of any one application of OPSEC techniques provide the security, but those are analogous to the private key of the system. If they are compromised, then the security they provide will be compromised.
Learning good OPSEC requires internalizing the behavioural changes required to continually maintain a strong security posture. The operational activities have to become habit, because the small things matter, and every careless mistake can compromise security. The only way to develop good OPSEC habits, good security hygiene, is to practice. Make the foolish beginner’s mistakes during a practice session, rather than in the field.
After developing good security hygiene habits, the second most difficult thing about good OPSEC is learning patience. Increased OPSEC security comes at the cost of efficiency, primarily in communication time-frames. The OPSEC mechanisms that must be in place to reduce the risks during communication add latency. As a result, communication takes significantly longer and is less reliable. Obviously, this is more of an issue with time sensitive operations than those that have more generous deadlines.
The single greatest security risk is communication between operatives. Clandestine agencies, such as the CIA, MI6, DGSE, etc. will work incredibly hard to minimize the risks surrounding communication with their recruited agents. In the simplest form, this involves a 2-4 hour “surveillance detection route” (SDR) to see if they are “in the black” before they perform any operational activity. This is on top of the hours of planning for the operation itself (note: these are minimums, operations requiring high security might take weeks or months of planning, and 12 hour SDRs).
The technology that exists to facilitate information security, e.g. encryption, is important, but it is not sufficient, or even the starting point, for robust OPSEC. By all means, learn to use encryption software correctly and in a properly secure fashion. However, it is more important to compartment sensitive activities and structure your operational environment for impact containment than to install and use any particular software.
NOTE: Events have overtaken my slow writing speed. This post was in the works before the Silk Road bust in September 2013. I’m uploading it anyway because it has some useful information; however, there seems little point in finishing it now.
The dealers on Silk Road ship a large amount of illegal products around the world, and it is clear that they’re successful at it. However, the US Postal Service has long been aware that drug dealers use their service for shipping illegal substances, and has developed guidelines for efficiently identifying suspect packages. Unfortunately for them, those guidelines have leaked, and this allows someone abusing the US mail as an illicit distribution channel to evade the USPS’s checks.
The actual guidelines for suspicious packages list a number of major indicators that the inspectors look for. This guide is somewhat outdated, and a revised version has also been leaked. In both cases, the triggers and the reasoning behind them are similar.
Anything that looks like someone is sending slightly over an even metric weight of something, from a known suspect location, to another person, in an old heavily taped package with a fake return address. Sounds like bad tradecraft.
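Spelled out as a checklist, the profile is almost mechanical, which is exactly why it is so easy to evade once leaked. A toy scoring sketch; the indicator names and weights are my own invention, not the USPS’s actual scoring:

```python
# Invented weights over the leaked indicators; the point is that every
# one of them is trivially avoidable with decent tradecraft.
INDICATORS = {
    "just_over_round_weight": 1,   # e.g. 1.05 kg: product weighed, then packed
    "known_source_location": 1,
    "person_to_person": 1,
    "reused_heavily_taped_box": 1,
    "fake_return_address": 2,      # the cheapest indicator to check
}

def suspicion_score(package_flags):
    return sum(weight for flag, weight in INDICATORS.items()
               if flag in package_flags)

pkg = {"just_over_round_weight", "person_to_person", "fake_return_address"}
print(suspicion_score(pkg))  # 4
```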
Main points to take away:
Don’t make shit up, do your research and steal an identity with a real address.
When creating a cover, make sure it is as fully fleshed out as possible. This means developing supporting evidence to bolster the validity of the cover. In intelligence lingo, this is called backstopping.
A backstopped cover is one where the details of the cover story are verifiable. For example, if the cover story includes a name, there are matching identity documents; if there is a phone number, it connects to someone who will substantiate the cover story; if there is an address, it exists. The old Soviet illegals used to spend years developing their covers and backstopping them. They’d live for a few years in a country they claimed to be immigrating from, so they would have the memories, experience and verifiable evidence that they were from there.
If you are going to use a cover (you probably should), then put in the effort to create a backstop. The complexity and depth of that backstop are dependent on how deeply the cover will be investigated. Remember though, it is better to have too much, than not enough…
Generally, it appears that Ross Ulbricht was applying his economic and techno-libertarian philosophy to real life. As his project grew, his security posture improved – too late. The most serious mistakes that Ross Ulbricht made were made during the period Jan 2011 - Oct 2011. A full timeline of the events in the Complaint is available on my tumblr.
NOTE: This is an abridged version of a longer post pulling out the lessons learned from the Silk Road Complaint of 27th September 2013. This post will only list the OPSEC errors, rather than explore them in detail.
The fundamental error is poor compartmentation. Ross Ulbricht, the real person and the online persona (Google+, LinkedIn, etc), and the Dread Pirate Roberts persona share ideological views and geographic locations. There is contamination between the two personas. Most of these seem to be due to the organic evolution of the Silk Road venture, where early naive Ulbricht makes mistakes that later smarter DPR wouldn’t. Unfortunately, the later DPR is more ideologically extreme and consequently less savvy about mainstream society.
The compartmentation failures are somewhat pervasive, in particular the ideological “Austrian School of Economics” and the mises.org site. However two particular contamination errors stand out:
The first of these failures happened because the altoid persona used to promote Silk Road was poorly fleshed out (e.g. no email address). Ross did not put the plumbing in place to backstop his altoid cover. He then joined the BitcoinTalk community using this contaminated cover. His participation and search for social validation left him with his guard down. Consequently, he revealed a great deal of profiling information about his project and beliefs. Many of his posts are about Silk Road infrastructure or his mises.org influenced economic theories. After participating for 10 months, he finally made the fatal OPSEC error of posting his personal email address.
The second error was poor compartmentation between his online Ross Ulbricht persona, the tech-savvy San Francisco based startup guy, and "frosty", the system admin of the server hosting the Silk Road site. His poor compartmentation, likely using the same computer for both personal and business use, and his limited backstopping of the DPR/altoid/frosty personas meant that any error would be fatal.
These two errors combine to link Silk Road with Ross Ulbricht, and Ross Ulbricht with Silk Road.
Ross Ulbricht, the person, was an active participant in the mises.org website and the BitcoinTalk forums. In both cases he was deeply committed to the "Austrian School of Economics", something the Dread Pirate Roberts was also a huge fan of. The altoid cover alias, linked directly to Ross Ulbricht, frequently talked about bitcoin security and PHP programming. He is, based on his posts, clearly involved in running some sort of PHP-based, bitcoin-using venture that requires high security. Sort of like the Silk Road site.
The Silk Road infrastructure was administered from a location just 500 ft from a location that accessed the Ross Ulbricht GMail account. The location of the Dread Pirate Roberts was something of an open secret. It is clear that he was based on the west coast of the US. Ulbricht was located in San Francisco at the same time as DPR, as proved by his large online footprint: Google+, YouTube, GMail.
After the altoid persona is retired from BitcoinTalk, Ulbricht migrates his social interaction to a more extreme community: the Silk Road forums. This appears to have been his “scene”, where he interacted with people and cultivated friends (including an impressive array of undercover law enforcement officials).
The underground life forced on Ulbricht as the Dread Pirate Roberts led to the major problem of isolation. Human beings are social animals. We require social interaction to maintain a healthy mental state. The strict security of DPR required isolation, leaving Ross Ulbricht living his social life on forums with niche ideological views, initially BitcoinTalk (in 2011) and then the Silk Road forums. Isolation from mainstream society is known to lead to ideological extremism as members of the niche community self-reinforce their ideological tendencies. Consequently, they are less able to understand mainstream society's ideas, beliefs and morals. This is dangerous. This isolation led him to rationalize that hiring online hitmen to preserve the Silk Road community was morally acceptable.
Apparently the only source of social validation and ego gratification that Ross had was a group of bitcoin libertarians, drug seekers, drug dealers and undercover cops. This is not a healthy social environment, nor one conducive to a balanced state of mental health.
So, the Dread Pirate Roberts Complaint basically tells us nothing that we didn't already know about OPSEC. There are some lessons learned which can be used to harden OPSEC practices going forward. The main things are still: strong compartmentation; use Tor all the time; avoid leaking profiling information; and regularly migrate to new cover personas.
This Vice article provides the source material for this blog post. Using some basic background knowledge of how covert groups operate, it is simple to parse and analyze the drug delivery service's tradecraft.
a friend of mine solicited hardcore drugs for a Manhattan drug kingpin, who was looking for a new pot delivery guy. My friend encouraged me to try out for the job.
As with many covert groups, the recruitment process relied on personal connections. This social network grounded approach to expanding a covert organisation is generally good for initial security. The recruits are unlikely to be agents sent to infiltrate the organisation, as the long-standing social ties between members and recruits both establish trust and serve as vetting.
Developing a covert organisation based on social network ties provides a means of rapid expansion and easy security clearance. The downside is that once a single member of the organisation is compromised, the adversarial security forces can easily roll up the whole network (as the sketch below illustrates). The poor compartmentation of a social network based covert organisation is its Achilles heel. The security of the organisation is critically dependent on the security of each individual member.
ProTip: Expand your covert network with individuals who are passionate about your ideological beliefs. Ensure strong compartmentation, starting with recruitment.
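To make the roll-up risk concrete, here is a minimal sketch in Python. The members, ties, and numbers are all hypothetical; the point is simply that in a flat social-tie network the adversary can identify everyone reachable from one compromised member, while a compartmented network bounds the damage.

```python
from collections import deque

def exposed(network, compromised):
    """Everyone the adversary can roll up from one compromised member,
    following 'can identify' ties transitively."""
    seen = {compromised}
    queue = deque([compromised])
    while queue:
        member = queue.popleft()
        for contact in network.get(member, ()):
            if contact not in seen:
                seen.add(contact)
                queue.append(contact)
    return seen

# Flat social-network recruitment: everyone can identify everyone
# they were recruited by or worked alongside.
flat = {
    "nathan":     ["dispatcher", "courier1", "courier2"],
    "dispatcher": ["nathan", "courier1", "courier2"],
    "courier1":   ["nathan", "dispatcher", "courier2"],
    "courier2":   ["nathan", "dispatcher", "courier1"],
}

# Compartmented: couriers only ever deal with an anonymous voice on a
# burner phone, so they can identify no one.
compartmented = {
    "nathan":     ["dispatcher"],
    "dispatcher": ["courier1", "courier2"],
    "courier1":   [],
    "courier2":   [],
}

print(exposed(flat, "courier1"))          # all four members rolled up
print(exposed(compartmented, "courier1")) # only courier1 is lost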
He asked me to provide documentation of my current address and phone number as an insurance policy. If I ran out on him, he warned me he’d hold my friends responsible for the deficit funds and/or drugs.
The principal of the organisation, "Nathan", requires that the recruit provide a verifiable address and means of contact, along with dire warnings of consequences in the case of infractions. These are very basic control principles, typical of covert organisations.
The major security problem with this approach, of course, is that the records maintained by the network’s principal are a high value target for the adversary. Compromise of the principal’s records will lead to total collapse of the network, and interdiction for every member involved. There is no chance of evasion.
ProTip: No logs, no crime. Do not keep records of the members of your covert organisation. These records are extremely sensitive.
the transaction and exit should be as swift as possible. “You aren’t here to hang out,” she said. “It’s not a social call, and they aren’t your friends. You want to walk in and be friendly and make conversation but also get to the business at hand and get out of there quickly.”
The illicit operation, the drug sale, is intended to be rapid, minimizing the period of vulnerability for both parties. Interestingly, this is possibly a poor choice if the threat is surveillance: there are few reasons a random individual would enter a domicile for only a short duration. Also of note, the covert organisation provides no reasonable cover story for why the agent (the drug courier) is entering the residence of the client. A simple "what were you doing?" question would likely blow the whole operation.
ProTip: Minimising the period of vulnerability improves the chances of operational success. Always make sure your agents are capable of delivering plausible cover stories: cover for action.
Nathan forced me to wear a button-up shirt and slacks, shave my face, and keep my hair conservatively short. He believed this uniform would attract little attention as I walked around with thousands of dollars worth of pot in a laptop case slung over my shoulder.
The covert organisation has, surprisingly enough, chosen to enforce a uniform that makes their agents blend in with the mainstream. This is completely in line with the typical operational disguises employed by covert organisations operating in controlled territory the world over. (See: Moscow Rules, "go with the flow"; Murphy's Laws of War, "don't stand out, it draws fire".)
ProTip: They got this one exactly right.
Although I used my flip-phone constantly at work, I was never given clients’ addresses over the phone. Clients calls would go to a dispatcher—a third party who took the call, traced the number through a database of numbers, and then returned the call from a different phone to confirm their request for drugs. After their request was confirmed, I received a call from another phone. The dispatcher only told me, “You got Nick,” or “You got Lucy.” I was banned from responding with anything besides a murmured “OK.”
Each operational use of the phone provides the adversary with minimal value: a unique identifier for the client (e.g. "Lucy"), and the agent's acknowledgement of the directive ("OK"). The dispatcher's interaction with the client is itself run over multiple phone lines and kept to short, simple, normal statements.
ProTip: This is very much in line with all covert organisations' guidelines for using phones. Never use keywords, keep the content as vague as possible, minimize the period of vulnerability – get off the phone!
Each day I was given a stipend of $40 for cabs. No one knew if I didn’t spend the $40. Instead of taking cabs, I ran around in a frantic state that negated every other measure I took to not draw unwanted attention
This is an instance of preference divergence, a common problem for covert organisations. The financial resources provided to the agent by the principal are siphoned off and directed towards non-operational uses (the drug courier skims and pockets his cab stipend). There doesn't appear to be any consequence to this operational security failure; however, it jeopardizes the entire organisation. If "Nathan" were a more disciplined principal, he would monitor his agents more closely and ensure they conform to the organisational security requirements. Strangely, drug dealers are not strict disciplinarians.
ProTip: If the security of the entire organisation is dependent on the security of each individual agent – enforce the operational security requirements strictly!
I shook his hand and said, “I’m Jack.” He gave me a knowing grin. “So that’s the name you’re using?” he asked.
The agent is using an alias to provide pseudonymity from malicious clients. This provides some minimal level of security. It is definitely better than not having any cover at all. However, as noted above, it should be combined with a robust cover story for why the agent is visiting a residential home for a brief period.
After a promotion, the drug courier decided to find a new line of work. If the organisation had been stricter in its OPSEC practices, the departure of an agent wouldn't place anyone else in jeopardy. As it stands, it seems clear that the agent, who is now drawing attention to himself by writing about his experience in a national magazine(!), still retains sufficiently sensitive information to unravel the network.
ProTip: compartment early, compartment often. It is safer than any alternative.
Compartment your covert organisation from recruitment through to operational action so that when your agents leave or are compromised they are unable to compromise the organisation. Ensure that your operational activities have good cover for status (e.g. a disguise) and cover for action (e.g. a strong cover story). Strong compartmentation, strong cover, and be aware of the risks of using social networks for building a covert organisation.
The Personal Onion Router To Assure Liberty (PORTAL) is designed to protect the user by isolating their computer behind a router that forces all traffic over the Tor network.
The goal of the PORTAL project is to create a compartmented network segment that can only send data to the Tor network. To accomplish this, the PORTAL device itself is physically isolated and locked down to prevent malicious tampering originating from the protected network. So if the user's computer is compromised by malware, the malware is unable to modify the Tor software or configuration, nor can it directly access the Internet (completely preventing IP address leakage). Additionally, the PORTAL is configured to fail closed: if the connection to Tor drops, the user loses their Internet access. Finally, the PORTAL is "idiot proof": simply turn it on and it works.
The initial requirement was to develop PORTAL for a small personal sized router, such as the TP-Link 703N, 3040, or M1U. All of these devices are small, portable and support the OpenWRT open source router firmware. Unfortunately, it turns out that "small" and "portable" are synonymous with "weak" and "underpowered".
Unfortunately, Tor is quite resource intensive for an embedded device. Tor uses 16MB of RAM, and for complete functionality (requiring the GeoIP database) it occupies slightly over 1.2MB of squashfs space. The stock TP-LINK routers have only 4MB of flash and 16MB of RAM (later models have increased RAM). This caused a lot of problems when building early versions. A bare bones OpenWRT system stripped down to just support an Internet uplink USB device occupies 3.2MB of squashfs space. Using the power of math we see: 3.2 + 1.2 > 4.0. Fuck.
Fortunately, the TP-LINK routers are not just small, they are also extremely hackable. They are very popular with hackers who have modified the hardware and expanded the capabilities of the stock device. I got in contact with a Chinese hacker who has upgraded the TP-LINK 703N to 16MB of flash and 64MB of RAM. Sweet. Using these modified routers development of the PORTAL became much much easier.
The PORTAL requires a minimum of two network interfaces: one for the Internet uplink, and one for the isolated network segment. In order to protect the PORTAL from tampering by malware (or malicious users), it also requires a third administration interface. This can be either a serial console or a physical connection. The reason not to use WiFi for the administration network is that it would expose the administration interface to anyone within WiFi range, including, potentially, the user's compromised laptop's WiFi card.
The requirement to protect the PORTAL from a malicious user caused some problems since the device hardware has very limited interfaces. The TP-LINK 703N has only:
* 1 x USB 2.0
* 1 x 100MB ethernet
* 1 x onboard wifi
All available interfaces are required to get us to the three networks we need (a configuration sketch follows the list):
* Tor: isolated proxy interface
  * Tor SOCKS proxy
  * Tor Transparent TCP proxy
  * Tor Transparent DNS proxy
  * DHCP (optional)
* Admin: configuration management interface
  * ssh
  * https (optional)
  * DHCP (optional)
* Internet: uplink connection interface
  * No services
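For illustration only, here is roughly what the Tor side of that layout can look like. This is not the actual PORTAL configuration: the interface address is a placeholder, and the ports are the conventional values from Tor's transparent proxy documentation. The sketch is a small Python generator for the torrc.

```python
# Placeholder address for the isolated Tor-facing interface.
TOR_IF_ADDR = "192.168.2.1"

# Bind the three services from the list above only to the isolated
# segment, so nothing is exposed on the Admin or Internet interfaces.
torrc = f"""SocksPort {TOR_IF_ADDR}:9050
TransPort {TOR_IF_ADDR}:9040
DNSPort {TOR_IF_ADDR}:9053
VirtualAddrNetworkIPv4 10.192.0.0/10
AutomapHostsOnResolve 1
"""

with open("/etc/tor/torrc", "w") as config:
    config.write(torrc)
```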
After the user has configured the Internet interface, and made whatever other adjustments they wish, they shouldn't need to connect to the Admin interface again. This leaves us with a very hard target for any attacker who wishes to unmask us (modulo any issues with Tor itself).
The PORTAL has been hardened to make it significantly more difficult for the user to make a mistake, or for an attacker to subvert the Tor protections. From the Tor network the only exposed ports are Tor's DNS proxy, TCP proxy, and SOCKS. Optionally, you can use DHCP on this network.
If, somehow, the firewall doesn't work properly, you're still safe because the PORTAL doesn't actually route packets. The only way you can reach the Internet (regardless of which interface you're connected to) is via Tor. This stops stupid mistakes, such as connecting to the Admin interface and forgetting to swap to the Tor network. Don't worry, you can't do that, it won't work, you're welcome.
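As a rough sketch of the fail-closed idea (not the actual PORTAL ruleset; the interface name is a placeholder and the ports match the torrc sketch above): redirect the protected segment into Tor's proxies and never forward packets, so that if Tor is down there is simply no path to the Internet.

```python
import subprocess

TOR_IF = "eth0"      # placeholder: the isolated (Tor) interface
TRANS_PORT = "9040"  # Tor TransPort
DNS_PORT = "9053"    # Tor DNSPort

rules = [
    # All DNS from the protected segment goes into Tor's DNS proxy.
    f"iptables -t nat -A PREROUTING -i {TOR_IF} -p udp --dport 53 "
    f"-j REDIRECT --to-ports {DNS_PORT}",
    # All new TCP connections go into Tor's transparent proxy.
    f"iptables -t nat -A PREROUTING -i {TOR_IF} -p tcp --syn "
    f"-j REDIRECT --to-ports {TRANS_PORT}",
    # Fail closed: no packet forwarding between interfaces, ever.
    "iptables -P FORWARD DROP",
    "sysctl -w net.ipv4.ip_forward=0",
]

for rule in rules:
    subprocess.run(rule.split(), check=True)
```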
Final hardening is left up to the user, who will have to assign the Admin and Tor networks to physical interfaces. There are security trade-offs either way.

Medium Security: Tor = WiFi, Admin = Ethernet
Maximum Security: Tor = Ethernet, Admin = WiFi

The PORTAL project has been migrated to the RaspberryPi, which has more power to support Tor. It requires more configuration, which is something I'll work on; however, the ease of acquisition of the RPi makes this the current platform of choice. So go install PORTAL of Pi and compartment all of your sensitive operational activities inside an isolated Tor network.
The responses all wandered slightly off topic from what my post was about. The point was that simply installing and running off-the-shelf counter-surveillance software is not sufficient against a nation state level adversary. Saying "Install Tor" or "Install I2P" is not the correct way to develop a counterintelligence program. It is not even the correct place to start. Those tools may be components of a CI program, but they are not sufficient in and of themselves.
To expand on what I was getting at in the post: the core issue is that when Tor, I2P, and other countersurveillance solutions are developed, they are developed with certain assumptions about the capabilities of the adversary. For example, Tor does not work against an adversary who has total information awareness about the traffic on the Internet. The assumption for Tor is "the adversary can monitor a subset of all IP traffic", where that subset usually equals "a single country". Because we, the public, do not know the real capabilities of the adversary, those assumptions might be (and in some cases likely are) completely incorrect. In this example, it is widely suspected that the US has the capability to monitor a significant portion of global IP traffic, not just that of a single country. At a minimum we can assume that they are able to get traffic logs from Five Eyes members, and most likely from all of NATO.
My article makes the claim that these off-the-shelf countersurveillance networks are insufficiently secure against nation state level adversaries. I also claim that we don't know the capabilities of those adversaries, and therefore cannot know what technology would evade their surveillance. I stand by both claims.
My point regarding the cost of doubling the count of Tor exit nodes is simply that the financial cost of compromising the Tor network is not even a rounding error in a nation state budget. It is the equivalent of a portion of the change found in the couch. Furthermore, Tor is not new. It isn't as if nation state level adversaries just woke up last week: "holy shit, this Tor thing! we better get on that!". It is conceivable that a nation state has been setting up cover organisations, using agents, and compromising existing hosts for years with the sole goal of subverting the security of the Tor system. We have no way of knowing this because we have limited or no knowledge of their capabilities. Which was exactly my point.
To address the specific objections about "all smart Tor users know to encrypt traffic to combat malicious exit nodes": yes, malicious snooping nodes can be evaded, provided you are using encryption to another termination point. This is why I've recommended using a VPN over Tor to mitigate the monitoring done by evil exit nodes. However, an additional problem with a malicious exit node is simple traffic analysis, where the content of the data is irrelevant but unmasking the end user is still possible. There are cases where unmasking an end user is sufficient, if they are going to "www.how-do-I-wage-jihad-in-the-usa.com.ir", for example. If we take the case of a nation state level adversary who can monitor all IP traffic within their country, and combine that with the same adversary operating (or monitoring) a significant percentage of exit nodes, then that adversary can trivially unmask Tor users. The cost of this operation would be well within the budget of any respectable intelligence agency.
Regarding the risk of backlash if it becomes known that a nation state has compromised all (or many) ISPs: firstly, we can all agree that the compromise of an ISP is well within the capabilities of an intelligence agency. If you have been around the underground long enough, you know how many different people and groups have compromised Tier 1 ISPs. As for the "backlash", a nation state adversary will classify everything that could leak their tools, techniques and procedures. The means by which they collect information is usually as classified as, or even more classified than, the information they collect. It is not likely that they would ever willingly allow this information to become known. Frequently intelligence agencies will classify information simply because revealing that they know it would reveal their collection capability, and thus compromise their ability to exploit that capability in the future.
Which is what brings me back to the point I was getting at in the post. If you are engaged in activities which will put you up against a nation state level adversary, you have no knowledge of what their capabilities are. Fortunately for just about everyone (reading this), you do not have a nation state level adversary. A law enforcement agency, such as the FBI, will have access to some nation state level capabilities in certain circumstances. For example, if it was known that a trained al Quaida cell was operating in the continental US and using Tor for their communications platform, the NSA would very likely use whatever Tor unmasking capability they have to assist the FBI. They would do this in a blackbox fashion: get a request -> send a response. They would not reveal how they performed the unmasking because the FBI would not have people who are cleared for that information. (This is compartmentation in action.)
As a thought experiment, imagine that Osama bin Laden was still alive and that he used the Tor network to do a Reddit AMA once a month. How long do you imagine it would take for the US to find and neutralize him? I posted this question on Twitter and, while responses varied, ex-NSA Global Network Exploitation Analyst Charlie Miller guessed one to two months. I would be very surprised if it took more than three. This is because OBL had a nation state level adversary. You (probably) do not.
There is good news, of course. Nation state level adversaries are concerned about nation state actors (and some non-nation state actors). They really don’t have the resources to spend monitoring law enforcement issues. Unless you are a policy maker, a ranking military official, an intelligence officer/agent, a member of a known terrorist organisation, or have somehow otherwise ended up on a targeting list, the Intelligence Community (IC) really doesn’t give a fuck about you. The product they produce for their clients - security cleared government officials - is documentation and analysis that helps these officials make informed policy decisions (or at least, that is the intention).
Now, as I advocate elsewhere, it is best to start your counterintelligence program early, because after you are targeted it is (usually) too late.
My central recommendation on how to operate safely, whether you are a hacker, a spy, a whistleblower, or whatever, is to implement compartmentation first. Classify the data which is sensitive (e.g. your real identity and anything linked to your real identity) and segregate it from everything related to your illicit activity, preferably by physically separating it onto different machines. When conducting the illicit activity, use your illicit activity equipment, and do it over an internet link that cannot be linked to you. By all means, use Tor, or I2P, or a VPN, or whatever. But that technology must not be your primary and only line of defence.
This is how you do good CI: develop an SOP that will protect your sensitive data even when things fail. That said, most of what will sink people is poor OPSEC, not poor SIGSEC. The more people who know about your illicit activity, the higher the chance that Murphy will raise his head and it'll all end in tears.
So, to reiterate, choosing a technology first and then relying on it for security is completely ass backwards. To do things properly, operate in this order: figure out what you are trying to protect (and from whom), separate it from everything else, and then select the tools, techniques and procedures that will enable you to protect it.
Seven, this rule is so underrated
Keep your family and business completely separated
Guerrillas, terrorists, narcos and spooks the world over have learned the hard way how to keep their illicit activity safe from their opponents. The same principles of counterintelligence (CI) that help protect them from death can be applied to protect you from your adversary. If you engage in behavior that carries the risk of negative consequences from an adversary, you will need to develop and implement a robust CI program. This post will explain the foundations of strong OPSEC, a critical part of just such a program.
The cornerstone of any solid counterintelligence program is compartmentation. Compartmentation is the separation of information, including people and activities, into discrete cells. These cells must have no interaction with, access to, or knowledge of each other. Enforcing ignorance between different cells prevents any one compartment from containing too much sensitive information. If any single cell is compromised, such as by an informant, the damage will be limited to the boundaries of that cell.
Now, compartmenting an entire organisation is a difficult feat, and can seriously impede the ability of the organisation to learn and adapt to changing circumstances. However, these are not concerns that we need to address for an individual who is compartmenting their personal life from their illicit activity.
Spooks, such as CIA case officers or KGB illegals, compartment their illicit activity (spying) from their "regular" lives. The first part of this is, of course, keeping their mouths shut about their illicit activities! There are many other important parts of tradecraft which are beyond the scope of this post. But remember, when you are compartmenting your life, the first rule is to never discuss your illicit activities with anyone outside of that compartment.
This post will cover a basic set of guidelines for compartmenting a particular online activity. In our hypothetical scenario there are two people, Alice and Bob (natch), who want to exchange information with each other. They are deathly afraid that the adversary will learn (in ascending order of risk to Alice):
While this guideline is a starting point for someone who seeks to conduct illicit activity under hostile internet surveillance, it is not a concrete set of rules. When developing a CI program you must evaluate the threats and risks to yourself and create a custom set of tools and procedures that addresses your needs. The specific SOP that you develop will differ from the outline below, but if it is to be resilient against the adversary it must be based on some form of compartmentation.
Alice must purchase new dedicated equipment used exclusively for communicating with Bob. This means: buy a new laptop. Don't bother with a new virtual machine; that isn't sufficiently compartmented. Any existing equipment that Alice owns might already be compromised, and is therefore not safe against potential monitoring.
The software installed should be the bare minimum of generic utilities required to do the communications. Here is an example setup:
This is the base platform that Alice will use when contacting Bob. Obviously, Bob should go through the same process (if he faces similar risks, or is concerned about Alice’s wellbeing).
The usernames and hostnames used should be generic: not associated with Alice's real name, location, place of work, etc. If the VM is compromised, there will be no identifying information, nor keys that can be used to decrypt previous comms. If the VM is escaped and the adversary gains access to the host, again, there will be no identifying information. The host machine has only the virtualization software on it. Use full disk encryption on the host machine (and probably on the VM), use different passwords for the two, and keep the machine fully powered off when not in immediate active use.
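On the full disk encryption point: the host's FDE is normally set up by the OS installer, but for any additional data disk the by-hand equivalent looks roughly like this (assuming LUKS via cryptsetup; the device name is a placeholder):

```python
import subprocess

DEVICE = "/dev/sdX"  # placeholder -- triple-check the device name first!

# Encrypt the disk with LUKS (cryptsetup prompts for a passphrase),
# open it under a mapper name, then create a filesystem on the mapping.
subprocess.run(["cryptsetup", "luksFormat", DEVICE], check=True)
subprocess.run(["cryptsetup", "open", DEVICE, "cryptdata"], check=True)
subprocess.run(["mkfs.ext4", "/dev/mapper/cryptdata"], check=True)
```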
Number 5: never sell no crack where you rest at
I don’t care if they want a ounce, tell ‘em “bounce!”
Alice must ensure that every single time she contacts Bob, or checks for contact from Bob, she is in a location which is not linked to her. Additionally, she must use an internet connection which is not linked to her, for example a public WiFi or a prepaid 3G card.
When Alice goes to contact Bob, she must ensure that she does not carry any device which will transmit her physical location. For example, her mobile phone(s). Leave it at home.
After Alice has used her dedicated machine to communicate with Bob, she should revert the VM to the pristine snapshot taken right after installation. This limits the ability of the adversary to persist after a compromise (provided they didn't escape the VM).
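Assuming VirtualBox, with hypothetical VM and snapshot names, the revert-to-pristine step is a two-liner Alice can run after every session:

```python
import subprocess

VM = "comms-vm"        # hypothetical name of the dedicated guest
SNAPSHOT = "pristine"  # snapshot taken immediately after installation

# Hard-stop the guest, then discard everything that happened during the
# session by restoring the post-install snapshot.
subprocess.run(["VBoxManage", "controlvm", VM, "poweroff"], check=True)
subprocess.run(["VBoxManage", "snapshot", VM, "restore", SNAPSHOT], check=True)
```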
The converse-with-Bob machine must be used with new accounts created specifically for, and used exclusively to, converse with Bob. These accounts must be created from the new machine, and never used for anything except Bob-related activity. Alice must create new accounts that have no links to her real identity. For email, one option is a TorMail account. For instant messaging there is either Cryptocat over Tor, or a new Jabber account, such as one on jabber.ccc.de.
The core concept to take away here is: a separate identity, with its own equipment and accounts, used only for one activity. The essence of compartmentation is separation without contamination. My strong recommendation is to use a virgin machine, with virgin accounts, to contact the target. This machine is used exclusively for this one activity: it is compartmented. Associating the activity of that online entity with a specific individual, even with full and complete global internet monitoring (and 0day attacks), should be difficult. [NOTE: don't count on this if you happen to be the new al Quaida #3].
Back in the day we used to have AOL for internet access. If you've never suffered AOL, then you probably don't know that it would disconnect you if the service didn't detect any traffic for some period of time. It popped up an alert that said something like: "no activity detected for 30 minutes. If there is no activity in the next 10 minutes, you will be disconnected." When this dialog popped up, my father would try to stay connected by moving the mouse around a bit. Obviously, this was completely ineffective.
The problem was that his understanding of the system was completely wrong. His mental model of how the whole thing worked was so flawed that he was unable to identify the steps he had to take to actually solve his problem.
When I read articles and blog posts on “how to avoid surveillance”, or “how to stay anonymous online”, I am reminded of my father waving his mouse around to appease the dialog box, never understanding how completely wrong he was.
The publicly available tools for making yourself anonymous and free from surveillance are woefully ineffective when faced with a nation state adversary. We don't even know how flawed our mental model is, let alone what our counter-surveillance actions actually achieve. As an example, the Tor network has only 3000 nodes, of which 1000 are exit nodes. Over a 24hr time period a connection will use approximately 10% of those exit nodes (under the default settings). If I were a gambling man, I'd wager money that there are at least 100 malicious Tor exit nodes doing passive monitoring. A nation state could double the number of Tor exit nodes for less than the cost of a smart bomb. A nation state can compromise enough ISPs to have monitoring capability over the majority of Tor entry and exit nodes.
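That wager is easy to put numbers to. A back-of-envelope sketch using the figures above (the 100 malicious exits is my wager, not a measurement, so treat the output as illustrative):

```python
# Numbers from the paragraph above: 1000 exit nodes, an assumed 100 of
# them malicious, and a client touching ~10% (about 100 distinct exits)
# over 24 hours.
exits = 1000
malicious = 100
used = 100

# Probability that none of the exits used was malicious, sampling
# without replacement.
p_clean = 1.0
for i in range(used):
    p_clean *= (exits - malicious - i) / (exits - i)

print(f"P(at least one malicious exit in 24h) = {1 - p_clean:.5f}")
# With these assumptions: > 0.9999 -- near certainty.
```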
Other solutions are just as fragile, if not more so.
Basically, all I am trying to say is that the surveillance capability of the adversary (if you pick a nation state as an adversary) exceeds the evasion capability of the existing public tools. And we don't even know what we should be doing to evade their surveillance.
Practicing effective counterintelligence on the internet is an extremely difficult process and requires planning, evaluating options, capital investment in hardware, and a clear goal in mind. If you just want to "stay anonymous from the NSA", or whomever… good luck with that. My advice? Pick different adversaries.