Hacker OPSEC

STFU is the best policy.

Morris Worm OPSEC Lessons

25th Anniversary of STFU about your computer crimes

Reading this interview with the prosecutor of Robert Morris Jr. about the Morris Worm, we can pick up a few cool OPSEC lessons.

How was Morris caught?

One way was with computer forensics: tracing back the source of the worm. The second way was that one of Morris’s friends, talking to The New York Times in response to some articles that John Markoff was writing, inadvertently gave his initials.

There were a couple of ways that he was discovered. The first was the forensic analysis of the worm itself, and tracing that back to the original infection point. This sort of evidence shows where to look (the original infection), but it does not provide enough information to successfully prosecute. It is circumstantial so far, and given some careful sanitisation of the original box, it would be a very hard case to prove.

The far more damaging way that Morris was caught was via an OSINT case officer doing HUMINT collection (a reporter interviewing people about the worm). The journo managed to elicit information about the worm’s author (his initials). This is the sort of extremely damaging information leakage that happens when there is poor OPSEC. There was no anti-interrogation training provided to the members of the Morris cell (i.e. all his friends who knew about the development of the worm).

Deny everything. Admit nothing. Or, you know, not.

he did testify that he wrote the worm. He came in and testified, “I did it, and I’m sorry.” I turned to my co-counsel and asked, “Should I prove he didn’t do it or he’s not sorry?”

When the prosecution has to prove that you committed a felonious act, it is a lot easier for them when you confess on the stand. I can’t second guess the decisions of Morris’ legal counsel, but unless you are instructed to do so by your lawyer: STFU.

The Morris Cell and “Need to know”

We talked to his friends. His friends were witnesses for us. They didn’t have a choice. There was a core group. …one of the meetings where Robert Morris was discussing the worm occurred at a Legal Seafood in Kendall Square… He talked about how it was developed, how it worked, what vulnerabilities it exploited. At one point he was at a meeting back at Harvard, he got so excited that he literally jumped up on a table pacing back and forth on the table explaining how it worked…

The close friends of Robert Morris, the Morris Cell, were fully briefed on all aspects of the worm: its capabilities, its functionality, and its author’s real identity. Yet none of the other members of the cell were actively exposed to the risks of the operation. They had no “need to know”.

This failure to STFU, to properly compartment the design and development of the worm, was a key factor leading to his capture and prosecution. Fortunately, things worked out well for him, in the long run.

How to evaluate “Need to know”

The rule of thumb is: if someone is actively sharing the risk, they have a need to know. This need to know is, of course, restricted to only those aspects of the operation in which they are actively involved.

OPSEC Isn’t Security Through Obscurity

OPSEC revisited

The goal of OPSEC is to control information about your capabilities and intentions to keep them from being exploited by your adversary.

In typical hacker fashion, the term OPSEC has come to mean more than just information about capabilities and intentions; it also covers personal information about yourself.

Kerckhoffs’ Principle and OPSEC

A common source for the idea that “security through obscurity is bad” is Kerckhoffs’ principle, which states that: “A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.” OPSEC as a system of security is sometimes confused with “security through obscurity”. This is not the case. Such thinking confuses both the problem with opaque security systems and the foundations of OPSEC.

OPSEC is a System

The way to clear this confusion, I believe, is to point out that OPSEC is a security system, not any one specific practice. The system itself is open source, in that we know how and why the various techniques and practices work. For example, the tradecraft technique of a dead drop is public knowledge. The security of a dead drop rests not on no one knowing how dead drops work, but rather on the adversary not knowing where a specific dead drop is located, nor when that dead drop is being serviced (loaded or unloaded). That information, primarily the location of the dead drop, is the secret key to the dead drop security system. This information is what must remain secret for the dead drop to remain secure.

Why OPSEC Works

So OPSEC as a system of security does not violate Kerckhoffs’ principle, and is not “security through obscurity”. The specifics of any one application of OPSEC techniques provide security, but those are analogous to the private key to the system. If they are compromised, then the security they provide will be compromised.

Observations on OPSEC

Briefly, I would like to highlight some important considerations for good OPSEC. Firstly, OPSEC is a mode of operating, not a tool or a collection of tools. Secondly, OPSEC comes at a cost, and a significant part of that cost is efficiency. OPSEC is slow. Finally, maintaining a strong security posture (i.e. “good OPSEC”) for long periods of time is very stressful, even for professionally trained espionage officers.

Learning good OPSEC requires internalizing the behavioural changes required to continually maintain a strong security posture. The operational activities have to become habit, because the small things matter, and every careless mistake can compromise security. The only way to develop good OPSEC habits, good security hygiene, is to practice. Make the foolish beginner’s mistakes during a practice session, rather than in the field. Two relevant sayings:

  • Amateurs practice until they get it right, professionals practice until they can’t get it wrong
  • The more you sweat in peace, the less you bleed in war

After developing good security hygiene habits, the second most difficult thing about good OPSEC is learning patience. Increased OPSEC security comes at the cost of efficiency, primarily in communication time-frames. The OPSEC mechanisms that must be in place to reduce the risks during communication add latency. As a result, communication takes significantly longer and is less reliable. Obviously, this is more of an issue with time sensitive operations than those that have more generous deadlines.

The single greatest security risk is communication between operatives. Clandestine agencies, such as the CIA, MI6, DGSE, etc. will work incredibly hard to minimize the risks surrounding communication with their recruited agents. In the simplest form, this involves a 2-4 hour “surveillance detection route” (SDR) to see if they are “in the black” before they perform any operational activity. This is on top of the hours of planning for the operation itself (note: these are minimums, operations requiring high security might take weeks or months of planning, and 12 hour SDRs).

The technology that exists to facilitate information security, e.g. encryption, is important, but it is not sufficient, or even the starting point, for robust OPSEC. By all means, learn to use encryption software correctly and in a properly secure fashion. However, it is more important to compartment sensitive activities and structure your operational environment for impact containment than to install and use any particular software.

Silk Road Security

Counterintelligence Lessons for Drug Dealers

NOTE Events have overtaken my slow writing speed. This post was in the works before the Silk Road bust in September 2013. I’m uploading it anyway because it has some useful information; however, there seems little point in finishing it now.

The dealers on Silk Road ship a large amount of illegal products around the world, and it is clear that they’re successful at it. However, the US Postal Service has been aware that drug dealers use its service for shipping illegal substances and has developed guidelines for efficiently flagging suspect packages. Unfortunately for the USPS, those guidelines have leaked, and this allows someone abusing the US mail as an illicit distribution channel to evade the USPS’s checks.

Suspicious Post Guidelines

The actual guidelines for suspicious packages list a number of major indicators that the inspectors look for. This guide is somewhat outdated, and a revised version has also been leaked. In both cases, the triggers and the reasoning behind them are similar.

FBI Profiling Criteria

  1. Heavy taping along the seams;
  2. poor preparation for mailing;
  3. uneven weight distribution;
  4. apparent package reuse; and
  5. labels that are handwritten, contain misspellings, originate from a drug-source State, indicate person-to-person rather than business-to-individual mail, have a return zip code that does not match the accepting post office’s zip code, show a fictitious return address, or use sender and recipient names with features in common (e.g. John Smith) and no connection to either address

Drug Mail Profile

  1. the use of Express Mail,
    • Express Mail is primarily used by businesses for document delivery.
  2. the weight of the package,
    • drug traffickers are mailing approximately one kilo of cocaine per package, plus some dummy weights
  3. the package is sent from Puerto Rico, a known drug source location,
  4. the package was mailed from a post office address outside of the zip code on the return address,
  5. an Accurint check reveals that no one by the sender’s name lived at the return address,
    • Police will also check Google and Facebook to get more information
  6. the package is heavily taped at all seams
    • heavy taping may or may not help evade detection by drug sniffing dogs–but we know one thing for sure–it certainly helps draw police attention to the package!
  7. the label is handwritten
    • Since most Express Mail is business-to-business, or business-to-client, labels that aren’t typed are suspect

Collated

Anything that looks like someone is sending slightly over an even metric weight of something, from a known suspect location, to another person, in an old heavily taped package with a fake return address. Sounds like bad tradecraft.

  1. Suspicious packaging
    • Heavy taping
    • Package reuse
  2. Not business mail
    • No printed label
    • Clearly not documents
    • Individual to individual
  3. Known suspicious origin
    • Occasionally specific post offices
    • Specific countries (Puerto Rico)
  4. Flimsy “return address” cover
    • Fake name
    • Mismatched name + address
    • Mismatched address + zipcode

Main points to take away:

  • drug packages appear different from normal mail
  • many factors contribute to creating a plausible cover/alias.
  • The packaging of a drug shipment provides a cover, which needs to be backstopped.

Don’t make shit up, do your research and steal an identity with a real address.

Backstop your cover

When creating a cover, make sure it is as fully fleshed out as possible. This means developing supporting evidence to bolster the validity of the cover. In intelligence lingo, this is called backstopping.

A backstopped cover is one where checks to verify the authenticity of the cover story succeed. For example, if the cover story includes a name, there are matching identity documents; if there is a phone number, it connects to someone who will substantiate the cover story; if there is an address, it exists. The old Soviet illegals used to spend years developing their covers and backstopping them. They’d live for a few years in the country they claimed to be emigrating from, so they would have the memories, experience and verifiable evidence that they were from there.

If you are going to use a cover (you probably should), then put in the effort to create a backstop. The complexity and depth of that backstop are dependent on how deeply the cover will be investigated. Remember though, it is better to have too much than not enough…

It Was DPR, in the Tor HS, With the BTC

Give it to me straight, dr the grugq

Generally, it appears that Ross Ulbricht was applying his economic and techno-libertarian philosophy to real life. As his project grew, his security posture improved – too late. The most serious mistakes that Ross Ulbricht made were made during the period Jan 2011 - Oct 2011. A full timeline of the events in the Complaint is available on my tumblr.

NOTE: This is an abridged version of a longer post pulling out the lessons learned from the Silk Road Complaint of 27th September 2013. This post will only list the OPSEC errors, rather than explore them in detail.

The OPSEC Failures

The fundamental error is poor compartmentation. Ross Ulbricht, the real person and the online persona (Google+, LinkedIn, etc), and the Dread Pirate Roberts persona share ideological views and geographic locations. There is contamination between the two personas. Most of these mistakes seem to be due to the organic evolution of the Silk Road venture, where the early, naive Ulbricht makes mistakes that the later, smarter DPR wouldn’t. Unfortunately, the later DPR is more ideologically extreme and consequently less savvy about mainstream society.

  1. Poor Compartmentation
  2. Profiling
  3. Geographic Location
  4. Isolation

Poor Compartmentation

  • Contamination: seriously fatal links created between personas
    • Silk Road + altoid: Shroomery, BitcoinTalk forums
    • altoid + rossulbricht@gmail.com: BitcoinTalk
    • Ross Ulbricht + frosty@frosty[.com]: StackOverflow
    • frosty@frosty + Silk Road: Silk Road server admin SSH key

The compartmentation failures are somewhat pervasive, in particular the ideological “Austrian School of Economics” and the mises.org site. However, two particular contamination errors stand out:

  1. Silk Road –> altoid –> rossulbricht@gmail.com link in 2011
  2. Ross Ulbricht –> frosty@frosty.com –> Silk Road server link in 2013

The first of these failures happened because the altoid persona used to promote Silk Road was poorly fleshed out (e.g. no email address). Ross did not put the plumbing in place to backstop his altoid cover. He then joined the BitcoinTalk community using this contaminated cover. His participation and search for social validation left him with his guard down. Consequently, he revealed a great deal of profiling information about his project and beliefs. Many of his posts are about Silk Road infrastructure or his mises.org-influenced economic theories. After participating for 10 months he finally made the fatal OPSEC error of posting his personal email address.

The second error was poor compartmentation of his online Ross Ulbricht persona, the tech savvy San Francisco based startup guy, and “frosty” the system admin of the server hosting the Silk Road site. His poor compartmentation, likely using the same computer for both personal and business use, and his limited backstopping of the DPR/altoid/frosty persona meant that any error would be fatal.

These two errors combine to link Silk Road with Ross Ulbricht, and Ross Ulbricht with Silk Road.

“I’ll take Profiles for $300, Alex” : “Too much in common” : “What do Ulbricht and DPR share?”

  • Profiling: Ross Ulbricht talks and acts like Dread Pirate Roberts
    • LinkedIn profile
    • Timezone leakage: private messages, forum posting times
    • BitcoinTalk altoid posts about: economics (mises.org), security, programming
    • Silk Road Forum Dread Pirate Roberts -> Mises + “Austrian School of Economics”
    • Mises.org Ross Ulbricht account

Ross Ulbricht, the person, was an active participant in the mises.org website and the BitcoinTalk forums. In both cases he was deeply committed to the “Austrian School of Economics”, something the Dread Pirate Roberts was also a huge fan of. The altoid cover alias, linked directly to Ross Ulbricht, frequently talked about bitcoin security and PHP programming. He is, based on his posts, clearly involved in running some sort of PHP-based, bitcoin-using venture that requires high security. Sort of like the Silk Road site.

  • Geographic Location
    • Silk Road web server administered over VPN from a server
    • VPN server IP stored in the Silk Road PHP source code
    • VPN server accessed from a location 500 ft (~150 m) from a location that accessed the Ross Ulbricht GMail account.

The location of the Dread Pirate Roberts was something of an open secret. It is clear that he was based on the west coast of the US. Ulbricht was located in San Francisco at the same time as DPR, as proved by his large online footprint: Google+, YouTube, GMail.

Isolation is bad, mmmkay

  • Isolation without relief
    • Rented room under assumed name
    • No “mainstream” social circle to realign with social mores
    • No peers to talk to, only Silk Road forum members and admins

After the altoid persona is retired from BitcoinTalk, Ulbricht migrates his social interaction to a more extreme community: the Silk Road forums. This appears to have been his “scene”, where he interacted with people and cultivated friends (including an impressive array of undercover law enforcement officials).

The underground life forced on Ulbricht as the Dread Pirate Roberts led to the major problem of isolation. Human beings are social animals. We require social interaction to maintain a healthy mental state. The strict security of DPR required isolation, leaving Ross Ulbricht living his social life on forums with niche ideological views, initially BitcoinTalk (in 2011) and then the Silk Road forums. Isolation from mainstream society is known to lead to ideological extremism as members of the niche community self-reinforce their ideological tendencies. Consequently, they are less able to understand mainstream society’s ideas, beliefs and morals. This is dangerous. This isolation led him to rationalize that hiring online hitmen to preserve the Silk Road community was morally acceptable.

Apparently the only source of social validation and ego gratification that Ross had was a group of bitcoin libertarians, drug seekers, drug dealers and undercover cops. This is not a healthy social environment conducive to a balanced state of mental health.

What have we learned?

So, the Dread Pirate Roberts Complaint basically tells us nothing that we didn’t already know about OPSEC. There are some lessons learned which can be used to harden OPSEC practices going forward. The main things are still: strong compartmentation; use Tor all the time; avoid leaking profiling information; and regularly migrate to new cover personas.

Drug Delivery Service OPSEC

Some interesting lessons on how a modern New York City drug delivery service uses basic tradecraft to create a reasonable security posture.

The Source

This Vice article provides the source of the information for this blog post. Using some basic background knowledge on how covert groups operate, it is simple to parse and analyze the drug delivery service tradecraft.

Recruitment

a friend of mine solicited hardcore drugs for a Manhattan drug kingpin, who was looking for a new pot delivery guy. My friend encouraged me to try out for the job.

As with many covert groups, the recruitment process relied on personal connections. This social network grounded approach to expanding a covert organisation is generally good for initial security. The recruits are unlikely to be agents sent to infiltrate the organisation, as the long-standing social ties between members and recruits both establish trust and serve as vetting.

Developing a covert organisation based on social network ties provides a means of rapid expansion and easy security clearance. The downside is that once a single member of the organisation is compromised, the adversarial security forces can easily roll up the whole network. The poor compartmentation of a social network based covert organisation is its Achilles heel. The security of the organisation is critically dependent on the security of each individual member.

ProTip: Expand your covert network with individuals who are passionate about your ideological beliefs. Ensure strong compartmentation, starting with recruitment.

Leverage

He asked me to provide documentation of my current address and phone number as an insurance policy. If I ran out on him, he warned me he’d hold my friends responsible for the deficit funds and/or drugs.

The principal of the organisation, “Nathan”, requires that the recruit provide a verifiable address and means of contact, along with dire warnings of consequences in the case of infractions. These are very basic control principles, typical of covert organisations.

The major security problem with this approach, of course, is that the records maintained by the network’s principal are a high value target for the adversary. Compromise of the principal’s records will lead to total collapse of the network, and interdiction for every member involved. There is no chance of evasion.

ProTip: No logs, no crime. Do not keep records of the members of your covert organisation. These records are extremely sensitive.

Operational Actions

the transaction and exit should be as swift as possible. “You aren’t here to hang out,” she said. “It’s not a social call, and they aren’t your friends. You want to walk in and be friendly and make conversation but also get to the business at hand and get out of there quickly.”

The illicit operation, the drug sale, is intended to be rapid and minimize the period of vulnerability for both parties. Interestingly, this is possibly a poor choice if the threat is surveillance. There are few reasons a random individual would enter a domicile for a short duration. Also of note, the covert organisation provides no reasonable cover story for why the agent (the drug courier) is entering the residence of the client. A simple “what were you doing?” type question would likely completely blow the whole operation.

ProTip: Minimising the period of vulnerability improves the chances of operational success. Always make sure your agents are capable of delivering plausible cover stories: cover for action.

Cover for Status

Nathan forced me to wear a button-up shirt and slacks, shave my face, and keep my hair conservatively short. He believed this uniform would attract little attention as I walked around with thousands of dollars worth of pot in a laptop case slung over my shoulder.

The covert organisation has, surprisingly enough, chosen to enforce a uniform that makes its agents blend in with the mainstream. This is completely in line with the typical operational disguises employed by covert organisations operating in controlled territory the world over. (See: Moscow Rules: go with the flow; Murphy’s Laws of War: don’t stand out, it draws fire)

ProTip: They got this one exactly right.

All phones are bugged

Although I used my flip-phone constantly at work, I was never given clients’ addresses over the phone. Clients’ calls would go to a dispatcher—a third party who took the call, traced the number through a database of numbers, and then returned the call from a different phone to confirm their request for drugs. After their request was confirmed, I received a call from another phone. The dispatcher only told me, “You got Nick,” or “You got Lucy.” I was banned from responding with anything besides a murmured “OK.”

Each operational use of the phone provides the adversary with minimal value. There is a unique identifier for the client (e.g. “Lucy”), and the agent acknowledges receipt of the directive (“OK”). The dispatcher’s interaction with the client is itself run over multiple phone lines and kept to short, simple, normal statements.

ProTip: This is very much in line with all covert organisations’ guidelines for using phones. Never use keywords, keep the content as vague as possible, minimize the period of vulnerability – get off the phone!

OPSEC FAIL: attracting attention

Each day I was given a stipend of $40 for cabs. No one knew if I didn’t spend the $40. Instead of taking cabs, I ran around in a frantic state that negated every other measure I took to not draw unwanted attention

This is an instance of preference divergence, a common problem for covert organisations. The financial resources provided by the principal to the agent are siphoned off and directed towards non-operational uses (the drug courier skims and pockets his cab stipend). There doesn’t appear to be any consequence to this operational security failure; however, it jeopardizes the entire organisation. If “Nathan” were a more disciplined principal he would monitor his agents more closely and ensure they conform to the organisational security requirements. Strangely, drug dealers are not strict disciplinarians.

ProTip: if the security of the entire organisation is dependent on the security of each individual agent – enforce the operational security requirements strictly!

Aliases

I shook his hand and said, “I’m Jack.” He gave me a knowing grin. “So that’s the name you’re using?” he asked.

The agent is using an alias to provide pseudonymity from malicious clients. This provides some minimal level of security. It is definitely better than not having any cover at all. However, as noted above, it should be combined with a robust cover story for why the agent is visiting a residential home for a brief period.

Discharging the agent

After a promotion, the drug courier decides to find a new line of work. If the organisation were stricter in its OPSEC practices, the departure of an agent wouldn’t place anyone else in jeopardy. As it stands, it seems clear that the agent, who is now drawing attention to himself by writing about his experience in a national magazine(!), still retains sufficiently sensitive information to unravel the network.

ProTip: compartment early, compartment often. It is safer than any alternative.

TL;DR

Compartment your covert organisation from recruitment through to operational action so that when your agents leave or are compromised they are unable to compromise the organisation. Ensure that your operational activities have good cover for status (e.g. a disguise) and cover for action (e.g. a strong cover story). Strong compartmentation, strong cover, and be aware of the risks of using social networks for building a covert organisation.

Thru a PORTAL Darkly

The Design and Implementation of P.O.R.T.A.L

The Personal Onion Router To Assure Liberty is designed to protect the user by isolating their computer behind a router that forces all traffic over the Tor network.

PORTAL Gooooooooooooooaaaaaaaaaaals!!!!!!

The goal of the PORTAL project is to create a compartmented network segment that can only send data to the Tor network. To accomplish this the PORTAL device itself is physically isolated and locked down to prevent malicious tampering originating from the protected network. So if the user’s computer is compromised by malware, the malware is unable to modify the Tor software or configuration, nor can it directly access the Internet (completely preventing IP address leakage). Additionally, the PORTAL is configured to fail closed – if the connection to Tor drops, the user loses their Internet access. Finally, the PORTAL is “idiot proof”: simply turn it on and it works.

The Implementation, the Pain, the Horror

The initial requirement was to develop PORTAL for a small personal sized router, such as the TP-Link 703N, 3040, or M1U. All of these devices are small, portable and support the OpenWRT open source router firmware. Unfortunately, it turns out that “small” and “portable” are synonymous with “weak” and “underpowered”.

Unfortunately, Tor is quite resource intensive for an embedded device. Tor uses 16MB of RAM and for complete functionality (requiring the GeoIP database) it occupies slightly over 1.2MB of squashfs space. The stock TP-LINK routers have only 4MB of flash and 16MB of RAM (later models have increased RAM). This caused a lot of problems when building early versions. A bare bones OpenWRT system stripped down to just support an Internet uplink USB device occupies 3.2MB of squashfs space. Using the power of math we see: 3.2 + 1.2 > 4.0. Fuck.

Enter The Dragon, or Chinese Hackers to the Rescue

Fortunately, the TP-LINK routers are not just small, they are also extremely hackable. They are very popular with hackers who have modified the hardware and expanded the capabilities of the stock device. I got in contact with a Chinese hacker who has upgraded the TP-LINK 703N to 16MB of flash and 64MB of RAM. Sweet. Using these modified routers development of the PORTAL became much much easier.

PORTAL System Architecture

The PORTAL requires a minimum of two network interfaces: one for the Internet uplink, and one for the isolated network segment. In order to protect the PORTAL from tampering by malware (or malicious users), it also requires a third administration interface. This can be either a serial console or a physical connection. The reason not to use WiFi for the administration network is that it would expose the administration interface to anyone within WiFi range, including, potentially, the user’s compromised laptop’s WiFi card.

Three Interfaces to Rule Them All

The requirement to protect the PORTAL from a malicious user caused some problems since the device hardware has very limited interfaces. The TP-LINK 703N has only:

* 1 x USB 2.0
* 1 x 100Mbit ethernet
* 1 x onboard wifi

All available interfaces are required to get us to the three networks we need:

* Tor: isolated proxy interface
    * Tor SOCKS proxy
    * Tor Transparent TCP proxy
    * Tor Transparent DNS proxy
    * DHCP (optional)
* Admin: configuration management interface
    * ssh
    * https (optional)
    * DHCP (optional)
* Internet: uplink connection interface
    * No services
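
To make the three-network split concrete, here is a rough sketch of what the Tor-facing services might look like in the Tor daemon’s configuration. This is illustrative only: the interface address (172.16.1.1) and the port numbers are assumptions, not the actual PORTAL configuration.

    # Sketch of a torrc for the Tor-facing interface (address and ports assumed)
    cat > /etc/tor/torrc <<'EOF'
    # SOCKS proxy for Tor-aware applications
    SocksPort 172.16.1.1:9050
    # Transparent TCP proxy: the firewall redirects all client TCP here
    TransPort 172.16.1.1:9040
    # Transparent DNS proxy: all name resolution is answered through Tor
    DNSPort 172.16.1.1:9053
    # Map .onion names into a reserved range so transparent clients can reach them
    VirtualAddrNetwork 10.192.0.0/10
    AutomapHostsOnResolve 1
    EOF

The transparent TCP and DNS proxies are what let unmodified applications on the isolated segment work without any Tor awareness of their own.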

Operational PORTAL

After the user has configured the Internet uplink, and made whatever other adjustments they wish, they shouldn’t need to connect to the Admin interface again. This leaves us with a very hard target for any attacker who wishes to unmask us (modulo any issues with Tor itself).

The PORTAL has been hardened to make it significantly more difficult for the user to make a mistake, or for an attacker to subvert the Tor protections. On the Tor network interface the only exposed ports are Tor’s DNS proxy, TCP proxy, and SOCKS proxy. Optionally, you can run DHCP on this network as well.

If, somehow, the firewall doesn’t work properly, you’re still safe because the PORTAL doesn’t actually route packets. The only way you can reach the Internet (regardless of which interface you’re connected to) is via Tor. This stops stupid mistakes, such as connecting to the Admin interface and forgetting to swap to the Tor network. Don’t worry, you can’t do that, it won’t work, you’re welcome.
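
To illustrate the fail-closed, no-routing policy, the firewall rules might look something like the sketch below. This is not the actual PORTAL firewall script: the interface names (br-tor for the isolated segment, wan for the uplink), the tor daemon’s username, and the ports from the torrc sketch above are all assumptions.

    # The PORTAL is a proxy, not a router: never forward packets
    iptables -P FORWARD DROP
    # Redirect client DNS on the isolated segment into Tor's DNS proxy
    iptables -t nat -A PREROUTING -i br-tor -p udp --dport 53 -j REDIRECT --to-ports 9053
    # Redirect all other client TCP into Tor's transparent proxy
    iptables -t nat -A PREROUTING -i br-tor ! -d 172.16.1.1 -p tcp --syn -j REDIRECT --to-ports 9040
    # Only the tor daemon itself may originate traffic on the uplink
    iptables -A OUTPUT -o wan -m owner --uid-owner tor -j ACCEPT
    iptables -A OUTPUT -o wan -j DROP

If the tor process dies, the redirect targets become closed ports and the uplink rules drop everything else: no Tor, no Internet.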

Final hardening is left up to the user who will have to assign the Admin and Tor networks to physical interfaces. There are security trade offs either way.

  • Medium Security:

    • Tor = WiFi
    • Admin = Ethernet
    • pros: ease of use
    • cons: pre-Tor plaintext will be broadcast over the AEther (see: Hammond)
  • Maximum Security:

    • Tor = Ethernet
    • Admin = WiFi
    • pros: ultra secure
    • cons: if an attacker cracks your WPA2 PSK, they’ll have access to your management sshd. Of course, they’ll be so physically close to you at that point, leaking your IP is the least of your worries.
    • NOTE: remove the WiFi card from your computer to block access via malware compromise

Just Do It

The PORTAL project has been migrated to the RaspberryPi, which has more power to support Tor. It requires more configuration, which is something I’ll work on; however, the ease of acquisition of the RPi makes this the current platform of choice. So go install PORTAL of Pi and compartment all of your sensitive operational activities inside an isolated Tor network.

You Can’t Get There From Here

There have been some responses to my post about the limitations of public countersurveillance tools. Most of them have focused on my statements about the limitations of the Tor network. I started to write a comment addressing one of the more coherent replies but then decided to simply post it here instead.

Rebuttal

The responses all wandered slightly off topic from what my post was about. The point was that simply installing and running off the shelf counter-surveillance software is not sufficient against a nation state level adversary. Saying “Install Tor” or “Install I2P” is not the correct way to develop a counterintelligence program. It is not even the correct place to start. Those tools may be components of a CI program, but they are not sufficient in and of themselves.

To expand on what I was getting at in the post, the core issue is that when Tor and I2P and other countersurveillance solutions are developed, they are developed with certain assumptions about the capabilities of the adversary. For example, Tor does not work against an adversary who has total information awareness about the traffic on the Internet. The assumption for Tor is “adversary can monitor a subset of all IP traffic”, where subset usually equals “a single country”. Because we, the public, do not know the real capabilities of the adversary, those assumptions might be (and in some cases, likely are) completely incorrect. In this example, it is widely suspected that the US has the capability to monitor a significant portion of global IP traffic, not just limited to a single country. At a minimum we can assume that they will be able to get traffic logs for 5 eyes members, and most likely for all of NATO.

My article makes the claim that these off the shelf countersurveillance networks are insufficiently secure against nation state level adversaries. I also claim that we don’t know the capabilities of those adversaries, and therefore cannot know what technology would evade their surveillance capabilities. I stand by both claims.

My point regarding the cost of doubling the count of Tor exit nodes is simply that the financial cost of compromising the Tor network is not even a rounding error in a nation state budget. It is the equivalent of a portion of the change found in the couch. Furthermore, Tor is not new. It isn’t as if nation state level adversaries just woke up last week, “holy shit, this Tor thing! we better get on that!”. It is conceivable that a nation state has been setting up cover organisations, using agents, and compromising existing hosts for years with the sole goal of subverting the security of the Tor system. We have no way of knowing this because we have limited/no knowledge of their capabilities. Which was exactly my point.

Evil Exit Nodes Unmasked me, and all I got was this lousy jail term

To address the specific objections about “all smart Tor users know to encrypt traffic to combat malicious exit nodes”: yes, malicious snooping nodes can be evaded provided you are using encryption to another termination point. This is why I’ve recommended using a VPN over Tor to mitigate the monitoring that is done by evil exit nodes. However, an additional problem with a malicious exit node is simple traffic analysis, where the content of the data is irrelevant, but unmasking the end user is still possible. There are cases where unmasking an end user is sufficient, if they are going to “www.how-do-I-wage-jihad-in-the-usa.com.ir”, for example. If we take the case of a nation state level adversary who can monitor all IP traffic within their country, and we combine that with the same adversary operating (or monitoring) a significant percentage of exit nodes, then that adversary can trivially unmask Tor users. The cost of this operation would be well within the budget of any respectable intelligence agency.

Backlash caused severe pain in my lower nonspecific

Regarding the risk of backlash if it is known that a nation state has compromised all (or many) ISPs: firstly, we can all agree that the compromise of an ISP is well within the scope of an intelligence agency. If you have been around the underground long enough, you know how many different people and groups have compromised Tier 1 ISPs. But regarding the “backlash”, a nation state adversary will classify everything that could leak their tools, techniques and procedures. The means by which they collect information are usually as classified as, or even more classified than, the information they collect. It is not likely that they would ever willingly allow this information to become known. Frequently intelligence agencies will classify information simply because revealing that they know it would reveal their collection capability, and thus compromise their ability to exploit that capability in the future.

Which is what brings me back to the point I was getting at in the post. If you are engaged in activities which will put you up against a nation state level adversary, you have no knowledge of what their capabilities are. Fortunately for just about everyone (reading this), you do not have a nation state level adversary. A law enforcement agency, such as the FBI, will have access to some nation state level capabilities in certain circumstances. For example, if it was known that a trained al Quaida cell was operating in the continental US and using Tor for their communications platform, the NSA would very likely use whatever Tor unmasking capability they have to assist the FBI. They would do this in a blackbox fashion: get a request -> send a response. They would not reveal how they performed the unmasking because the FBI would not have people who are cleared for that information. (This is compartmentation in action.)

As a thought experiment, imagine that Osama bin Laden was still alive and that he used the Tor network to do a Reddit AMA once a month. How long do you imagine it would take for the US to find and neutralize him? I posted this question on Twitter and, while responses varied, ex-NSA Global Network Exploitation Analyst Charlie Miller guessed one to two months. I would be very surprised if it took more than three. This is because OBL had a nation state level adversary. You (probably) do not.

Good news everyone, nobody gives a fuck

There is good news, of course. Nation state level adversaries are concerned about nation state actors (and some non-nation state actors). They really don’t have the resources to spend monitoring law enforcement issues. Unless you are a policy maker, a ranking military official, an intelligence officer/agent, a member of a known terrorist organisation, or have somehow otherwise ended up on a targeting list, the Intelligence Community (IC) really doesn’t give a fuck about you. The product they produce for their clients - security cleared government officials - is documentation and analysis that helps these officials make informed policy decisions (or at least, that is the intention).

You Should OPSEC anyway

Now, as I advocate elsewhere, it is best to start your counterintelligence program early, because after you are targeted it is (usually) too late.

My central recommendation on how to operate safely, whether you are a hacker, a spy, a whistleblower, or whatever, is to implement compartmentation first. Classify the data which is sensitive (e.g. your real identity and anything linked to your real identity) and segregate it from everything related to your illicit activity. Preferably, physically separate it onto different machines. When conducting the illicit activity, use your illicit activity equipment, and do it over an internet link that cannot be linked to you. By all means, use Tor, or I2P, or a VPN, or whatever. But that technology must not be your primary and only line of defence.

This is how you do good CI. Develop a SOP that will protect your sensitive data even when things fail. That said, most of what will sink people is poor OPSEC, not poor SIGSEC. The more people that know about your illicit activity, the higher the chance that Murphy will raise his head and it’ll all end in tears.

Counterintelligence Cliff Notes

So, to reiterate, choosing a technology first and then relying on it for security is completely ass backwards. To do things properly, operate in this order: figure out what you are trying to protect (and from whom), separate it from everything else, and then select tools, techniques and procedures that will enable you to protect it.

Ignorance Is Strength

Seven, this rule is so underrated
Keep your family and business completely separated

Biggie Smalls Counterintelligence Theory and Practice for Crack Dealers

Guerrillas, Terrorists, Narcos, Spooks, and You

Guerrillas, terrorists, narcos and spooks the world over have learned the hard way how to keep their illicit activity safe from their opponents. The same principles of counterintelligence (CI) that help protect them from death can be applied to protect you from your adversary. If you engage in behavior that carries the risk of negative consequences from an adversary, you will need to develop and implement a robust CI program. This post will explain the foundations of strong OPSEC, a critical part of just such a program.

Establish Cells, or Live in One

The cornerstone of any solid counterintelligence program is compartmentation. Compartmentation is the separation of information, including people and activities, into discrete cells. These cells must have no interaction, access, or knowledge of each other. Enforcing ignorance between different cells prevents any one compartment from containing too much sensitive information. If any single cell is compromised, such as by an informant, the limits of the damage will be the boundaries of the cell.

Now, compartmenting an entire organisation is a difficult feat, and can seriously impede the ability of the organisation to learn and adapt to changing circumstances. However, these are not concerns that we need to address for an individual who is compartmenting their personal life from their illicit activity.

Spooks, such as CIA case officers, or KGB illegals, compartment their illicit activity (spying) from their “regular” lives. The first part of this is, of course, keeping their mouths shut about their illicit activities! There are many other important parts of tradecraft which are beyond the scope of this post. But remember, when you are compartmenting your life, the first rule is to never discuss your illicit activities with anyone outside of that compartment.

Compartmentation For Dummies

This will cover a basic set of guidelines for compartmenting a particular online activity. In our hypothetical scenario there are two people, Alice and Bob (natch), who want to exchange information with each other. They are deathly afraid that the adversary will learn (in ascending order of risk to Alice):

  • Two people have been in contact (low risk)
  • Bob has been in contact with someone (medium risk)
  • Alice has been in contact with someone (high risk)
  • Alice has been in contact with Bob (extreme risk)

While this guideline is a starting point for someone who seeks to conduct illicit activity under hostile internet surveillance, it is not a concrete set of rules. When developing a CI program you must evaluate the threats and risks to yourself and create a custom set of tools and procedures that address your needs. The specific SOP that you develop will differ from the outline below, but if it is to be resilient against the adversary it must be based on some form of compartmentation.

Step 1: Cleanliness is Next to Not-Being-in-Jailiness.

Alice must purchase new dedicated equipment used exclusively for communicating with Bob. This means: buy a new laptop. Don’t bother with just a new virtual machine; that isn’t sufficiently compartmented. Any existing equipment that Alice owns might already be compromised and is therefore not safe against potential monitoring.

The software installed should be the bare minimum of generic utilities required to do the communications. Here is an example setup:

  • Laptop (cover the webcam with tape, disable the mic if possible)
  • Virtualization Software (VBox, VMware, Parallels, etc)
  • Ubuntu installed in the VM (disable all the logging + reporting)
  • Recommended Software:
    • Tor Browser bundle
    • PGP (generate and store new keys on a USB drive)
    • OTR enabled chat client
  • Snapshot the VM

This is the base platform that Alice will use when contacting Bob. Obviously, Bob should go through the same process (if he faces similar risks, or is concerned about Alice’s wellbeing).

The usernames and hostnames used should be generic, not associated with Alice’s real name, location, place of work, etc. If the VM is compromised, there will be no identifying information, nor keys that can be used to decrypt previous comms. If the VM is escaped and the adversary has access to the host, again, there will be no identifying information. The host machine has only the virtualization software on it. Use full disk encryption on the host machine, and probably in the VM as well; use different passwords for the two, and keep the machine fully powered off when not in immediate active use.
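
For the PGP item in the list above, here is a minimal sketch of keeping the new keys off the VM disk entirely. It assumes GnuPG 2.x and a USB drive mounted at /media/usb; the mount point and the uid are placeholders, not recommendations.

    # Keep the new keyring on the USB drive, never on the VM disk
    export GNUPGHOME=/media/usb/gnupg
    mkdir -p "$GNUPGHOME" && chmod 700 "$GNUPGHOME"
    # Generate a fresh key pair under the new, generic cover identity
    gpg --full-generate-key
    # Export the public key to pass to Bob over the new channel
    gpg --armor --export cover-alias@example.invalid > for-bob.asc

Keeping the keys on removable media also means they survive the snapshot revert in Step 3, and are never captured inside a VM snapshot.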

Step 2: Take a Trip

Number 5: never sell no crack where you rest at
I don’t care if they want a ounce, tell ‘em “bounce!”

Biggie Smalls Counterintelligence Theory and Practice for Crack Dealers

Alice must ensure that every single time she contacts Bob, or checks for contact from Bob, she is in a location which is not linked to her. Additionally, she must use an internet connection which is not linked to her, for example a public WiFi or a prepaid 3G card.

When Alice goes to contact Bob, she must ensure that she does not carry any device which will transmit her physical location. For example, her mobile phone(s). Leave it at home.

Step 3: UnlinkedIn

After Alice has used her dedicated machine to communicate with Bob, she should revert the VM snapshot to the pristine state from right after she installed. This should limit the ability of the adversary to persist after a compromise (provided they didn’t escape the VM).
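
If Alice went with VirtualBox from the options in Step 1, the snapshot cycle might look like the following sketch (the VM name “comms” is a placeholder):

    # Once, immediately after the clean install in Step 1
    VBoxManage snapshot "comms" take pristine
    # After every contact with Bob: power off, then discard the session state
    VBoxManage controlvm "comms" poweroff
    VBoxManage snapshot "comms" restore pristine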

The converse-with-Bob machine must be used with new accounts created specifically, and used exclusively, to converse with Bob. These accounts must be created from the new machine, and never be used for anything except Bob related activity. Alice must create new accounts that don’t have any links to her real identity. For email, one option is a TorMail account. For instant messaging there is either Cryptocat over Tor, or a new Jabber account, such as with jabber.ccc.de.
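
As one hedged example of the chat setup: torsocks and pidgin (with its OTR plugin) are illustrative choices, not the only options, and both are assumed to be installed.

    # Force the chat client's connections through the local Tor SOCKS proxy
    torsocks pidgin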

Concluding Thoughts

The core concept to take away here is: a separate identity, with equipment and accounts, used only for one activity. The essence of compartmentation is separation without contamination. My strong recommendation is to use a virgin machine, with virgin accounts, to contact the target. This machine is used exclusively for this one activity: it is compartmented. Associating the activity of that online entity with a specific individual, even with full and complete global internet monitoring (and 0day attacks), should be difficult. [NOTE: don’t count on this if you happen to be the new al Quaida #3].

Good Luck With That

Story time

Back in the day we used to have AOL for internet access. If you’ve never suffered AOL, then you probably don’t know that it would disconnect you if the service didn’t detect any traffic for some period of time. It popped up an alert that said something like “no activity detected for 30 minutes. If there is no activity in the next 10 minutes, you will be disconnected”. When this dialog popped up my father would try to stay connected by moving the mouse around a bit. Obviously, this was completely ineffective.

The problem was that his understanding of the problem was completely wrong. His mental model of how the whole system worked was so flawed that he was unable to identify the steps he had to take to actually solve his problem.

I lolled

When I read articles and blog posts on “how to avoid surveillance”, or “how to stay anonymous online”, I am reminded of my father waving his mouse around to appease the dialog box, never understanding how completely wrong he was.

The publicly available tools for making yourself anonymous and free from surveillance are woefully ineffective when faced with a nation state adversary. We don’t even know how flawed our mental model is, let alone what our counter-surveillance actions actually achieve. As an example, the Tor network has only 3000 nodes, of which 1000 are exit nodes. Over a 24hr time period a connection will use approximately 10% of those exit nodes (under the default settings). If I were a gambling man, I’d wager money that there are at least 100 malicious Tor exit nodes doing passive monitoring. A nation state could double the number of Tor exit nodes for less than the cost of a smart bomb. A nation state can compromise enough ISPs to have monitoring capability over the majority of Tor entrance and exit nodes.

Other solutions are just as fragile, if not more so.

Basically, all I am trying to say is that the surveillance capability of the adversary (if you pick a nation state for an adversary) exceeds the evasion capability of the existing public tools. And we don’t even know what we should be doing to evade their surveillance.

Concluding remarks

Practicing effective counterintelligence on the internet is an extremely difficult process and requires planning, evaluating options, capital investment in hardware, and a clear goal in mind. If you just want to “stay anonymous from the NSA”, or whomever… good luck with that. My advice? Pick different adversaries.