Cyber Command’s General Lord Agrees to Interview on Slashdot

As a quick follow-up to my Blogs of War post, I thought I'd post this link to Slashdot. The funny thing is that, based on the recent actions of Gen Lord's very own AFNOC, neither the Slashdot thread nor this blog post will be visible from a USAF computer. But…

Apparently General Lord, the provisional Cyber Command commander (nice alliteration there, huh?), has agreed to take questions from Slashdot readers. I think this is pretty cool, and along with his other recent comments about culture it makes me think Gen Lord is really making an effort to attract talent to his nascent command.

Some of the comments showing up on the Slashdot thread are pretty good and some are just plain funny. Worth scrolling through.

• • •

The Blogs of War

The USAF has recently clamped down on access to blogs from internal networks, as reported at Wired and elsewhere. There is no point in my joining the general outcry; the AF is already getting an earful. I guess I will say that I'm disappointed that my readership is going to plummet from its recent peak of about ten to more like two, as it has already been confirmed that this blog is blocked at the Electronic Systems Center. The two of you still here from Army CECOM, stick around; I'll try to switch back to SOSCOE soon…

By emphasizing only "official news sources," the Air Force will clearly be limiting its access to all manner of valuable information, such as Michael Yon's inside look at what's going on in Iraq. But maybe more to the point, I can't imagine why Cyber Command would want to limit access to blogs like The Dark Visitor and TaoSecurity. The long tail IS where it's at in this space. You can't rely on CNN to learn about Chinese hackers and network security monitoring trends.

I spent a decade in the Navy and I can intuitively understand how a hierarchical military culture will respond to external stimuli such as the web. Control it, contain it, deny it and certainly don’t let anyone even appear to be having fun with it! (Fun will only be authorized between the hours of 1900 and 2100 in designated areas with approved fun augmentation devices such as a volleyball – unless of course the command is sponsoring mandatory fun at the Morale Welfare and Recreation Club in which case fun will be required between 1500 and 1700 and may involve egg tossing).

When I left the service and began the arduous process of de-institutionalization, I learned some important lessons from one of my first bosses, Jay Brown. Chowderhead (a colleague of mine, also ex-Navy, with a somewhat unconventional and unfortunate call sign that stuck with him) and I would be trying to figure out how to better control some situation or another, and Jay would just sit back and say nothing. Jay simply excelled at letting stuff play out a bit, letting it percolate and eventually evolve into a good solution. He encouraged us to do the same, even when it didn't seem to make sense. For a couple of ex-Naval officers with a strong "J" preference at the end of our Myers-Briggs, this "letting things happen" could be absolutely maddening. But over time I really came to appreciate what he was teaching us: how (and when) to let things go and trust in the evolving wisdom of the people who worked for us.

As warfare becomes more networked and traditional hierarchical command and control becomes less applicable to at least some areas of the modern fight, I think the lesson that Jay taught Chowderhead and me is becoming more and more relevant to a whole generation of military leaders. It boils down to expectations. If you expect your people to frivolously waste time, and that only by actively controlling them will they accomplish anything useful, you will see their visits to blogs one way. If you expect them to be aggressively learning, adapting, and sharing information, and you trust them to do it, you will see their use of blogs in a completely different light. Some of them will violate that trust and goof off, but so what? Everyone else will be so much more productive and engaged.

So, with that as background, I'm going to briefly take on the orthodoxy a bit. Out here in blog land it is easy for us to know better and say it: "Let them read blogs!" Back seat drivers, Monday morning quarterbacks… we are like them in that we can take a position without owning the risk. Though I suspect that at least part of this blog shutdown is driven by the kind of reactions I describe above, I doubt the Air Force is saying everything that factors into the decision. I'm betting that the AF is also seeing trends like this (from Sinan Eren's recent Blackhat presentation)…

[Slide from Sinan Eren's Blackhat presentation]

…and reacting to avoid widespread deployment of new classes of botnets on NIPRNet.

• • •

Getting Owned Across the Air Gap

I attended a fascinating talk yesterday at Blackhat given by Sinan Eren from Immunity in which he described a recent for-hire Information Operation.

In the talk he took pains to differentiate between a standard penetration test and the kinds of things they were doing, the primary differences being time scale and scope. In this case the time scale was long (though undisclosed) and the goal was the compromise of some particularly sensitive data. He didn't say what, but it was probably product designs or source code.

To maintain a stealthy ingress, they decided to avoid easily exploited client-side weaknesses and instead found something much more difficult to detect: a poorly implemented antivirus scanner on the mail transfer agent. After fingerprinting the MTA, building an equivalent one in their lab, and coding a unique one-time exploit for the poorly implemented AV file parser, they were in. Consolidation and expansion proceeded at a leisurely pace, greatly aided by the social engineering benefits of the MTA's access to all of the email traffic. Within a reasonable period of time they were able to map relationships among many of the target's personnel, expand to the other side of the firewall, quietly exploit a number of client machines, and gain a good understanding of who was likely to have access to the information they were looking for.
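
He didn't walk through the tooling, but to give a flavor of the fingerprinting step: it can start with something as simple as an SMTP banner grab. A minimal sketch of my own (not theirs; the host name is a placeholder):

```python
# Minimal SMTP banner/capability grab -- my sketch of the general idea,
# not the tooling from the talk. "mail.example.com" is a placeholder.
import socket

def fingerprint_mta(host, port=25, timeout=10.0):
    """Return the SMTP banner and EHLO response, which often reveal the
    MTA product, version, and any filtering/AV layer in the path."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        banner = sock.recv(1024).decode(errors="replace")
        sock.sendall(b"EHLO fingerprint.test\r\n")
        ehlo = sock.recv(4096).decode(errors="replace")
        sock.sendall(b"QUIT\r\n")
    return banner + ehlo

if __name__ == "__main__":
    print(fingerprint_mta("mail.example.com"))
```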

Then interesting stuff happened.

They began to find file references to the stuff they were looking for on a user workstation. The references ultimately pointed to a USB drive that had been accessed at some point in the past. It turns out that the target company was running a separate air-gapped internal network where it segregated development and other sensitive activities. However, one of the developers had questions that needed answering over email and was using the USB drive to carry bits and pieces across the air gap so that he could email them as attachments with his questions. Unfortunately, it sounds like the USB drive was used for backup as well and had more than just the snippets on it. After finding, testing, and deploying an exploit that would suck the contents out of the USB drive the next time it was inserted, the attackers just needed to wait until the next time the developer had a question.
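
The talk didn't describe the implant, but the "wait for the next insertion" part is mechanically trivial. A toy illustration of the polling side, entirely my own sketch (it uses the psutil library, and the "removable" mount flag it checks is Windows-oriented):

```python
# Toy illustration of "wait for the drive to come back": poll mounted
# volumes and notice when a removable one appears. My own sketch, not
# anything described in the talk. Requires psutil; the "removable"
# mount flag is Windows-oriented.
import time
import psutil

def removable_mounts():
    return {p.mountpoint for p in psutil.disk_partitions()
            if "removable" in p.opts}

def watch(poll_seconds=5.0):
    known = removable_mounts()
    while True:
        current = removable_mounts()
        for mount in current - known:
            # A real implant would start copying files here; we just log.
            print("removable volume mounted at", mount)
        known = current
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```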

Once they had the contents of the USB drive they ended their IO and reported out to their client. However, had they been dedicated to ongoing operations against the target organization, it is not inconceivable that they could have gone further than just retrieving data from the drive. A deliberate, low-and-slow continuation would probably have kept the USB-copy-and-retrieve going until it panned out and they were no longer retrieving significant new information. With that vein mined, they might have escalated the level of detection they were willing to risk and tried to deploy an exploit across the air gap by writing to the USB drive the next time it was inserted (that's not my idea; it was quickly suggested by members of the audience who sounded like they had a good idea of how it would be done). Given a long view, a cleverly designed set of USB-transported exploits, and those occasional sneaker-enabled transits, the effective measured impedance between those two networks is near zero.

Obviously it is interesting that the air-gapped network was as vulnerable as it turned out to be. I'm sure a government-on-government exercise like this would have a lot of people thinking hard about existing assumptions. But perhaps more interesting is the fact that almost all of the exploits used were purpose-built for this operation and so were completely invisible to AV signature matching or rules-based detection schemes. Well-trained, motivated, and aggressive internal analysts with the right tools might have discovered what was going on, but no automated tool was likely to, as there would have been no pattern or signature for it to match against.
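
To make that concrete: a signature scanner can only flag hashes or byte patterns it has seen before, so a payload built once, for one target, sails right past it. A toy illustration (the "signatures" below are invented):

```python
# Toy demonstration of why one-off payloads defeat signature matching:
# the scanner can only flag hashes or byte patterns it already knows.
# The "signatures" here are invented for illustration.
import hashlib

KNOWN_BAD_HASHES = {
    # Hashes of previously catalogued samples (values made up).
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}
KNOWN_BAD_PATTERNS = [b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"]

def signature_scan(payload):
    """Return True only if the payload matches a known hash or pattern."""
    digest = hashlib.sha256(payload).hexdigest()
    return (digest in KNOWN_BAD_HASHES
            or any(p in payload for p in KNOWN_BAD_PATTERNS))

# A payload written once, for one target, has a hash nobody has
# catalogued and shares no byte pattern with prior samples:
unique_payload = b"\x90" * 16 + b"custom one-time exploit body"
print(signature_scan(unique_payload))  # False: invisible to this scanner
```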

I am curious how many "typical" intrusion attempts were discovered and warded off by the target during the same period. The "usual stuff," by creating a high-noise baseline that keeps defenders feeling like they are accomplishing something with their automated tools, would probably help hide the signals of a determined and unique operation such as this one.

• • •

Culture in Cyber Command

I love the fact that Gen Lord is talking about cultural differences between the nascent Cyber Command and the rest of the service. Culture will certainly matter in this space, where it isn't just about operationalizing what we already know but about out-innovating the enemy. The rest of the article goes on to talk about the politics of location. I hope that the impact of location on culture, and on the ability of the new command to make connections outside the traditional band of merry metal benders, is being considered as well.

• • •

Campaigning, Technology, and Supporter Generated Content

I'm accustomed to receiving invitations to donate money to political campaigns, but today I received an invitation to participate more directly: a calling campaign run through Obama's virtualized campaign phonebank. Of course, McCain and Hillary have them too. It's not a groundbreaking use of technology per se, but it feels groundbreaking in the political sphere, where we are generally expected to passively absorb.

They aren't really doing much more than providing a number to call and a script. Perhaps if they provided an embedded VoIP client they wouldn't have to include this statement about costs (which, I guess, are essentially contributions that fall outside contribution accounting).

[Screenshot: the hillaryclinton.com "Make Calls" page, with its statement about calling costs]

Obama’s site is noteworthy for leveraging the power of game mechanics and creating an opportunity to be recognized as an uber supporter.

[Screenshot: the Barack Obama "Change We Can Believe In" phonebank page]

Though these virtual phonebanks are simple uses of technology, I hope they are a hint of the power of networks to create a more participative future political environment.

• • •

FCS, SOSCOE, and the Big Bang

It bums me out to read statements like this one in this article about FCS/SOSCOE:

The software program “started prematurely. They didn’t have a solid knowledge base,” said Bill Graveline, a GAO official involved in the government’s ongoing review. “They didn’t really understand the requirements.”

That isn't to say that I think FCS/SOSCOE is on track and being developed the best way; it's just that I think these kinds of statements perpetuate the idea that software of this magnitude should be written as a Big Bang after every requirement is fully understood. There are 3,000 developers working for nine years at a cost of $6B, and the expected value curve is supposed to look like this:

[Chart: expected value flat at zero for nine years, then jumping suddenly to full value at delivery]

Contrast this with something like Linux where value has tracked much more closely with effort:

[Chart: value accruing gradually, roughly tracking effort over time]

Why the difference? Well, unlike an open source project like Linux, which slowly moves its way up the food chain from departmental web servers to mission-critical applications as it matures, the government acquisition system tends to assume that at eight years and 364 days SOSCOE is an entry in an earned value report. Then, suddenly, as the calendar turns over to nine years, it is hatched as a fully functioning, completed system ready for operational deployment. In the meantime, since it hasn't been "delivered," it isn't available to be used anywhere.
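
If you want the two curves side by side, they amount to a step function versus something close to linear. A quick sketch (the shapes are notional, not data from either program):

```python
# Notional value curves: "Big Bang" delivery vs. incremental delivery.
# The shapes are illustrative only -- no real program data behind them.
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(0, 9, 200)
big_bang = np.where(years < 9, 0.0, 1.0)  # all value appears at delivery
incremental = years / 9.0                 # value roughly tracks effort

plt.plot(years, big_bang, label="Big Bang (planned SOSCOE curve)")
plt.plot(years, incremental, label="Incremental (Linux-style)")
plt.xlabel("Years of effort")
plt.ylabel("Delivered value (normalized)")
plt.legend()
plt.show()
```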

I would love to see large-scale software developments like this thought of in much more incremental, evolutionary terms. I'd also like to see a greater degree of transparency and openness, so that incremental value could be provided along the way, even to completely unrelated programs. After all, many of the component parts of SOSCOE are Lego blocks that could readily be used in other environments.

In fact, my first graph is probably completely wrong. Without the hardening that comes from incremental use, it is much more likely that the budgeted nine years / $6B grows dramatically (as it did for Vista) before the value bit can be flipped.