Open Technology

I can’t believe it has been a month since my last post. Blogging is a bit like taking vitamins every morning; if you forget for a day, suddenly you realize you’ve forgotten for a month.

I attended the AFEI Open Technology conference in DC last week and am still absorbing what I heard.

Chuck Riechers, Principal Deputy Assistant Secretary of the Air Force for Acquisition, described in his keynote a three-step plan for the DoD’s adoption of open source / open technology: start with the acceptance of open standards, interfaces, and data (crawl); move toward the adoption of Open Source Software concepts and methods for internal DoD development (walk); and finally leverage shared source repositories (run).

Unfortunately, the rest of the conference seemed stuck in the rut of “Really, you can use open source software like COTS.” It is difficult to engage in a discussion of the more complex use of OSS concepts and approaches to build DoD infrastructure when the culture is still absorbing the more basic idea of using COTS OSS (e.g. “really, you are allowed to crawl”).

The standard concerns of “Who will support it?”, “Is it secure?”, “How can I control feature direction?”, and “How can we get it certified and accredited if no vendor is advocating for it?” keep coming up, and will keep coming up for a long time. These and similar concerns have been rehashed ad nauseam at every open technology discussion I have attended in the last year, leaving little time for discussion of the more interesting “walk” and “run” stages.

I guess I shouldn’t be surprised. It has been less than a decade since the idea of using any COTS systems really started to get traction in the DoD. As recently as the early ’90s, systems like the predecessor of GCCS were still running on purpose-built Honeywell hardware. The Army’s AFATDS fire control system first shipped only a few years ago with a 1980s-era, TP4-based proprietary networking system. In the Navy, the use of open architectures shipboard is still a very early trend. Historically, everything from silicon to disk drives was designed from scratch for each new capability that came down the pike.

There are some positive trends.

Brig Gen Justice from C3T at Fort Monmouth noted that recent versions of FBCB2 are now based on Linux. Linux seems to be turning up everywhere, including in the foundation of the Army’s System of Systems Common Operating Environment (SOSCOE).

Despite these wins, I mostly just want to say “so what.”

In the commercial world there is a well-defined trend toward cost sharing as the infrastructure stack moves to open source, from core IP network elements to the OS and now on to higher-order middleware. Widely used, shared infrastructure elements tend to go open source.

Where is this trend in the DoD? It is one thing to consume from the bottom of the commercial OSS infrastructure stack, but the DoD has some mission-specific infrastructure needs. When will that DoD-specific infrastructure stack start being addressed in a more open way?

Take something like SOSCOE as an example.

SOSCOE is supposed to be the Army’s foundation for an entire ecosystem of applications and capabilities across the wide-ranging FCS program (and maybe, ultimately, wider). Essentially it is a traditional operating system with intermittent P2P networking and service abstraction built in (maybe an oversimplification, but that’s basically it).

Imagine if SOSCOE were an open source initiative being developed incrementally to satisfy near-term requirements with source-level involvement from the FCS application development community (and those capabilities were going to be rapidly fielded in Strykers and elsewhere). In other words, imagine if it were being developed the way Linux was: short, incremental versions that actually go into production, with a high degree of involvement from the consuming / contributing community throughout the development process.

The benefits of such an approach would include:
– a better, more bulletproof product whose usefulness is proven at every step of development
– an opportunity to gain value from early increments, and to prove those increments, while the software moves up the capability curve
– an engaged development community whose insights and inputs from actually using the product would feed back in the form of useful code and design insights
– a much more level playing field for competition within the developer community / SOSCOE ecosystem, resulting in lower cost and better capability for the Army.

Unfortunately it doesn’t work this way. The Army won’t even “own” SOSCOE until it is “finished” and the DD250 (Material Inspection and Receiving Report) is signed years from now. In the meantime, incremental “releases” are being delivered to the government, but their use and distribution are severely restricted.

Imagine if Linux had been in development for ten years, with increments being inspected and tested on a limited basis but restricted from use in a production environment, and then it was suddenly shipped to millions of users and placed into production in data centers all over the country.

The way the acquisition rules are being interpreted for software infrastructure creates the unrealistic expectation that there is a moment in time when the software is done (DD250 signing day), before which it can’t be used, and after which it is widely used at full capability.

There are a number of reasons why things like SOSCOE aren’t developed in the DoD the way they would (or could) be in the outside world. But maybe the most fundamental reason has nothing to do with technology or the acquisition rules per se. It fundamentally boils down to incentives.

In the commercial world, markets are not fixed; the pie can get bigger. In such an environment it makes perfect sense for IBM to support the development of Apache or for Sun to support OpenESB. By commoditizing the infrastructure in a cost-sharing way, they can focus their efforts higher up the stack and build a business model around support, integration, and higher-order dependent products.

In another example, it makes perfect sense for a variety of competing Wall Street firms to collaborate on the development of the commodity AMQP messaging standard. They can spend cost-shared dollars to get a highly scalable commodity product.

To look briefly at one more model, the dynamics of today’s commercial markets also make sense of the approach pursued by JBoss and OpenSQL. Essentially these companies conduct near-traditional product development but open the process, intentionally commoditizing the product for wide distribution, and then make money on ancillary services or related products (while also gaining the benefits of an engaged, participating user community).

Would any of these models make sense for the DoD in the context of infrastructure such as SOSCOE?

Could Boeing act like IBM or Sun and essentially manage an open source community (either gated within the DoD contracting community or wide open) to “layer” the needed additional capabilities on top of Linux?

Could they act like JBoss and essentially do what they are doing now, but open the repository and process to the community (again, gated or not depending on the sensitivity of the effort)?

Or, could Boeing be eliminated from the process altogether by simply having the FCS application development community coordinate their own efforts to develop some or all of SOSCOE as an OSS collaborative community?

I will address the advantages of and barriers to each of these models in future posts.
