Open Technology III – pushback

I had a reality-check moment this week in my thinking about “GLOSS” (a term I invented in this previous post to describe a DoD gated open source project) during a conversation at an Air Force Research Lab. In so many words I was informed that I’m pretty much clueless (the occasional reminder never hurts). Even though my credibility as an engineer was openly questioned, I appreciated the discussion, heated as it was, because it helped me constrain and inform the idea of gated open source in the DoD (GLOSS) with the very real cultural, contractual, and legal barriers to implementing it. Better to know the reality and face it than to blindly forge ahead and get nowhere.

The conversation occurred with researchers who have developed a very sophisticated component of a federated training environment. In the midst of a conversation about the difficulty of coordinating the ongoing development of such a complex product with a broad community of stakeholders, I naively asked whether they had considered an open-source-styled collaboration approach to its ongoing development. I suggested that this would let the core team continue to focus on the fundamental capabilities while members of a wider community might provide value in the integration frameworks, human interface, and in other areas where the lab’s core team might not have deep expertise. Furthermore, I suggested that by essentially commoditizing the capability within this specialized community, the government would be able to get better return on its investment by encouraging wider adoption. I suggested that this approach would simultaneously provide the very disciplined configuration management of the source code that they desired.

Some quick background might be useful for perspective. The product under discussion has taken approximately four years to develop with a team of roughly six people, and was built largely to meet the internal needs of the lab’s testing and research. However, it has proven to have great utility and wider applicability. The developers have for the most part been research scientists and have focused on complex physical-world emulation while paying less attention to the non-core elements of the capability (they would probably argue this point).

The software is very sophisticated and effective at modeling the complex physical processes it is designed to model, but, as we discovered in a recent project to integrate it with other systems, it suffers from some typical “lab architecture.” Architecturally it reminds me a bit of a project I worked on in the ’90s that was derived from research at Carnegie Mellon University’s speech recognition and AI labs. Insanely sophisticated internals wrapped in what can only be described as a naive application and integration architecture.

So, back to what I learned the other day.

One of the key pushbacks in this case to an open source approach (even a gated community) is the perception that opening the code will result in a loss of control. In one sense the lab is concerned that by opening the code they will no longer control the capability’s fundamental direction. Perhaps a greater concern, though, is their perception that by opening it they will simply pave the way for an unscrupulous contractor to take the code and figure out a way to sell it back to the government. Given that this is well within the realm of possibility, I can appreciate the vehemence with which this objection was offered.

There also seemed to be an inordinate amount of concern about the cost of managing an open source community style of working. I think this issue was really linked to the loss of control and was a surrogate argument for keeping all of the development work (and associated funding) at the lab.

There would of course be costs associated with managing such a community, from infrastructure to community management, so it would only make sense if the gains offset those costs. This may be a case of cost-value mismatch, in the sense that the lab might perceive, from a budget point of view, that it is absorbing all of the cost while the benefits are largely distributed outside the lab. This may or may not be true, but even if it is, it is probably a case of a local minimum short-circuiting a global maximum.

An important aspect of this capability is the need to protect the IP from foreign disclosure. Little of the application code is classified (if any of it is; the parametric data associated with it certainly is), but it would definitely be subject to export restrictions. So another significant issue raised by the lab is the overhead associated with verifying the credentials of potential community members. The need to ensure that only authorized and authenticated individuals can gain access to the source repository is well founded. However, they were equally concerned with verifying the software development skills of potential community members and contributors; they did not understand the hierarchy of involvement within an open source community and the idea that commit privileges are earned over time.
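To make that hierarchy concrete, here is a minimal sketch in Python; the role names, the vetting flag, and the promotion rule are entirely my own assumptions, not a description of any actual lab or DoD system. The point is the separation of two gates the lab seemed to be conflating: vetting, which controls who may see the code at all, and earned trust, which controls who may change it.

```python
from dataclasses import dataclass

# Hypothetical tiers of involvement in a gated community; the role
# names and the promotion threshold below are assumptions made purely
# to illustrate the model.
PROMOTION_THRESHOLD = 5  # accepted patches needed to become a committer

@dataclass
class Member:
    name: str
    vetted: bool = False      # passed export-control / credential check
    role: str = "observer"    # observer -> contributor -> committer
    accepted_patches: int = 0

    def can_read(self) -> bool:
        # Gated model: even read access to the repository requires vetting.
        return self.vetted

    def can_commit(self) -> bool:
        # Commit rights require both vetting and earned trust.
        return self.vetted and self.role == "committer"

    def submit_patch(self, accepted: bool) -> None:
        # Patches from vetted contributors are reviewed by committers;
        # a track record of accepted patches earns direct commit rights.
        if not self.vetted:
            raise PermissionError(f"{self.name} has not been vetted")
        if accepted:
            self.accepted_patches += 1
        if self.role == "contributor" and self.accepted_patches >= PROMOTION_THRESHOLD:
            self.role = "committer"

# Usage: a newcomer is vetted first, then earns commit rights over time.
alice = Member("alice", vetted=True, role="contributor")
for _ in range(PROMOTION_THRESHOLD):
    alice.submit_patch(accepted=True)
assert alice.can_read() and alice.can_commit()
```

In other words, a gated community does not need to verify development skill up front; it only needs to verify identity and authorization up front, and let the review process sort out skill.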

Finally, they took exception to my position that commoditization of this capability would result in wider adoption within the targeted community. The issue here is that a number of contractors are developing products that at least overlap with the space this capability occupies. The lab claims that it is regularly threatened with lawsuits from angry contractors who argue that the government is unfairly competing with them if it fields this software outside of a pure R&D environment. I agree that this issue cannot be ignored, and the government spending tax dollars to field open-sourced code in a “public works” mode will always raise the ire of contractors. But that same government built a network of superhighways when the country was already well served by a network of rails. I guess my reaction to this concern is: if it is a “public good,” build it and deal with the lawsuits.

To summarize the issues raised:
– It is expensive and difficult to manage the community, and if community members (other programs) submit code they are going to demand that it belong to them regardless of the “GGL/GLOSS” licensing.
– The system is very sensitive from a classification and distribution point of view; this will make it even more difficult to create a community.
– There are significant Government / Contractor competitive issues with this approach.
– We will lose control over product direction and the source.

More grist for the mill.

Comments

  1. Chris Gunderson - April 10, 2007 @ 8:23 am

Jim’s captured a great example of DoD’s “netcentric knowing/doing gap”. “Netcentric Operations” is the DoD term that corresponds to best practices associated with distributed, collaborative, adaptive enterprises on the Internet. For example, open source communities do “netcentric” software development. Dell does “netcentric” manufacturing. FedEx does “netcentric” supply chain management. DoD wants to do “netcentric” information management to achieve an advantage over its enemies, like Al Qaeda.

DoD leaders are talking a good game about the need to create “flat” processes that encourage innovative risk taking and enable cross-program collaboration. However, the current DoD ultra-conservative hierarchical structure, which is reinforced by regulations, long-standing culture, and incentive models, is anathema to that notion. Jeffrey Pfeffer’s book, The Knowing-Doing Gap: How Smart Companies Turn Knowledge into Action, explains how, despite a clear understanding of the need for deep change, most organizations are incapable of getting out of their own way to bring it about. Einstein might put it another way: it is unlikely that the same set of parameters and boundary conditions that created a problem will provide the solution. The evidence in Jim’s blog posting supports both characterizations of the issue vis-à-vis DoD and its stated need to adopt open, collaborative (i.e. “netcentric”) technology development.

Whether you prefer Pfeffer’s or Einstein’s characterization, IMHO creating a flat, innovative process among long-standing conservative hierarchies will require stepping outside the conservative hierarchy. That is, DoD must deliberately leave the bounds of the Federal Acquisition Regulations to look for netcentric solutions. U.S. Code Title 10, which defines the mission of the U.S. military, recognizes the need for occasional departures from standard acquisition approaches in a number of its articles. The structure of independent, commercial, not-for-profit organizations has been successfully employed by both the commercial Internet communities and the U.S. government for rapid discovery and standardization of best practices. Many years ago DoD created the not-for-profit, industrial, Federally Funded Research and Development Centers (FFRDCs) to solve hard problems in an independent and unconstrained environment. It’s time for DoD to nurture a new kind of “.org” appropriate for the age of the Global Information Grid.

  2. Brad Cox - April 10, 2007 @ 7:04 pm

    Jim, your blog is precisely tracking my own thinking of late. I find this thought experiment helpful. Pick any successful interoperability instance (plumbing parts for example) and ask “Why did interop succeed here and not in DOD IT?”.

In every example I’ve tried, the answer’s always the same. The parties on both sides of that interface stand to make more $$ by agreeing on an interface spec than they could by going their own way. $$ can mean dollars, but that’s not essential; it could be any level of Maslow’s pyramid that the parties really care about.

When that’s true, the two sides act like opposed magnetic poles; they can’t avoid cooperating even if they wanted to. When that’s false, it’s the opposite; they couldn’t agree *even if they wanted to*.

    (One of) DOD’s problems is that the “what’s in it for me” rule leads to opposing poles for digital and/or intellectual property. DOD clearly wants and needs interoperability. But one party’s gain is not enough to make it happen. Contractors see standard interfaces as time wasted agreeing on specs and *less* opportunity for $$ when things work off-the-shelf.

JBOSS/RedHat/Apache have found alternative models that do seem to work. The reputation model works for individuals and arguably could be made to work for contractors (that is basically my pitch to Binary). There are many others, a radical one being to meter use of the ESB and pay contributors from the revenue. There are probably other business models I’ve missed.

Bottom line, such “what’s in it for me” issues are a social binding force like gravity; the weakest of all Newtonian forces, but by far the longest-range and thus utterly inescapable.

Until this question is crisply addressed to all parties’ satisfaction, what we have is a glider, easily lifted by perturbations around it. But gravity ultimately has its way. For something that can haul large loads safely, you’ve gotta add an engine and put fuel in the tank.

    In other words the open source notion needs a crisp and credible elevator speech that answers the “what’s in it for me” question. Else gravity will have its way, regardless of everything else.
