The Cybernetic Stall – Emergence isn’t Optional

I’ve been thinking a lot lately about Intentional Emergence in the defense IT enterprise. I advocate for emergent approaches like open source because I don’t think buckets of money can continue to make up for central planning’s failures, because it’s frustrating to work in an environment where good ideas languish, and, if I’m honest, because I think it would just be a lot more fun if the enterprise felt more like the web. Lately, however, I’ve become convinced that facilitating emergence with policy and practice isn’t just nice to have, it’s absolutely necessary. In this post I’ll try to explain why. The argument goes like this:

People cooperate to make society function. In capitalist societies we do this by combining our efforts into companies and other organizations that are centrally planned and centrally controlled. However, beyond a certain scale central planning falls down. It gets unwieldy. So, instead of having one giant company that plans and controls everything (or a socialist government standing in for it), these independently planned entities interact in a market that isn’t centrally controlled; it’s emergent. The market has some simple rules, and out of it emerge both prices and patterns of behavior.

Long before Complexity Theory, Social Physics, Chaos Theory with all its talk of basins of attraction and the like, or any of the other modern offspring of Cybernetics came along, Friedrich Hayek wrote The Road to Serfdom. In it he made the argument that centrally planned economies would always result in authoritarianism and the failure of democracy (or, more generally, freedom). The planned economies of Nazi Germany and the Soviet Union, though starting at opposite ends of the political spectrum, kindly reinforced his point by sliding into near mirror-image totalitarian states. The planners’ intentions made no difference and the Gulag filled up with well-intentioned old Bolsheviks.

So, what does this have to do with Defense Enterprise IT?

The Defense Enterprise is huge. Whether in terms of number of participants, dollars spent, assets, or any other measure, it’s massive. In fact, it has never really been a single enterprise, if the definition of “enterprise” includes the span of control of a single controlling entity. It has always been broken up into near-independent organizations with different missions, cultures, financials, and so on to make each sub-enterprise’s span of control more reasonable. Its scope and activities make it a lot like an economy filled with at least partially independent organizations.

Inside that super enterprise, individual IT systems (and system acquisitions) proceeded in near isolation. They had external requirements and policy to follow, but those changed slowly and all of the money, authority, and accountability for a given effort intersected at a single program office. The program office might have a devilishly complex task to accomplish, but at least it (mostly) controlled its own destiny.

Of course, I’m not describing a perfect world. Many of these programs were themselves very large by the standards of commercial enterprise. As program size and the necessary controls that go with size increased, a greater portion of their overall effort was consumed by controls instead of code. As a result, acquisition cycle times and overhead have increased dramatically in the last thirty years, and so have failures. Also, given the lengths of time involved, by the time a system was delivered it often turned out that no one really cared about it anymore.

This has been true from the very beginning of the DoD’s IT enterprise. In fact the precedent was set by one of the earliest defense IT systems, the Semi-Automatic Ground Environment (SAGE). It was a massive undertaking that required the simultaneous invention of the first real-time digital computer (Whirlwind) and a continent-wide radar system to go with it. Its completion was lauded as a testament to the newly devised methods of systems engineering, processes we are still living with today. What’s talked about less is that it took so long to deliver that it never served its intended purpose. By the time it was operational, strategic bombers had been supplanted by ICBMs and the need to vector fighter defense on a continent-wide scale had gone away. SAGE became a useful but very expensive air traffic control system until the early 1980s, when it was finally retired.

So, for many years we’ve had programs of such massive size that they have been pressing up against (and frequently exceeding) the limits of our ability to centrally plan and manage them. With that size has come a decline in the ratio of value achieved to money spent, while failure rates have increased. But we could live with that because we had the richest economy on the planet, and now and again, through sheer force of will, all of that systems engineering managed to spit out a useful system, one that, even if its time had passed, could at least be put to some other related and useful purpose.

More recently, however, the DoD has started a migration to NetCentricity. NetCentricity is a revolution in the use of information in warfare. If its proponents are right it will mark a discontinuity in warfare as big as the advent of armor or air warfare. Strictly speaking NetCentricity isn’t about technology; it is about the power of networked organizations, rapid information flow, and self-synchronization to serve as force multipliers and enhance the operational art of maneuver. Technology is a necessary enabler, though, and the key to this discussion is the fact that NetCentricity requires systems all across the DoD super-enterprise to share information. For people to have shared awareness and self-synchronization in the cognitive domain, their information systems must be connected.

The Software Engineering Institute at Carnegie Mellon University recently published a paper on Ultra-Large-Scale (ULS) systems (pdf) that goes into great detail about the challenges that come with this evolution – from thousands of independent large (or very large) systems into a single ultra-large system fabric. I think one of its most important points is that ULS systems won’t be managed as a single effort, but instead will consist of operationally independent but connected nodes (see page 11). What this says without quite saying it is that the era of pretending that the DoD IT Enterprise is a single enterprise is over. An enterprise is by definition the people, systems, and processes that are under a single span of control. This statement is an acknowledgment that in a NetCentric, ULS-operating DoD this can no longer be the case. So, if it’s not an enterprise, what is it then?

Let’s ask the question another way. What happens when hundreds of programs that were already so large that they were teetering at the limits of central planning suddenly get connected to each other? When huge programs that at least had nice clean edge conditions suddenly find themselves having to coordinate across boundaries that were once impermeable and fixed? Do you get an Ultra Large System that is too big to plan? Or do you still have hundreds of only Really Big systems that just tipped over?

Either way it’s a communication storm in which every program manager is suddenly taxed with external communications about rapidly evolving boundary conditions (protocols, taxonomies, semantics, technologies, and so on). The speed at which those boundary conditions change, combined with the effort inherent in all that additional communication, is the tipping point into a cybernetic stall. The DoD has become the proud owner of an ultra-large system that is simply too large to survive and thrive as a centrally planned entity, but it is still trying to plan it.
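
To make the scale of that burden concrete, here is a back-of-the-envelope sketch (the program counts are purely illustrative, not real DoD figures) of how potential coordination channels grow once previously isolated programs are wired together:

```python
# Back-of-the-envelope: coordination burden among n interconnected programs.
# Isolated programs manage ~0 cross-program boundaries; once networked, every
# pair of programs is a potential boundary negotiation.

def pairwise_channels(n: int) -> int:
    """Number of potential program-to-program coordination channels."""
    return n * (n - 1) // 2

for n in (10, 100, 500, 1000):  # illustrative program counts only
    print(f"{n:>5} programs -> {pairwise_channels(n):>9,} potential coordination channels")

# 10 programs yield 45 channels; 1,000 programs yield 499,500.
# Planning capacity grows roughly with n, but the coordination load grows with n^2.
```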

Today the Army struggles to respond to this problem (within the still-limited scope of the Army enterprise) with its Software Blocking process. This is an effort to align all of the Army’s related systems into a single release schedule (i.e., blocks). Essentially it attempts to abstract the planning process so that it can span many large programs. But the reality is that what was once a hard problem is becoming an intractable one, and while software blocking can help improve interoperability across the Army’s systems, it does so at the expense of speed. To stay in sync, all of the systems in a block have to keep pace with the slowest runner. A better plan would probably be to adopt emergence at the scale of each of those systems like Amazon has, with a combination of architecture, organization, and incentives.
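
The “slowest runner” effect is easy to see in a toy model. Here is a minimal sketch; the program names and cycle times are invented for illustration, not taken from any real block:

```python
# Toy model of a software block: everything in the block ships together, so the
# block's release cadence is gated by its slowest member.

block_cycles_months = {   # hypothetical programs and their standalone release cycles
    "fires":      6,
    "logistics":  9,
    "intel":     12,
    "maneuver":  24,
}

block_cadence = max(block_cycles_months.values())
print(f"The block can release, at best, every {block_cadence} months.")

for program, cycle in block_cycles_months.items():
    # Months of delay each program absorbs per release by waiting for the block.
    print(f"{program:>10}: {block_cadence - cycle} months of added latency per release")
```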

In a more general sense, Software Blocking is part of a broader organizational reaction to the stall. It’s a reaction that seeks to exert more and more control in the face of uncertainty. Acquisition processes add steps to wring risk from the process, SEI issues updates to CMM that require more process documentation and evaluation, and old-timers wax nostalgic about the days “when people would just take orders and do what they were told.” What those old-timers are really nostalgic for are the days of clear lines of control in the hierarchy. As the hierarchy is replaced by a network, both the clarity of accountability and the control that goes with it are lost.

In the military’s culture of order giving and taking, pushing for even more control in the face of failing programs is only natural, but it isn’t going to fix the problem. It’s like trying to sidestep Heisenberg’s uncertainty principle by squinting really hard when you look at an electron. It’s a fool’s errand, a Paradox of Control. Because it turns out that once you’re in a cybernetic stall, trying to de-risk with more planning and control will just make things worse. By the very nature of software, the slower you build it, and the more complete you try to make your plans before you start, the more real risk you create. Does anyone believe that the JCIDS process is enhancing warfighter effectiveness?

The web, unlike the expanding DoD super-enterprise, has always been emergent. Since it was never really under anyone’s control it demonstrated emergence from the beginning; it didn’t need anyone to let go of control for it to get that way. As a result it has experienced a great deal of innovation – innovations that usually start small and in large numbers and are then culled as they grow along a power-law curve. The exact technologies of the web, or even the exact attributes that comprise its emergent ecosystem, might not be the right ones to make the DoD emergent. After all, the DoD has different operational and environmental conditions. However, we would be dumb not to look to the web for ideas for the defense IT enterprise.

This is a conversation with consequences. Despite how it sounds, we’re not just bantering about esoteric theory. Let’s continue for just a moment with the analogy between the defense IT enterprise and the broader economic activity of the state. From The Road to Serfdom:

“It is no exaggeration to say that if we had had to rely on conscious central planning for the growth of our industrial system, it would never have reached the degree of differentiation, complexity, and flexibility it has attained. Compared with this method of solving the economic problem by means of de-centralization plus automatic coordination, the more obvious method of central direction is incredibly clumsy, primitive, and limited in scope. That the division of labor has reached the extent which makes modern civilization possible we owe to the fact that it did not have to be consciously created but that man tumbled on a method by which the division of labor could be extended far beyond the limits within which it could have been planned. Any further growth of its complexity, therefore, far from making central direction more necessary, makes it more important than ever that we should use a technique which does not depend on conscious control.” (Emphasis added by me.)

China had its Deng Xiaoping and Russia had Beria. Both of them saw (at different times) that central planning was leading to economic disaster. Deng started China’s economy on the road to growth by permitting markets and encouraging the emergence that goes with them. Beria, on the other hand, didn’t survive the post-Stalin power shakeout, and as a result the Soviet Union just kept five-year-planning itself into oblivion.

The Defense IT Enterprise faces a similar crossroads. The scale of the super-enterprise (and the ULS landscape inside it) exceeds the limits of central planning, yet the DoD is still trying to plan itself to success. The result is failing programs, a focus only on the big stuff, and obvious (and growing) digital serfdom for our troops on the ground. In general they are less connected and more constrained in their use and development of IT than their third-world adversaries. And to add insult to injury, when they show up at disaster relief sites, they often find that the NGOs have better situational awareness on the ground. Meanwhile the systems engineering machine keeps grinding out yesterwar’s systems.

This raises the question: who is the DoD’s Deng Xiaoping? Who will break it out of the Paradox of Control and recognize that, along with the planned “enterprise-like” components, there must be vast spaces of facilitated emergence? Most people doubt Mao meant it when he said to let a hundred flowers blossom, but the Defense IT establishment needs to embrace the notion in its policy and create an ecosystem that supports emergent behaviors, or continue to watch large-scale system development falter, money get wasted, and our troops fight in digital squalor.

• • •

The Army, the Web, and the Case for Intentional Emergence


Lt. Gen. Sorenson gave a Higher Order Bit talk at the Web 2.0 Summit in San Francisco back in November. I didn't make it to the Summit this year but I'm glad I got to see the video. 

I'm glad General Sorenson is thinking about how the Army's systems and methods can be improved with Web 2.0 ideas and technologies, but I wish the Army would go after the truly fundamental benefit of the Web: the fact that it is a platform that supports emergence. It's not just about the specific technologies, it's about the ecosystem of technology, economics, policy, and culture that supports rapid innovation on a generative platform.

I think the Army can unleash a wave of innovation at the edge by replicating the web's generativity on the battlefield, and a couple of California National Guard guys I met have proven it. They managed to get a single Linux box authorized for the SIPRNET in theater and quickly used it to build a collection of web applications called the Combat Operations Information Network (COIN) that scratched a bunch of itches for their unit.

As simple as it was (a single underpowered Linux machine), once COIN was on the network it became a generative node; people lined up to get other problems solved, and it is now widely used across the theater.

I tell the rest of the story about my reaction to General Sorenson's talk and how the Army's Battle Command System can support innovations like COIN here at Radar. I'll just link to it rather than cross posting the rest of it.


• • •

“DISA Inside” – NCES should be a SaaS Appliance


The title of this post is not part of my secret plan to obscure meaning through the liberal use of acronyms. It really just came out that way.

Here’s what the acronyms mean:

DISA = Defense Information Systems Agency. The once and future DoD phone company, now also responsible for things like enterprise application hosting, service-oriented architecture (SOA) infrastructure, and the command and control systems that will use it. Today DISA’s center of gravity remains firmly in the data center.

NCES = Net-Centric Enterprise Services. The SOA infrastructure that I mentioned above, plus things like enterprise chat and search. Its current state has a high PowerPoint-to-compute ratio.

SaaS = Software as a Service. I know you already knew that one but I felt compelled to include it for completeness.

So, now on to the meat of the post.

If DISA is going to have relevance outside of the data center, and therefore relevance to the warfighter, it needs to have an impact on the experience at the warfighting network edge. Today’s data center focus, combined with the realities of network availability at the edge, makes that unlikely. NCES core services of messaging, security, search, and the like are simply going to be useless to the warfighter at the end of an intermittent or low-bandwidth network. This is really nothing new, as everyone already recognizes the difficulty of supporting shipboard command and control systems with remotely provisioned NCES services. What’s missing today, though, is a focused strategy for achieving relevance at the edge.

This article’s mention of SaaS appliances reminded me of a conversation I had with DISA engineers about two years ago. The gist of the conversation was “what if you guys were to think like a company and stretch NCES out to the edge by building (or commissioning) a line of NCES appliances as a combined product and services offering? You could build them in two or three common form factors so that they could fit into server racks (like in a Tactical Operations Center) or into a vehicle (in an Integrated Computer System form factor). Then design them so that you could add value by remotely providing systems management, domain-spanning messaging, and things like that. Finally, paint them white and put big blue ‘DISA Inside’ logos on them so everyone knows they came from you.”

The appliances would allow for modular software deployment (including messaging and message routing, mediation, information assurance functions, search, and so on) either as different appliances or as independent functions within a single appliance, depending on the deployment environment. Relying on local caching, an appliance would serve as a complete (if narrowly aware) NCES environment on the battlefield when intermittent network realities left it disconnected from the borg, but it would intelligently re-sync when connected.
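
To make the disconnected-operation idea concrete, here is a minimal store-and-forward sketch. Everything in it is illustrative; the function names and data shapes are placeholders I invented, not any real NCES interface.

```python
import queue

# Sketch of an edge appliance that serves local users from a cache and
# re-syncs with the enterprise when an intermittent link comes back.
# All names here are hypothetical; this is not a real NCES API.

local_cache: dict = {}      # last-known-good data, always available at the edge
outbound = queue.Queue()    # locally produced updates awaiting synchronization

def enterprise_reachable() -> bool:
    """Stand-in connectivity probe; a real appliance would test its uplink."""
    return False            # assume we're disconnected in this example

def forward_to_enterprise(topic: str, payload: dict) -> None:
    """Stand-in for the real uplink call."""
    print(f"synced {topic} back to the enterprise")

def publish(topic: str, payload: dict) -> None:
    """Local consumers are served immediately; the enterprise catches up later."""
    local_cache[topic] = payload
    outbound.put((topic, payload))

def sync_once() -> None:
    """Drain queued updates while the link is up; otherwise keep working locally."""
    while enterprise_reachable() and not outbound.empty():
        topic, payload = outbound.get()
        forward_to_enterprise(topic, payload)

publish("blue-force-track/42", {"lat": 33.3, "lon": 44.4})
sync_once()   # a no-op while disconnected; catches up when connectivity returns
```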

Done right, I can imagine all kinds of programs spec’ing such an appliance as both a core piece of local infrastructure and the bridge back to the enterprise. Army Battle Command System, the USAF Objective Gateway, Future Combat Systems, shipboard command and control systems, and the USAF Air Operations Center, just to name a few, could all use common services packaged this way, services whose enterprise awareness seamlessly expanded and collapsed as network availability allowed.

So what do I think “done right” means? I think it means an adoption-friendly appliance line based on an open stack that all of those consuming programs can inspect and contribute to. I imagine a DISA-managed collaborative ecosystem of related open communities delivering the stack and service components into a related ecosystem of appliance hardware vendors. In a perfect world the collaborative ecosystem would be market driven and would span the DoD’s garden walls. By avoiding excessive gating it would serve as an effective two-way technology transfer mechanism into and out of the defense establishment: stack components coming in, technologies like context-aware message routers flowing out.

I know this probably sounds as crazy to the DoD establishment as it does mundane to the open source world outside it. It’s all a matter of perspective, I guess. You have to dream it to do it.

• • •

New Open Source Project Supports Contextual Collaboration


After about a month of preparation I’m thrilled to announce a new open source project to build support for contextual collaboration. The project is called rVooz (a contraction of rendezvous) and can be found at www.rvooz.org.

From the rVooz web site:

rVooz is a software suite designed to make contextual connections, or “contextions,” between people who may or may not have a priori knowledge of each other. It is designed to bring people together even if they don’t have each other in their buddy lists or know each other’s phone numbers.

The rVooz suite consists of software clients that post context, a Salient Server which finds context matches, and Voozers that coordinate the connections by distributing presence or starting sessions…

To understand what this means, imagine looking at a web page and seeing all of the other people looking at that web page added to your IM client buddy list in real time (and removed when you leave). Or, in a military context, imagine that you are reviewing an airspace, a target, an area of interest, or some other context, and all of the other operators working the same context (in whatever system they are using) are dynamically added to your buddy list. This is just the beginning: in addition to “contextual dynamic presence,” rVooz may also be leveraged to dynamically establish VoIP sessions without any of the parties knowing each other’s phone numbers or SIP addresses in advance.
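
For readers who want to see the core idea in code, here is a toy sketch of the kind of matching a context server could do. It is purely illustrative and is not the actual rVooz or Salient implementation or API.

```python
from collections import defaultdict

# Toy contextual-presence matcher: clients report the context they are working
# (a URL, an airspace, a target ID, ...) and anyone sharing that context is
# surfaced to them. Not the real rVooz/Salient code.

context_members = defaultdict(set)   # context id -> users currently "in" that context

def post_context(user: str, context: str) -> set:
    """Register a user against a context and return the others already there."""
    others = set(context_members[context])   # everyone already working this context
    context_members[context].add(user)
    return others                            # candidates for dynamic buddy-list entries

def leave_context(user: str, context: str) -> None:
    """Drop the user; their entry disappears from others' dynamic presence."""
    context_members[context].discard(user)

# Example: two operators open the same airspace and get a "contextion."
print(post_context("operator-a", "airspace/ACO-1234"))   # set(): nobody else yet
print(post_context("operator-b", "airspace/ACO-1234"))   # {'operator-a'}
```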

The project is interesting as it may be the first free and open source project funded from day one by the Department of Defense (or it might not be, hard to tell!). It is also interesting because it was chosen to be as appealing to a non-DoD audience as it is to the DoD. If this turns out to be true (I am sure hoping so) it will open up a really interesting chapter of collaboration between two seemingly completely different domains.

The project is brand new but has the beginnings of the “Salient” back-end service in place and has started a “voozer” that works with the OpenFire Jabber/XMPP server. We’re hoping that as we continue to build out Salient the community will help us develop voozers for a variety of collaboration environments.

If it sounds interesting stop by www.rvooz.org and check it out.

• • •

DISA Announces $2.5B Fund for Netcentric Transactions


In an effort to create a market dynamic to encourage NetCentricity within the Department of Defense Command and Control community, the Defense Information Systems Agency announced today that the $2.5B Net Enabled Command and Control program has been reprogrammed as the Defense Information Mobility Encouragement (DIME) fund.

The DISA press release describes DIME as a new approach to achieving NetCentricity; one where market incentives similar to those found in a commercial market will replace the centralized-program-oriented approach taken to date. It is a simple concept: beginning in FY08, DIME will pay $0.10 to any C2-related Program of Record for each and every service-oriented transaction that it supports during the five-year period of the original NECC increment one. The idea is to encourage NetCentricity while leaving the door wide open for innovation by creating an additional funding stream for those programs that achieve adoption for their services.

Despite the bland bureaucratic language of the release, this is an amazing announcement. It is an unprecedented admission of the value of market forces in guiding co-evolutionary systems development in an enterprise too large to centrally plan effectively. While DIME doesn't eliminate policy and requirements such as Net Ready Key Performance Parameters, it fundamentally changes the drivers for achieving compliance. The faster a program operationalizes services, the faster it can start servicing transactions and get paid. Though the NRKPPs will still be verified, their spirit will primarily be tested by transaction adoption and volume.

DISA doesn't say it, but I suspect that they are also hoping that this approach, by being a funds multiplier to programs that are serving a broad customer base, will reward well-managed programs at the expense of those that don't grow their base.

I don't think it will take long for a more granular payment scheme to evolve as this approach will clearly benefit high frequency services such as situational awareness more than lower-frequency capabilities such as planning. I'm sure DISA will have to be nimble to evolve the program as high transaction rate designs are floated to game the fund, but despite these nits, I applaud DISA for taking a step that recognizes a dime of incentive can be more effective at achieving their goals than pounds of policy.

Implied in the new fund is the sheer scale of DISA's expectations; $2.5B will pay for 25 billion transactions.

🙂

Update (11/6/07): I guess the smiley face wasn't enough, so it's time to come right out and say that this post is a farce. There is no "DIME" program, but I can't help thinking that a little bit of Adam Smith's invisible hand would go a long way toward reducing the need for complex centralized planning as we move toward NetCentric systems. The question is, what simple incentives might make viable substitutes for the missing market economy that serves as that hand throughout the rest of our economy? Note the comment on centralized planning here.

• • •

Hey DoD, Enough About SOA Already!

I know I’m about to dis a sacred cow, but here goes…

There is this widespread fiction in the DoD that if all the IT systems just get SOA religion, seamless interoperability and operator nirvana must follow. I hear it over and over: “if the 3rd party applications or ‘capability modules’ ‘expose their data’ via SOA interfaces, we’ll be able to readily put together composite mission threads, workflows, applications, and so on.”

So… if I have a command center full of independent client-server applications today, written in different languages and built on a variety of DBMS implementations, how do I modernize them to meet the user’s real needs? Answer: “Make them SOA.”

Nope. Insufficient answer.

Adding web services to an aged client-server application, one that instantiates an obsolete process and has no ability to participate in a composite application environment, does not create NetCentricity (whatever that is, exactly). The broad trend of opening up application stovepipes via service orientation to permit much wider and faster information sharing is absolutely critical, but it isn’t by itself sufficient.

Yet architects and their architectures seem to be fixated on “NetCentricity” as manifested in the term “SOA,” while all the other things that go into making a viable enterprise seem forgotten, including the actual meaning of the term NetCentricity. In reality, there is an entire stack from silicon to OS to middleware to application frameworks that supports a successful capability (plus other non-stack things like a consistent and effective user experience). Somehow we seem to have forgotten all of that in our rush to genuflect before the SOA god.

If “access to the data” were all that mattered, we could solve that problem by storing everything in one big spreadsheet on a network drive (rumor has it, that’s been done); but obviously, there is more to an effective capability than that. And regarding the definition of NetCentricity… I think you’ll know you are NetCentric when, with the support of technology, you fight differently, not when your already existing processes have simply been automated.

I know some people will say all the other stuff besides SOA is already being accounted for in the available guidance. Well, that may be true, but the slides I keep seeing rarely seem to mention anything but SOA. It’s as if user applications are going to spring fully formed from the silicon as soon as this ambitious but ambiguous SOA materializes. “We won’t build applications anymore, we’ll build web services and operators will consume them.” No, they won’t; SOAP is for machines.

Maybe a concrete example is in order. Let’s say one of the services wants to build a new airspace management and de-confliction capability. Today what would most likely happen is that a “three-tier in two-box” (UI/App + DBMS) application would be delivered to run on a workstation (i.e., significant client code running on a user’s workstation).

It probably wouldn’t be shipped as a disk; it would be shipped in a green plastic shipping container with all the servers, monitors, and maybe even networking gear that was required. If there were a requirement for a “thin client,” the application layer might also support a web client, but that would still most likely be provided from that one box running its own web server.

To meet existing “NetCentric” requirements, the client workstation, running both the presentation and “middle” tiers, would probably just expose some web services for things like “get airspace” or “request de-conflicted flight path” (or maybe “run this query: query string”). These services could run from a single, non-horizontally-scaling, workstation-class box on the command center floor while also serving the needs of the primary local user, and still satisfy NetCentric guidance today.
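
To illustrate just how thin that kind of compliance can be, here is a caricature of the pattern: a single-workstation application that bolts on an HTTP endpoint answering “get airspace” straight from its local database. The endpoint name, schema, and data are all invented for illustration; this is not any real program’s interface.

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# The application's private, single-box database (in-memory here for the demo).
DB = sqlite3.connect(":memory:", check_same_thread=False)
DB.execute("CREATE TABLE airspaces (id INTEGER, name TEXT)")
DB.execute("INSERT INTO airspaces VALUES (1, 'ACO-1234')")

class GetAirspaceHandler(BaseHTTPRequestHandler):
    """Checkbox 'NetCentricity': expose the data, change nothing else."""
    def do_GET(self):
        if self.path.startswith("/getAirspace"):
            rows = DB.execute("SELECT id, name FROM airspaces").fetchall()
            body = json.dumps([{"id": r[0], "name": r[1]} for r in rows]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # One workstation, one process, no horizontal scaling, yet the data is "exposed."
    HTTPServer(("0.0.0.0", 8080), GetAirspaceHandler).serve_forever()
```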

The result of this approach would be an inability to horizontally scale the back end of this thing in a data-center-like environment (for fun, count the number of unique stand-alone workstations on a Wall Street brokerage firm’s trading floor; now count them at a DoD command center). Additionally, equipment purchases would likely be tied directly to the program, making enterprise-wide sourcing of infrastructure difficult. Integration of the capability into processes or workflows would depend on some external application to integrate them at the “SOA” level and would require significant application-level coding (you can’t “mash it up” at the user level). Finally, there would be no ready way to add components of the new capability at the presentation layer to other composite applications. In short, you get a workstation or two with one user at each one, and unless additional applications are built to consume the exposed services, no one else gets to play.

I think this also illustrates one of the issues associated with a Capability Description Document (CDD). A CDD that would be satisfied by the scenario illustrated above would probably say something like “command center must have the capability to develop an airspace plan and perform dynamic de-confliction.” It says nothing about how many users should have access to that capability or how they might integrate it into their various workflows or composite application environments. It assumes that if the capability is “in the building,” the requirement is met.

But…

What if the developer of this capability didn’t have to spend money on the basic presentation-layer functions at all? What if the capability was going to be deployed into a horizontally scalable server environment that established an application framework including geospatial presentation, microformats, JSR-168 portlets, or some Rich Internet Application framework? In other words, what if there was a focus on infrastructure to support presentation-layer integration in addition to the enterprise application integration focus usually implied by “SOA”? Or put another way, what if there was a “Facebook- or Salesforce-like platform for C2” that the capability developer was targeting for deployment?

The capability developer would then be able to focus on the core of the functionality and the key user interface components, knowing that the physical and software architecture for meaningful integration into the command center, plus a bunch of available pre-integrations to other data sources, would be provided. Build the application logic, write your widgets, integrate with the existing data sources via nice clean APIs, and deploy.
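
A sketch of what targeting such a platform could feel like for a capability developer follows. The Platform class, its methods, and the data are stand-ins I made up to show the division of labor; no real C2 framework is being described here.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Widget:
    title: str
    data_source: str                     # platform-provided feed the widget subscribes to
    render: Callable[[List[dict]], str]  # the only presentation code the developer writes

class Platform:
    """Hypothetical hosting framework: it owns data-source plumbing and presentation."""
    def __init__(self) -> None:
        self.sources: Dict[str, List[dict]] = {}
        self.widgets: List[Widget] = []

    def register_source(self, name: str, rows: List[dict]) -> None:
        self.sources[name] = rows

    def register_widget(self, widget: Widget) -> None:
        self.widgets.append(widget)

    def render_all(self) -> None:
        for w in self.widgets:
            print(f"[{w.title}] {w.render(self.sources.get(w.data_source, []))}")

# --- What the capability developer actually writes: domain logic plus one widget ---

def deconflict(airspaces: List[dict]) -> str:
    """Toy de-confliction: flag airspaces that share an altitude block."""
    seen: Dict[str, str] = {}
    conflicts = []
    for a in airspaces:
        if a["altitude_block"] in seen:
            conflicts.append((seen[a["altitude_block"]], a["name"]))
        seen[a["altitude_block"]] = a["name"]
    return f"conflicts: {conflicts}" if conflicts else "no conflicts"

platform = Platform()
platform.register_source("airspace-feed", [
    {"name": "ACO-1", "altitude_block": "FL180-FL220"},
    {"name": "ACO-2", "altitude_block": "FL180-FL220"},
])
platform.register_widget(Widget("Airspace De-confliction", "airspace-feed", deconflict))
platform.render_all()
```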

The Department certainly must continue to press for service enablement of current and planned line-of-business applications in support of enterprise integration, web enablement, and the broader idea of NetCentricity. But let’s recognize that this is only a piece of the enterprise architecture conversation. And please, as October approaches, remember that you can’t buy a SOA, even when a “SOA vendor” approaches you at the end of a fiscal year with a deal you can’t refuse.

Hmmm, I know this is getting long, but just one more thing. You know, maybe the problem is that we expect the words “SOA” and “NetCentricity” to convey too many kinds of meaning, and as a result they convey none. Enterprise integration, a style of architecture, a class of integration products, a kind of non-hierarchical human interaction, and maybe even AJAX programming styles and lightweight web interfaces are all somehow being crammed into these two words. Outside the DoD we would use many more words to describe these ideas, including at least: Enterprise Integration (MOM, ETL, ESB, …), SOA, Web 2.0, Collaboration, Social Networking, content syndication, Rich Internet Applications, Enterprise 2.0, and so on…

After all, the way we think is shaped by the words available to us. Oceania introduced (and kept pruning) Newspeak to make its citizens dumb. Limiting ourselves to SOA/NetCentricity-speak is doing the same thing to us.

• • •