The Cybernetic Stall – Emergence isn’t Optional

I’ve been thinking a lot lately about Intentional Emergence in the defense IT enterprise. I advocate for emergent approaches like open source because I don’t think buckets of money can continue to make up for central planning’s failures, because it’s frustrating to work in an environment where good ideas languish, and, if I’m honest, because I think it would just be a lot more fun if the enterprise felt more like the web. Lately, however, I’ve become convinced that facilitating emergence with policy and practice isn’t just nice to have; it’s absolutely necessary. In this post I’ll try to explain why. The argument goes like this:

People cooperate to make society function. In capitalist societies we do this by combining our efforts into companies and other organizations that are centrally planned and centrally controlled. However, beyond a certain scale central planning falls down. It gets unwieldy. So, instead of having one giant company that plans and controls everything (or a socialist government standing in for it), these independently planned entities interact in a market that isn’t centrally controlled; it’s emergent. The market has some simple rules, and out of it emerge both prices and patterns of behavior.

Long before Complexity Theory, Social Physics, or Chaos Theory with all its talk of basins of attraction, and before any of the modern offspring of Cybernetics came along, Friedrich Hayek wrote The Road to Serfdom. In it he argued that centrally planned economies would always result in authoritarianism and the failure of democracy (or more generally, of freedom). The planned economies of Nazi Germany and the Soviet Union, though starting at opposite ends of the political spectrum, kindly reinforced his point by sliding into near mirror-image totalitarian states. The planners’ intentions made no difference, and the Gulag filled up with well-intentioned old Bolsheviks.

So, what does this have to do with Defense Enterprise IT?

The Defense Enterprise is huge. Whether measured in participants, dollars spent, assets, or anything else, it’s massive. In fact, it has never really been a single enterprise if the definition of “enterprise” includes the span of control of a single controlling entity. It has always been broken up into near-independent organizations with different missions, cultures, financials, and so on, to keep each sub-enterprise’s span of control manageable. Its scope and activities make it a lot like an economy filled with at least partially independent organizations.

Inside that super enterprise, individual IT systems (and system acquisitions) proceeded in near isolation. They had external requirements and policy to follow, but those changed slowly and all of the money, authority, and accountability for a given effort intersected at a single program office. The program office might have a devilishly complex task to accomplish, but at least it (mostly) controlled its own destiny.

Of course, I’m not describing a perfect world. Many of these programs were themselves very large by the standards of commercial enterprise. As program size and the controls that go with it increased, a greater portion of the overall effort was consumed by controls instead of code. As a result, acquisition cycle times and overhead have increased dramatically in the last thirty years, and so have failures. And given the lengths of time involved, by the time a system was delivered it often turned out that no one really cared about it anymore.

This has been true from the very beginning of the DoD’s IT enterprise. In fact the precedent was set by one of the earliest defense IT systems, the Semi-Automatic Ground Environment (SAGE). It was a massive undertaking that required the simultaneous invention of the first real-time, multi-user digital computer (Whirlwind) and a continent-wide radar system to go with it. Its completion was lauded as a testament to the newly devised methods of systems engineering, processes we are still living with today. What’s talked about less is that it took so long to deliver that it never served its intended purpose. By the time it was operational, strategic bombers had been supplanted by ICBMs and the need to vector fighter defense on a continent-wide scale had gone away. SAGE became a useful but very expensive air traffic control system until the early 1980s, when it was finally retired.

So, for many years we’ve had programs of such massive size that they have been pressing up against (and frequently exceeding) the limits of our ability to centrally plan and manage them. With that size has come a decline in the ratio of value achieved for the money spent while failure rates have increased. But, we could live with that because we had the richest economy on the planet and now and again, through sheer force of will, all of that systems engineering managed to spit out a useful system. Systems that, if their time had passed, could at least be used for some other related and useful purpose.

More recently however, the DoD has started a migration to NetCentricity. NetCentricity is a revolution in the use of information in warfare. If its proponents are right, it will mark a discontinuity in warfare as big as the advent of armor or air warfare. Strictly speaking, NetCentricity isn’t about technology; it is about the power of networked organizations, rapid information flow, and self-synchronization to serve as force multipliers and enhance the operational art of maneuver. Technology is a necessary enabler though, and the key point for this discussion is that NetCentricity requires systems all across the DoD super-enterprise to share information. For people to have shared awareness and self-synchronization in the cognitive domain, their information systems must be connected.

The Software Engineering Institute at Carnegie Mellon University recently published a paper on ultra large systems (pdf) that goes into great detail about the challenges that come with this evolution – from thousands of independent large (or very large) systems into a single ultra large system fabric. I think one of its most important points is that ultra large systems won’t be managed as a single effort, but instead will consist of operationally independent but connected nodes (see page 11). What this is saying, without saying it, is that the era of pretending that the DoD IT Enterprise is a single enterprise is over. An enterprise is by definition the people, systems, and processes that are under a single span of control. This statement is an acknowledgement that in a NetCentric, ULS-scale DoD this can no longer be the case. So, if it’s not an enterprise, what is it then?

Let’s ask the question another way. What happens when hundreds of programs that were already so large that they were teetering at the limits of central planning suddenly get connected to each other? When huge programs that at least had nice clean edge conditions suddenly find themselves having to coordinate across boundaries that were once impermeable and fixed? Do you get an Ultra Large System that is too big to plan? Or do you still have hundreds of only Really Big systems that just tipped over?

Either way it’s a communication storm where suddenly every program manager is taxed with external communications about rapidly evolving boundary conditions (e.g. protocols, taxonomies, semantics, technologies, etc.). The speed at which those boundary conditions change, combined with the effort inherent in all that additional communication, is the tipping point into a cybernetic stall. The DoD has become the proud owner of an ultra large system that is simply too large to survive and thrive as a centrally planned entity, but they are still trying to plan it.
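The arithmetic behind that storm is simple: with n programs that must coordinate with one another, the number of potential pairwise coordination channels grows quadratically, so the communication burden quickly outruns any linear increase in planning staff. A back-of-the-envelope sketch (the program counts are invented for illustration):

```python
# Potential pairwise coordination channels among n interconnected
# programs: n * (n - 1) / 2, i.e. quadratic growth.
def channels(n: int) -> int:
    return n * (n - 1) // 2

# Invented program counts, purely for illustration.
for n in [10, 100, 500]:
    print(f"{n} programs -> {channels(n)} potential coordination channels")
```

Going from 10 programs to 500 multiplies the potential channels by nearly 2,800 – that is the flavor of non-linearity that tips a centrally planned system into a stall.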

Today the Army struggles to respond to this problem (within the still-limited scope of the Army enterprise) with its Software Blocking process. This is an effort to align all of the Army’s related systems into a single release schedule (i.e. blocks). Essentially it attempts to abstract the planning process so that it can span many large programs. But the reality is that what was once a hard problem is becoming an intractable one, and while software blocking can help improve interoperability across the Army’s systems, it does so at the expense of speed. To stay in sync, all of the systems in a block have to keep pace with the slowest runner. A better plan would probably be to adopt emergence at the scale of each of those systems, as Amazon has, with a combination of architecture, organization, and incentives.
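The slowest-runner effect is easy to quantify: if every system in a block must release together, the block’s cadence is the maximum of the individual cycle times, and the faster programs forfeit the releases they could have shipped on their own. A toy example (the cycle times are invented):

```python
# Hypothetical release cycle times, in months, for systems tied into one block.
cycle_months = {"system_a": 6, "system_b": 9, "system_c": 24}

# Synchronized blocking: everyone waits for the slowest system.
block_cadence = max(cycle_months.values())

# Releases each system could ship over four years, independent vs. blocked.
horizon = 48
independent = {name: horizon // m for name, m in cycle_months.items()}
blocked = horizon // block_cadence

print(f"block cadence: one release every {block_cadence} months")
print(f"independent releases in {horizon} months: {independent}")
print(f"blocked releases for every system: {blocked}")
```

In this invented case the fastest system goes from eight releases in four years to two, without its own engineering getting any slower.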

In a more general sense, Software Blocking is part of a broader organizational reaction to the stall. It’s a reaction that seeks to exert more and more control in the face of uncertainty. Acquisition processes add steps to wring risk from the process, SEI issues updates to CMM that require more process documentation and evaluation, and old timers wax nostalgic about the days “when people would just take orders and do what they were told.” What those old timers are really nostalgic for are the days of clear lines of control in the hierarchy. As the hierarchy is replaced by a network, both the clarity of accountability and the control that goes with it are lost.

In the military’s culture of order giving and taking, pushing for even more control in the face of failing programs is only natural, but it isn’t going to fix the problem. It’s like trying to sidestep Heisenberg’s uncertainty principle by squinting really hard when you look at an electron. It’s a fool’s errand, a Paradox of Control. Because it turns out that once you’re in a cybernetic stall, trying to de-risk with more planning and control will just make things worse. By the very nature of software, the slower you build it, and the more complete you try to make your plans before you start, the more real risk you create. Does anyone believe that the JCIDS process is enhancing warfighter effectiveness?

The web, unlike the expanding DoD super enterprise, has always been emergent. Since it was never really under anyone’s control, it demonstrated emergence from the beginning – it didn’t need anyone to let go of control for it to get that way. As a result it has experienced a great deal of innovation – innovations that usually start small and in large numbers, and are then culled as the winners grow, settling into a power-law distribution. The exact technologies of the web, or even the exact attributes that comprise its emergent ecosystem, might not be the right ones to make the DoD emergent. After all, the DoD has different operational and environmental conditions. However, we would be dumb not to look at the web for ideas for the defense IT enterprise.
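That start-small-then-cull dynamic can be mimicked with a toy rich-get-richer simulation: many innovations launch, new adoption accrues preferentially to whatever is already popular, and a handful end up with most of the attention. This is a sketch of the general mechanism, not a claim about the web’s actual dynamics:

```python
import random

random.seed(42)  # deterministic for the example

# 200 "innovations", each starting with a single adopter.
adopters = [1] * 200
indices = list(range(200))

# Each new adopter picks an innovation with probability proportional
# to its current adoption (preferential attachment).
for _ in range(20_000):
    pick = random.choices(indices, weights=adopters)[0]
    adopters[pick] += 1

adopters.sort(reverse=True)
top10_share = sum(adopters[:10]) / sum(adopters)
print(f"top 10 of 200 innovations hold {top10_share:.0%} of all adoption")
```

No one plans which innovations win; concentration falls out of the simple local rule, which is the point about emergence.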

This is a conversation with consequences. Despite how it sounds, we’re not just bantering about esoteric theory. Let’s continue for just a moment with the analogy between the defense IT enterprise and the broader economic activity of the state. From The Road to Serfdom:

“It is no exaggeration to say that if we had had to rely on conscious central planning for the growth of our industrial system, it would never have reached the degree of differentiation, complexity, and flexibility it has attained. Compared with this method of solving the economic problem by means of decentralization plus automatic coordination, the more obvious method of central direction is incredibly clumsy, primitive, and limited in scope. That the division of labor has reached the extent which makes modern civilization possible we owe to the fact that it did not have to be consciously created but that man tumbled on a method by which the division of labor could be extended far beyond the limits within which it could have been planned. Any further growth of its complexity, therefore, far from making central direction more necessary, makes it more important than ever that we should use a technique which does not depend on conscious control.” (Emphasis added by me.)

China had its Deng Xiaoping and the Soviet Union had its Beria. Both of them saw (at different times) that central planning was leading to economic disaster. Deng started China’s economy on the road to growth by permitting markets and encouraging the emergence that goes with them. Beria, on the other hand, didn’t survive the post-Stalin power shake-out, and as a result the Soviet Union just kept five-year-planning itself into oblivion.

The Defense IT Enterprise faces a similar crossroads. The scale of the super enterprise (and the ultra large system landscape inside it) exceeds the limits of central planning, yet the DoD is still trying to plan itself to success. The result is failing programs, a focus only on the big stuff, and obvious (and growing) digital serfdom for our troops on the ground. In general they are less connected and more constrained in their use and development of IT than their third-world adversaries. And to add insult to injury, when they show up at disaster relief sites, they often find that the NGOs have better situational awareness on the ground. Meanwhile the systems engineering machine keeps grinding out yesterwar’s systems.

This raises the question: who is the DoD’s Deng Xiaoping? The one who will break it out of the Paradox of Control and recognize that along with the planned “enterprise-like” components, there must be vast spaces of facilitated emergence? Most people doubt Mao meant it when he said to let a hundred flowers blossom, but the Defense IT establishment needs to embrace the notion in its policy and create an ecosystem that supports emergent behaviors, or continue to watch large scale system development falter, money get wasted, and our troops fight in digital squalor.

• • •

Open Technology Conference Wrap Up – Where the Geeks At?

Yesterday I sat on the panel that I referred to here. I thought I’d follow up with a brief post about one topic of our panel conversation.

To start the panel we were asked “what’s bugging us?” This started an interesting conversation about some specific open source roadblocks in defense. In particular, Bdale Garbee made the point that open source projects rely heavily on personal reputation. Even when major corporations participate in open source community, it’s the reputation of the individual that determines whether and how contributions make it into the project repository. People get commit rights, not companies. This can be problematic in the defense space.

I added that many key contributors to open source projects have self-selected to participate. The ability to self-select is important to a project’s ability to find people with high levels of commitment and expertise. Look at the list of contributors on the Apache web server project, for example. While there are certainly participants who represent major corporations, I would estimate from looking at the list that at least a third self-selected. And that third is important, as it is often the source of key (and difficult to find) skills. In fact, even many of the company-sponsored contributors self-selected and were later hired because of their participation.

Unfortunately, for a variety of reasons, within the DoD it can be much more difficult to self-select to participate. In defense work every hour is accounted for and must match a specific project plan line item. Community participation often requires a contributor to assist with things that don’t have an exact corresponding work breakdown structure element from the program that is paying them. In defense work, if you don’t have a charge code, you don’t work. There’s simply less wiggle room for participation that doesn’t directly relate to the program that is funding you at that moment.

We also touched on a bunch of other issues that impact the ability to participate in or contribute to open source projects. Things like export controls, copyright, culture, etc.

These specific issues that impact defense contribution and participation have broad implications if defense is to be able to effectively leverage the work going on in open source communities. One of the things that makes open source community tick is the right to fork. Knowing that you can fork the source if the project direction deviates from your own is important to alleviating risk. The antidote to forking is community participation and the development of trust. The more you participate, and the more you develop trust, the more you or your organization can influence the direction of a project, or at least make sure that your specific needs are met. Under the rules that currently hamper meaningful participation, it is very hard for defense contractors to take part in the upstream software value chain. The result is perpetual forking.

It works like this. A defense contractor does a trade space analysis and decides that they can save a lot of money for the government by using a particular open source project, so they include it in their bid. They win and build the system using the open source component; however, they realize that they have to modify it in a few critical ways to satisfy some specific requirements. They can’t participate in the community, so their changes never get offered back and never make it into the trunk. A few years later, under a follow-on maintenance and sustainment contract, they do an upgrade of the system and, because their changes never made it into the core project, they have to repeat the work on the newest version of the open source project.
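The economics of that cycle are easy to model. Writing the modifications costs something once; under a perpetual fork, a large fraction of that cost recurs at every upstream version bump, while upstreaming trades it for a one-time participation cost. A toy comparison (every number here is invented):

```python
# Hypothetical costs in staff-months -- invented purely for illustration.
patch_cost = 12          # writing the custom modifications once
reapply_fraction = 0.6   # share of patch_cost to re-port per upgrade
upstream_cost = 6        # one-time cost of participation, review, cleanup
upgrades = 5             # upstream version bumps over the system's life

forking = patch_cost + upgrades * reapply_fraction * patch_cost
upstreaming = patch_cost + upstream_cost  # merged changes ride along for free

print(f"perpetual fork: {forking:.0f} staff-months")
print(f"upstreamed:     {upstreaming:.0f} staff-months")
```

However the numbers are tuned, the fork’s cost grows with every upgrade while the upstream cost is paid once, which is why the participation roadblocks matter.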

In the not too distant future there will probably be whole classes of software infrastructure that are effectively only available as open source. It simply won’t be economical for a proprietary software firm to compete in areas that have been completely commoditized. Therefore, it’s imperative that the Department figure out how to resolve the issues that are preventing its own people and its contractors from participating meaningfully in the communities that they will be forced to rely on.

That’s probably enough on that. There was one other thing I wanted to touch on in this post. This was the fourth year of this conference and, maybe I’m just an impatient person, but I’m getting really bored of the same old remedial conversations with a bunch of suits (full disclosure, I was in a suit too). Or as John Scott put it to me during a break, “Where the geeks at??” Too much of the conversation is still about whether or not Linux qualifies as COTS under the FAR and that sort of thing. Where are the breakout groups on open geo tools? Where’s the presentation from the guys using XMPP as a cheap messaging stack in some major program? Where are the non-DoD geeks who are attending because they are participating in an open source community that was started in defense but is now being widely used to solve all kinds of other problems? Where are the people trying to build an open service bus that will deal with the intermittent service end points you find on a battlefield? Where are the SOSCOE developers talking about how they used JXTA’s service advertising mechanisms? Etc…

It’s time to move from the basics into the advanced-course kind of stuff; the stuff you talk about when you are actually doing it. It’s time for DoD policy makers and decision makers in key programs to really start to push; push for expertise, program outcomes, and key policy initiatives that will alleviate the kinds of roadblocks we discussed (again) in our panel. In short, it’s time to stop talking about open source in defense and start using it at such a meaningful scale that next year the room won’t be full of suits, but will be full of geeks and practitioners.

• • •

Open Source in Defense: Consuming it is Nice, but Building it is Better

I’ll be participating in a panel discussion next week at the 4th Annual DoD Open Technology Conference in DC. The Panel is about open source software in defense and will be moderated by John Scott, an author of Sue Payton’s Open Technology Development Roadmap (pdf). Dan Risacher, who recently discussed the DoD CIO’s upcoming policy memo with GCN, will also be on the panel. We’ll be talking about what makes open source valuable to the department – consuming it, contributing to it, and even building it outright. We’ll also be talking about the policy, legal, and accidental process roadblocks that make it more difficult today than it should be.

Yesterday, while I was doing some preparation, I ran across this sources sought on Fed Biz Ops. It got me thinking about the down-to-earth, practical stuff that is necessary to make a difference in encouraging open source in defense. I am going to come back to the details of this sources sought in a moment.

The DoD has broken the seal when it comes to consuming open source, at least in packaged form. I’m not certain where I got this factoid, but I think the US Army is now Red Hat’s single biggest customer. But like I’ve said before, consuming open source is no big deal and really isn’t occasion for a big celebration. Where the DoD stands to gain much more value is in producing open source software.

Every industry has at least some domain-specific software needs. The stuff that makes up their industry-specific “stack” and that isn’t readily provided in the cross-industry products from the major software vendors. For example, the financial services industry depends on things like high speed messaging buses and high availability transaction monitors. Web firms use things like Perlbal, Hadoop, and of course Apache that help them build a massively horizontally scaled web presence. Telecom has specifications like H.323 and now SLEE and SIP and products built on them.

In the old days, if the industry was big enough, domain-specific software vendors would spring up to provide them with the infrastructure that they needed (e.g. Tibco’s Rendezvous). If they were REALLY big, a large software vendor might even offer a domain-specific product, or at least a version of their product.

These days though there is an alternative: Open Source Quasi-Joint Ventures. Well, nobody really calls them that, but that’s how I think of them. They are like accidental joint ventures that do resource sharing the way a traditional joint venture would, but they rely on open source licensing to keep the risk of participation low. Plus, they avoid most of the legal and ownership wrangling that happens in a real joint venture.

A great example of this approach is the Advanced Message Queuing Protocol (AMQP) (and associated AMQ implementations such as OpenAMQ). It was initiated by JPMorgan but has grown to include many large banks as participants. The banks don’t give up any competitive advantage by participating because messaging is about passing information to trading partners, but they save money by more efficiently providing for their own infrastructure.

Things like OpenID and Hadoop also fit this mold. Companies like Yahoo and Six Apart are taking active roles in funding and guiding the development of their industry-specific technology. Again, it’s far enough down in their stack that they aren’t giving up a competitive position, but they are saving money by sharing their development resources.

I don’t mean to say that these aren’t normal open source projects in every way. I’m simply making a distinction about how and why they are funded in terms of the specific needs of the industry that is funding them. By joining together to build components for an industry-specific stack and then intentionally commoditizing it within that industry, these projects seem to be filling in where JVs or domain-specific software companies might have focused before. This open source approach is better than a traditional JV though, because new participants can join up at any time and the up-front issues of starting a JV are avoided.

Back to the DoD. Defense has saved a great deal over the last decade by recognizing that it can leverage COTS hardware and software. However, it still has many unique needs for its information technology stack – the DoD operates in a different operational environment and has many specialized requirements. So, while the DoD today is beginning to consume packaged commodity open source projects such as Linux, there is still a great opportunity to steal a page from JPMorgan’s playbook and build defense-specific infrastructure as open source. The DoD builds defense-specific stack components all the time, but rarely in a way that makes it easy for other programs to adopt them (or even know about them). An open source approach would better spread the funding and would also ensure that once the money is spent the pieces could be widely used and adopted.

This brings me back to the WebTAS sources sought.

WebTAS started life years ago to simplify the process of conducting data analysis that spanned database instances and DBMSs. Over time it evolved into something like a generalized middleware and application framework for data analytics applications (and has been used in even more general applications since then). The sources sought I linked to describes a basically “business as usual” approach to continuing the program, as it plans to support continued R&D of the core framework as well as at least some of the analytics work that will be done with it (which is why top secret clearances are required).

It’s probably worth asking whether there is even still a need for a government funded program to build a database connectivity and analytics suite (especially for a program that is expected to cost as much as $300M – more than 20% of the $1.4B estimated value of the Linux operating system!). A lot of time has passed since the program was started, and there are many more commercial and open source technologies available in that space today than there were when WebTAS started. However, for the purposes of this discussion I’m going to assume that WebTAS continues to provide unique capabilities that meet DoD-specific requirements. With that assumption in place, I’m going to argue that WebTAS should be developed in the fashion of an Open Source Quasi-JV.

Because WebTAS was developed under government contract, it can theoretically be used in any government contract. The government could furnish it to a contractor as Government Furnished Equipment (GFE) for any program. However, in practice this rarely happens. In defense IT, infrastructure software tends to be used only on contracts delivered by the contractor that built the infrastructure. As an example of this tendency, all three of the projects mentioned by the sources sought, SWIC, PANACIA, and MAAP are built primarily by the same contractor that currently delivers WebTAS.

If the government is interested in getting more value out of its investments in infrastructure like WebTAS (and would like to escape the proprietary lock-in business model it is stuck with today), it needs to take concrete steps. As the sources sought indicates, the contract is expected to have a five-year term. Once that contract is issued, if an open source approach isn’t built in, there won’t be another opportunity to change the approach for five years.

So, it’s great that Ms. Payton’s office wrote the OTD Roadmap and that the DoD CIO is about to issue a clarifying open source policy. However, if I were Ms. Payton, I would take another step and have my staff engage directly with the program managers of programs like WebTAS to ensure they let contracts that directly support OTD in general and open source in particular.

For example, why not define in the contract’s CDRLs (basically the stuff that is delivered) that the vendor must establish and govern an open community and the associated code repository? Why not include award fee metrics that incent open community development and marketing – things like the number of other government programs or contractors participating in the community and using the software in their projects? While using IDIQ-style contracts, why not make awards to a range of potential contributors, so that the contract is positioned to support an ecosystem of contributors, and ensure that each of them understands the intellectual property approach that will be used? Why not split the contract to develop the infrastructure from the contract to use it for analytics work, so that a wider group of contractors (who don’t have to take on the cost of TS clearances) can participate?

The goal should be to establish rules and incentives in the contract that encourage the development of widely available open software with an effective community.

Savvy contractors will realize on their own that taking the initiative to open source infrastructure like WebTAS themselves is good for business. Assuming the code is valuable, aggressively commoditizing it will contribute to wider adoption and more opportunity – after all, these are all services businesses. However, it’s early and we haven’t reached that tipping point yet. So, if the government wants to achieve real strides with open initiatives they need to do more than provide OTD policy guidance. They need to aggressively work programs like WebTAS to establish contractual terms and incentives that will push their contractors past the tipping point before another round of long term contracts freezes progress for half a decade.

• • •

Barcamp.mil Follow Up

[barcamp.mil logo]

The inaugural barcamp.mil went down yesterday in Crystal City without a hitch and, at least for me, was a real blast. Thanks to John Scott and Mercury Computer Systems for providing space, pizza, and even some celebratory Tsingtao beer. The turnout was good and we ended up with strong tracks in GIS and DoD open source, as well as some great sessions on security (high assurance computing), cloud computing in the DoD, and enabling innovation. There were probably others, but being able to be in only one place at a time… The lunchtime session on evolving open source policy in the DoD was of particular interest. There are definitely people in the department who get it, which is heartening.


John and I both agree that we’d like to do more of these things; perhaps tied in with major conferences such as I/ITSEC, the annual DISA Partner Conference, or similar venues that draw DoD geeks together. If you are running a conference and you’d like to facilitate a simul-barcamp.mil, let John or me know.

Some people are probably wondering how it is possible to mix the barcamp ethos with the defense space. All I can say is, you should check it out. Despite the bureaucracy we deal with in this space, there is really cool work being done (and some of it can even be talked about). Beyond that, I believe there is a cultural gap developing along generational lines within the industry. We didn’t all grow up here, and we are bringing our culture, values, and methods of working with us. From agile to open source, the defense space is undergoing big changes. barcamp.mil is just another way for like-minded people to connect, share ideas, and spread the culture virus.




Industrial age mechanisms simply aren’t working anymore for the connected forces the DoD envisions. We have an opportunity to really make a difference in how tomorrow’s force is equipped, but even more importantly, how it works. Hey, it’s a democratic republic and it’s our military too.


• • •

FCS, SOSCOE, and the Big Bang

It bums me out to read statements like this one in this article about FCS/SOSCOE:

The software program “started prematurely. They didn’t have a solid knowledge base,” said Bill Graveline, a GAO official involved in the government’s ongoing review. “They didn’t really understand the requirements.”



That isn’t to say that I think FCS/SOSCOE is on track and being developed the best way; it’s just that I think these kinds of statements perpetuate the idea that software of this magnitude should be written as a Big Bang after every requirement is fully understood. There are 3,000 developers working for nine years at a cost of $6B, and the expected value curve is supposed to look like this:

[Figure: “sudden value” curve — value flat at zero for the whole development period, then jumping to full value at delivery]

Contrast this with something like Linux where value has tracked much more closely with effort:

[Figure: “gradual value” curve — value rising steadily in proportion to effort]

Why the difference? Well, unlike an open source project like Linux that slowly moves its way up the food chain from departmental web servers to mission critical applications as it matures, the government acquisition system tends to assume that at 8 years 364 days SOSCOE is an entry in an earned value report. Then, suddenly as the calendar turns over 9 years it is hatched as a fully functioning completed system ready for operational deployment. In the meantime, as it hasn’t been “delivered” it isn’t available to be used anywhere.
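The contrast can be made concrete with a toy model (the shapes and numbers here are illustrative assumptions of mine, not program data): a big-bang acquisition delivers no usable value until the final day, while an incremental project delivers value roughly in proportion to the effort spent so far.

```python
def big_bang_value(years_elapsed, delivery_year=9.0):
    """Value delivered under a big-bang acquisition: nothing usable
    until the system is declared complete, then everything at once."""
    return 1.0 if years_elapsed >= delivery_year else 0.0

def incremental_value(years_elapsed, delivery_year=9.0):
    """Value under incremental delivery: roughly proportional to
    effort invested so far, capped at the full system."""
    return min(years_elapsed / delivery_year, 1.0)

# At the 8-year, 364-day mark the big-bang program has delivered
# nothing usable, while the incremental project is nearly done.
print(big_bang_value(8.997))     # 0.0
print(incremental_value(8.997))  # ~0.9997
```

The toy model also makes the risk asymmetry obvious: slip the big-bang delivery date and delivered value stays at zero; slip the incremental project and you still keep everything fielded so far.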

I would love to see large scale software developments like this thought of in much more incremental / evolutionary terms. I’d also like to see a greater degree of transparency and openness so that incremental value could be provided along the way, even to completely unrelated programs. After all, many of the component parts of SOSCOE are lego blocks that could be readily used in other environments.

In fact, my first graph is probably completely wrong. Without the hardening that comes from incremental use it is much more likely that the budgeted 9 years / $6B grows dramatically (like it did for Vista) before the value bit can be flipped.

• • •

Building Open Source Software in the DoD

I had the opportunity to speak yesterday at the DoD Open Technology conference in DC. I proposed to the contractors in the room that they don’t need to wait for the government to force them to open code through initiatives like the Navy’s SHARE repository. I think they should improve their market positions by proactively exercising their copyright on source written under government contract and opening it themselves. ITAR aside, opening it up and intentionally commoditizing it (à la IBM and the Apache web server, circa 1997) is a good business move.

I’ll go even further: I think there is a moral obligation to offer code written under government contract as a public good whenever possible (though sometimes with a temporal shift to the right). From the gallium arsenide chips in our cell phones to the civilian nuclear industry fathered by Hyman Rickover’s nuclear Navy, there is lots of historical precedent for thinking this way (argue amongst yourselves whether Rickover’s legacy is a wholly positive one).

I’ll try to write more about this later, but for now, if you are interested, you can take a look at the reasonably self-explanatory slides below.

• • •

New Open Source Project Supports Contextual Collaboration


After about a month of preparation I’m thrilled to announce a new open source project to build support for contextual collaboration. The project is called rVooz (a contraction of rendezvous) and can be found at www.rvooz.org.

From the rVooz web site:

rVooz is a software suite designed to make contextual connections, or “contextions,” between people who may or may not have a priori knowledge of each other. It is designed to bring people together even if they don’t have each other in their buddy lists or know each other’s phone numbers.

The rVooz suite consists of software clients that post context, a Salient Server which finds context matches, and Voozers that coordinate the connections by distributing presence or starting sessions…

To understand what this means, imagine looking at a web page and seeing all of the other people looking at that web page added to your IM client buddy list in real time (and removed when you leave). Or, in a military context, imagine that you are reviewing an airspace, a target, an area of interest, or some other context, and all of the other operators working the same context (in whatever system they are using) are dynamically added to your buddy list. This is just the beginning: in addition to “contextual dynamic presence”, rVooz may also be leveraged to dynamically establish VoIP sessions without any of the parties knowing each other’s phone numbers or SIP addresses in advance.
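The matching idea at the heart of this can be sketched in a few lines. This is a toy illustration only; the names `ContextMatcher`, `post`, and `leave` are my invention, not rVooz’s actual API. Clients post a context key, and the matcher answers with everyone else currently on the same context — the candidates for a dynamically updated buddy list.

```python
from collections import defaultdict

class ContextMatcher:
    """Toy version of a Salient-server-style matcher: tracks which
    users are 'on' which context and reports co-present users."""

    def __init__(self):
        self._contexts = defaultdict(set)  # context key -> set of users

    def post(self, user, context_key):
        """User starts viewing a context; return the other users
        already on it (to be pushed to the new user's buddy list)."""
        others = set(self._contexts[context_key])
        self._contexts[context_key].add(user)
        return others

    def leave(self, user, context_key):
        """User stops viewing the context; their presence is withdrawn."""
        self._contexts[context_key].discard(user)

matcher = ContextMatcher()
matcher.post("alice", "airspace/AR-501")
print(matcher.post("bob", "airspace/AR-501"))  # {'alice'}
```

In the real suite the matcher would sit behind the Salient Server, with voozers translating these match events into presence updates for whatever collaboration system each operator is using.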

The project is interesting as it may be the first free and open source project funded from day one by the Department of Defense (or it might not be, hard to tell!). It is also interesting because it was chosen to be as appealing to a non-DoD audience as it is to the DoD. If this turns out to be true (I am sure hoping so), it will open up a really interesting chapter of collaboration between two seemingly completely different domains.

The project is brand new but has the beginnings of the “Salient” back-end service in place and has started a “voozer” that works with the OpenFire Jabber/XMPP server. We’re hoping that as we continue to build out Salient the community will help us develop voozers for a variety of collaboration environments.

If it sounds interesting stop by www.rvooz.org and check it out.

• • •

Does an OSS project’s language need to be “cool?”

We are in the process of starting up a new open source project. I think we have a pretty cool idea, and as soon as we can get our collaborative infrastructure set up we’ll be opening it up to the community. We are prepared to do most of the development if necessary, but a key measure of success for us with this project is community building. We are starting this project up on behalf of our DoD customer, and both we and our customer believe that the project should be a demonstration of the power of community. Plus, we just think there will be lots of cool things it will be used for if it is open source, and it might be a nice demonstration of government OSS as public good.

So… with that as context, last week we spent a few days kicking off the project and thinking through some key architectural issues, a development roadmap, and the like. In the process of discussing architecture it became clear that, assuming we wrote the service in Java, OSGi would make a really good framework for building it. If we weren’t doing an open source project, we’d be done thinking about it. We’d be moving forward on our first few sprints of work (at least) coding in Java in an OSGi framework. But…

…it is an open source project, and we care a lot about early and significant community involvement. So, how to consider the impact of language / framework selection on community? Does Java and OSGi make this too “enterprise integration” in feel and potentially distance us “spiritually” from important potential contributors?

In some of our other open source projects, JBI ESB components that are firmly in the “enterprise integration” world, this wouldn’t be an issue. Developers in that world would be comfortable in enterprise Java and would probably be eager to incorporate OSGi into the mix. But what we have in mind is not really in the “enterprise integration” space (at least not in my mind). In my view this project will appeal much more to the constituents of “web”, “VoIP”, and “collaboration” worlds. Outside the DoD, I think the community for a project like this is likely to be more “Web 2.0 Summit” or “BarCamp” than “Gartner Enterprise Integration” – though I can imagine at least a little bit of “Office 2.0” or “Enterprise 2.0” zeitgeist mixed in.

So, other than a bunch of gut feel BS, how much does language selection influence audience and community? That is the question we are wrestling with.

• • •

Another Business Model for Software Companies?

I attended a meeting last week with Peter Bostrom, BEA’s Federal CTO. During a discussion of open source in the DoD, he made the valid point that integrators aren’t the best suited for developing large scale software products. They don’t have the product management capability, standards body interaction, versioning expertise and all of the other product-oriented DNA that is necessary to effectively develop software products.

I’ve posted before that I think widely used infrastructure projects like the Army’s SOSCOE (the software underpinnings of Future Combat Systems) should be developed under a “funded open source model.” In the future I would like to see contracts written that still expect delivery of functionality within a particular time frame, but with additional deliverables that require the contractor to develop and host an open source community around the product.

Today traditional software product companies like BEA participate in DoD software projects rather passively. As subcontractors they do little more than provide software licenses (and perhaps some focused integration expertise) to the integrators. Would it be reasonable to believe that they could alter their business model to un-bundle their software development expertise from their products, and in the future sub to the integrators not as a mere license provider, but as the funded open source developer? In other words, could a company like BEA be hired to be the JBoss for SOSCOE or other infrastructural products?

I really can’t say whether this model is workable (especially for firms like BEA, which would have to change culturally to be successful in open source development). But I think the continued erosion of sales of proprietary middleware is inevitable, and a defensive posture just isn’t going to work in the long run. Better to go on the offensive and try things that leverage the core strengths of the firm than sit back building a Maginot Line (unless the prospect of Vichy politics suits you).

• • •

OSCON 2007 Wrap Up

If you weren’t able to attend OSCON in Portland two weeks ago you might enjoy some of the links here. Most or all of the presentation materials are online and at least the keynotes are available as video.

My favorites include Simon Wardley’s “Commoditisation of IT…” (you’ll need to download the slides to follow his jokes; they aren’t visible in the video), Steve Yegge’s extemporaneous slide-free riff on “How to ignore Marketing and become irrelevant…“, Robin Hanson’s “Overcoming Bias“, and finally… my absolute favorite, James Larsson with “Pimp my Garbage.”

Lots of people seem to be watching this one too: Simon Peyton Jones talks about Haskell and its applicability to parallel programming.

• • •