(created 1/20/2007)

I’m sitting in the middle of Missouri, on a visit with my son. And it’s snowing.

Now some of you probably know I live in the Silicon Valley, where we seldom see snow. (Well, last year, but that was just a mean, spiteful, ice-pellet type of snow, out to do mischief to our wonderful freeway system <sarcasm intended>).

This is a light, fluffy snow, wet enough to outline the trees, dry enough to blow around and fill the hollows.

I was struck by snow as a metaphor for standards. Every flake is different (so they say; I haven’t personally checked every flake), yet together they create a unified, encompassing structure.

Given the announcement of the merger of FSG and OSDL, this seemed particularly apt. The effort to encompass the wideness of Linux distributions (or GNU/Linux distributions for the FSF crowd), with their infinite variety, seems as futile as the idea of individual, infinitely variant crystals covering the ground. And yet, they do, and they are.

Those of us who remember the UNIX days know firsthand the risks of variant operating environments with no easy interchange. We saw the costs of clumping only with our own kind, as if we could cover the world with a single crystalline form. Hindsight again, but hopefully we learned from it.

Linux faces a similar challenge. As with UNIX, the potential variety of Linux distributions is infinite. (Anyone know the number of Linux distributions around these days?) Yet the base ability to exchange, interoperate and work together lets Linux cover more ground than an equivalent number of silo-focused versions could.

The issue that FSG and OSDL need to drive is how to ensure that development and deployment can become independent of the political and religious wars that so often erupt around operating systems. I (and my company) really want to build once and QA everywhere, since run-everywhere is not actually a feasible commercial strategy. Today, I still build for this major distribution, then rebuild for that one, leaving me lost in a code-management maze of twisty little passages, all alike.
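To make that maze concrete, here is a minimal sketch of the per-distribution build dance. It leans on lsb_release, the LSB-specified command for identifying a distribution; the distro IDs and build recipes in the table are illustrative assumptions, not a real build system.

```python
#!/usr/bin/env python3
# Hypothetical sketch: pick a build recipe based on the host distribution.
# lsb_release -si is real (it is specified by the LSB, one of the FSG's
# standards); the recipe table below is invented for illustration.
import subprocess
import sys

BUILD_RECIPES = {
    "RedHatEnterpriseServer": ["rpmbuild", "-ba", "mydriver-rhel.spec"],
    "SUSE LINUX": ["rpmbuild", "-ba", "mydriver-suse.spec"],
    "Ubuntu": ["dpkg-buildpackage", "-b"],
}

def distro_id() -> str:
    # lsb_release -si prints the distributor ID, e.g. "Ubuntu"
    out = subprocess.check_output(["lsb_release", "-si"])
    return out.decode().strip()

def main() -> None:
    distro = distro_id()
    recipe = BUILD_RECIPES.get(distro)
    if recipe is None:
        sys.exit(f"No build recipe for distribution: {distro}")
    subprocess.check_call(recipe)  # one more twisty little passage

if __name__ == "__main__":
    main()
```

Multiply that table by every supported release and kernel revision, and the appeal of a single standard build target becomes obvious.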

I for one have long applauded the efforts of the FSG, including being an independent member, and driving the companies I work with to join as it made sense. I similarly applaud the efforts of OSDL in defining a centralized view of the Linux world in thought, code and technology.

I only hope that the combination can become even more powerful, and help create the unified, varied shapes that will help cover the world.

As always, comments welcome.


Back in the days when evangelizing Linux and open source was still exciting, I almost got lynched for a simple statement at a Linux conference. The statement: “Linux will only be important when no one cares.”

Hindsight being what it is, I’d like to think I was right, at least on servers and in embedded devices. Let’s hold judgement on desktops for a moment.

Linux is now just expected. Kind of like VMS in the ’80s or Windows in the ’90s: if you aren’t doing Linux, you are ignoring a significant and increasing part of the market.

So what, you say? (And you’d be right… see, no one cares).

Well, the issue facing Linux is a new and increasing confusion about what Linux is. In the last year as a consultant, I’ve met with five companies who wanted to know (1) which Linux to develop for, and (2) how to get out of the loop of maintaining multiple, development-incompatible Linux flavors.

Well, there’s no good answer. We still have the Linux community (the last remaining “cares” group, ready to extol the glories of their favorite distribution, be it Red Hat, Novell, Ubuntu, or my choice for a new distribution, “Britney Spears Linux”: the distro with nothing hidden). It’s not trivial for companies, especially small companies, to pick and create support for all of the possible choices, nor do the current “standards” cover all the possible contingencies. (But at least the FSG is trying.) BTW, big companies face the exact same challenge; they just have more resources to throw at the problem.

Already, the picket signs are up. “Just make it open source and we’ll do the rest”. Yeah right…

Believe it or not, it costs real money to release a product to open source. It can cost real money to contain potential damage from bad implementations. For instance, say I release a device driver for a new storage device. I open source the Red Hat version of the driver, and it gets ported off to my BSpears Linux. Some unnamed company decides to use the driver and my hardware device to store their customer database, in spite of the “not supported” comments. When the device hits 80% full, the ported driver has a seizure and crashes, taking all the data with it. Guess who gets blamed… it ain’t Britney.

So, what’s the right play here? Should I go with Red Hat’s dominant market share, with Novell (a newly indentured servant of Microsoft), or with some version no one in my market uses? Should I focus on Germany (one answer), Japan (a different answer), or the US?

A long while back (in my SGI days) I came up with a handful of questions we asked groups when they wanted to release something into open source. I should dig them up and run them as a blog post sometime.

So, what’s the hindsight going to be in 2010?

as always…

With little doubt, the buzz about open during the last few years has obscured the basics of openness in a wash of white noise. Please note this is not just about open source (though open source plays a role in openness); it is about the broader concept of open.

Openness in technology is a component of interactions. In short, a conversation between applications (interoperability), within components (interfaces), or even between the organic side and the silicon side (user interactions) requires an open channel. For open source, this conversation may include the ability to modify the conversation in ways unforeseen by its instigator. In the end, being open comes down to access, as in access to the information necessary to take an appropriate action, and to accommodation, as in the ease of adapting to changes on either side.

So let’s consider some of the degrees of openness.

First, any conversation on open can be traced to currency. (I can hear the screams now…)

Currency doesn’t necessarily mean government-issue cash. Back in the early days of the (now historic) Open Source revolution, I was often asked (usually by executives in big companies) how Linux development got paid for. The answer, easily enough in the early days, was ego-dollars. Developers got to see their creations used by lots of people, got kudos for good code and lost value for bad code. Now it is often those same companies paying employees to extend and enhance open source code.

So, for a corporation, openness can be a good thing (expand reach, expand share) or a bad thing (devalue products, reduce profit). However, in most cases openness as a communication vehicle benefits everyone. Imagine having to purchase a different phone for each telephone network your friends might be on. We enjoy being able to plug our coffee pot into any available outlet.

Similarly in technology, openness helps delineate how we connect. While it may extend to visibility into the implementation, source is not in itself a communication necessity. With exceptions, most of us don’t know or care how our electricity is generated; we care that we have electricity. Neither do most of us care about the choice of programming language, programming style, or reuse of the code.

Openness can exist in many layers, but for brevity I’m going to break it into a few subsets.

1. Programming Interfaces: By making the communication conduits and language (values) available, programs can implicitly exchange information and interoperate. This does require a level of trust in the implementation, since what the program actually does stays hidden. APIs are usually a one-sided affair; changes can occur without regard to their impact. (A toy sketch of this layer follows the list.)

2. Specifications: Often you can find specifications published without business restrictions, from which you can build a product that manipulates or interchanges the data. For example, back in 1999, SGI released the specification of XFS so developers could understand the technology as well as develop to it. Specifications come in two basic flavors: read-only and read-write. Read-only keeps changes within the originating organization, allowing no outside input; de facto standards often fall into this realm. Read-write allows community input.

3. Standards organizations: The nice thing about standards is that there is always one to do what you want. The downside is that there are innumerable standards bodies, from industry through national to international, covering a multitude of arenas with non-standard ways of determining what and how to standardize. This class covers de jure standards.

4. Open Source: Obviously the most open way of communicating is being able to see both the content and the intent of any message. By allowing view (and modification) of the source, open source delivers a level of openness found in no other layer. However, standardization in open source is driven only by the will of the community.
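To ground the first layer, here is the toy sketch promised above (in Python, with entirely invented names): the conduit is published and callers can rely on it, but they must trust whatever sits behind it.

```python
# Illustrative only: a published interface with a hidden implementation.
# MessageChannel and _VendorChannel are hypothetical names, not a real API.
from abc import ABC, abstractmethod

class MessageChannel(ABC):
    """The published conduit: the names and value types callers may rely on."""

    @abstractmethod
    def send(self, payload: bytes) -> bool:
        """Deliver payload; return True on success."""

class _VendorChannel(MessageChannel):
    """The hidden side: free to change without regard to callers,
    which is exactly the trust problem the API layer carries."""

    def send(self, payload: bytes) -> bool:
        # Pretend transport: callers cannot see (or audit) what happens here.
        return len(payload) > 0

if __name__ == "__main__":
    channel: MessageChannel = _VendorChannel()
    print(channel.send(b"hello"))  # True, but how it got there is invisible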

Each of these has strengths and weaknesses, pro and con arguments. As we move forward, we’ll delve into each of these.

I want to close with my new favorite openness quote, from Arthur Kantrowitz in “The Weapon of Openness”:
“When technical information is classified,
public technical criticism will inevitably degrade
to a media contest between competing authorities.”

As always, comments welcome.