(created 1/20/2007)

I’m sitting in the middle of Missouri, on a visit with my son. And it’s snowing.

Now some of you probably know I live in the Silicon Valley, where we seldom see snow. (Well, we did get some last year, but that was just a mean, spiteful, ice-pellet type of snow, out to do mischief to our wonderful freeway system <sarcasm intended>.)

This is a light, fluffy snow, wet enough to outline the trees, dry enough to blow around and fill the hollows.

I was struck by the concept of snow as a metaphor for standards. Every flake is different (so they say; I haven’t personally checked every flake), yet every flake works together to create a unified, encompassing structure.

Given the announcement of the merger of FSG and OSDL, this seemed particularly apt. The effort to encompass the breadth of Linux distributions (or GNU/Linux distributions, for the FSF crowd), with their infinite variety, seems as futile as the idea of individual, infinitely variant crystals covering the ground. And yet, they do, and they are.

Those of us who remember the UNIX days know firsthand the risks of variant operating environments with no easy interchange. We saw the risks of clumping only with our own kind, as if we could cover the world with a single crystalline form. Hindsight again, but hopefully we learned from it.

Linux faces a similar challenge. As with UNIX, the potential of Linux distributions is infinite. (Anyone know the number of Linux distributions around these days?) Yet the basic ability to exchange, interchange and work together lets Linux cover more ground than an equivalent number of silo-focused versions.

The issue that FSG and OSDL need to drive is how to ensure that development and deployment can become independent of the political and religious wars that so often break out around operating systems. I (and my company) really want to build once and QA everywhere, since run-everywhere is not actually a feasible commercial strategy. Today I’m still building for this major distribution, then rebuilding for that one, and getting lost in a code-management maze of twisty little passages, all alike.
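
To make that maze concrete, here is a minimal sketch (mine, invented for this post, not anything shipped) of what per-distribution branching tends to look like in a build script. It assumes the LSB lsb_release tool is installed; the distribution names and build flags are illustrative examples, not a real compatibility matrix.

    #!/usr/bin/env python
    # Illustrative only: the per-distribution branching described above,
    # reduced to a toy build script. Assumes the LSB "lsb_release" tool is
    # on the path; distro names and flags are invented examples.
    import subprocess

    def target_distro():
        """Ask the LSB tool which distribution we are building on."""
        out = subprocess.check_output(["lsb_release", "-si"])
        return out.decode().strip()

    def build():
        distro = target_distro()
        # Every branch below is another twisty little passage to maintain and QA.
        if distro == "RedHatEnterpriseServer":
            flags = ["--with-rpm", "--compat=rhel"]
        elif distro == "Ubuntu":
            flags = ["--with-deb"]
        else:
            # The "build once" hope: fall back to a common LSB baseline.
            flags = ["--lsb-baseline"]
        print("building with", flags)

    if __name__ == "__main__":
        build()

The point isn’t the particular flags; it’s that every extra branch is another distribution to build, test and support.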

I for one have long applauded the efforts of the FSG, including being an independent member and driving the companies I work with to join when it made sense. I similarly applaud the efforts of OSDL in defining a centralized view of the Linux world in thought, code and technology.

I only hope that the combination can become even more powerful, and help create the unified, varied shapes that will help cover the world.

As always, comments welcome


“When programmers on the Internet can read, redistribute and modify the source for a piece of software, it evolves. People improve it, people adapt it, people can fix the bugs. And this can happen at a speed that, if one is used to the slow pace of conventional software development, seems astonishing.” – The Open Source Initiative

The above shows why people care about open source, from a technical viewpoint. What’s missing are the companion thoughts that should go into the value of open source.

First, let’s review:

Open source rules are, simply enough:
Allow free redistribution.
Allow source code access.
Permit derived works.
Protect the integrity of the author’s source code.

These are pretty darn clear. (And debatable, but that gets us into licensing hell).

So what are the rules for using open source from a corporate view?

They are even easier.

Give credit where credit is due.

If you use someone else’s program, work, code fragments, or anything of the ilk, give them credit. This should actually go beyond open source; it’s a good rule for life. For example, back at Cassatt Corporation we built a complex automation control system that makes use of other programs, so we created a “thanks to” page. (Don’t ask why it’s in the legal section.) If you notice, it’s lengthy, yet it clearly recognizes that using open source for development, delivery and so forth reduces the development complexity of Cassatt’s industry-leading datacenter automation tools.

Return value equivalent to what is received.

Note, this doesn’t say “Open source your products” or “All your base are belong to us”. It means that if you get something, give something back. It can be in recognition, in other projects, in hosting (live or web), or in opening other code. At SGI, when we decided to adopt Linux, we also decided to identify and release code that would add value, such as XFS. We didn’t open IRIX (the operating system), since it was encumbered with OPS (Other People’s Stuff): trademarks, copyrights, and licenses.

Today, many, many companies make use of quality code that they received from what was, in the end, an open source effort. Let’s hope that somewhere, the non-zero-sum game that makes up open source usage is being played both competitively and cooperatively.

As always, comments welcome.

Over the past few years, many companies have decided, for various reasons, to release programs into the wild world of open source. A few have done it well; more have flopped.

So, in the interest of goodness, I’ve decided to post my list of things to think about, and hopefully get some feedback on what else should be included, ignored, or expanded; you get the idea.

So, with little ado:

Open Source Decision Criteria
1.0 Does anyone care?
1.1 Who cares internally
1.2 Who cares externally

2.0 What is the reason for open sourcing this?
2.1 Increase business
2.2 Drive new development
2.3 Be a good citizen

3.0 Who owns the project
3.1 Who is the gatekeeper
3.2 How are developments accepted and merged
3.3 Who will host this project

4.0 Open source isn’t free.
4.1 Is the company willing to pay the tab
4.1.1 Engineering/gatekeeper/integration
4.1.2 Marketing/Promotion
4.1.3 Legal

5.0 What are the legal issues
5.1 Is the code IP clean (or acceptable)
5.2 Is the code dependent on other products
5.3 What license is being considered
5.3.1 Is there a dual-use license scheme available

6.0 What shape is the code in
6.1 Are you willing to have others review the code
6.2 What’s the current known buglist like
6.3 How long would it take an average programmer to get up to speed on the code
6.4 Is there any documentation

7.0 What’s unique to this program
7.1 What are the unique features
7.1.1 And again, who cares (and why)
7.2 What other projects are targeting/delivering this
7.2.1 Would you consider using one of them if your own project didn’t exist
7.2.2 Have you had discussions with any of them on merging

8.0 Does the product have commercial interest
8.1 Will you create competition
8.2 Is there a potential SaaS or services tie-in
8.3 Will you continue to offer the project directly as a commercial entity
8.3.1 Have you thought (again) about dual license
8.3.2 Do you plan to extend the capability outside of the open source arena

Taking a product/project from a closed world to an open one is never trivial. If you can’t at least answer the above, don’t bother going there.
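
For what it’s worth, here is a toy sketch of how a team might track its answers before an open-sourcing review. The section and item titles mirror the list above; the data structure, function, and sample answer are hypothetical illustrations, not any real review tool.

    # Toy sketch: track answers to the open source decision criteria above.
    # Only two sections are shown; the rest would follow the same pattern.
    CHECKLIST = {
        "1.0 Does anyone care?": [
            "1.1 Who cares internally",
            "1.2 Who cares externally",
        ],
        "4.0 Open source isn't free": [
            "4.1.1 Engineering/gatekeeper/integration",
            "4.1.2 Marketing/promotion",
            "4.1.3 Legal",
        ],
        # ... remaining sections elided ...
    }

    def unanswered(answers):
        """Return every checklist item with no recorded answer yet."""
        return [item
                for items in CHECKLIST.values()
                for item in items
                if not answers.get(item)]

    # Example: a review that has answered only one question so far.
    print(unanswered({"1.1 Who cares internally": "the storage team"}))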

I also suspect there are many other things that should be considered… let me know.

Back in those heady days when evangelizing Linux and open source was still exciting, I almost got lynched for a simple statement at a Linux conference. The statement: “Linux will only be important when no one cares.”

Hindsight being what it is, I’d like to think I was right, at least on servers and in embedded devices. Let’s hold judgement on desktops for a moment.

Linux is now just expected. Kind of like VMS in the ’80s and Windows in the ’90s: if you aren’t doing Linux, you are ignoring a significant and growing part of the market.

So what, you say? (And you’d be right… see, no one cares).

Well, the issue facing Linux is new and increasing confusion about what Linux is. In the last year as a consultant, I’ve met with five companies that wanted to know: 1) which Linux to develop for, and 2) how to get out of the loop of maintaining multiple, development-incompatible Linux flavors.

Well, there’s no good answer. We still have the Linux community (the last remaining “cares” group), ready to extol the glories of their favorite distribution, be it Red Hat, Novell, Ubuntu, or my choice for a new distribution, “Britney Spears Linux” (the distro with nothing hidden). It’s not trivial for companies, especially small companies, to pick and support all of the possible choices, nor do the current “standards” cover all the possible contingencies. (But at least the FSG is trying.) BTW, big companies face the exact same challenge; they just may have more resources to throw at the problem.

Already, the picket signs are up. “Just make it open source and we’ll do the rest”. Yeah right…

Believe it or not, it costs real money to release a product to open source. It can cost real money to contain potential damage from bad implementations. For instance, say I release a device driver for a new storage device. I open source the Red Hat version of the driver, and it gets ported off to my BSpears Linux. Some unnamed company decides to use the driver and my hardware device to store their customer database, in spite of the “not supported” comments. When the device hits 80% full, the ported driver has a seizure and crashes, taking all the data with it. Guess who gets blamed… it ain’t Britney.

So, what’s the right play here? Should I go with Red Hat’s dominant market share, with Novell (a newly indentured servant of Microsoft), or with some version that no one in my market uses? Should I focus on Germany (one answer), Japan (a different answer) or the US?

A long while back (in my SGI days) I came up with a handful of questions we asked groups when they wanted to release something into open source. I should dig them up and run them as a blog sometime.

So, what’s the hindsight going to be in 2010?

as always…

There is little doubt that the buzz on “open” during the last few years has obscured the basics of openness in a wash of white noise. Please note this is not just about open source (though open source plays a role in openness); it is about a broader concept of open.

Openness in technology is a component of interactions. In short, a conversation between applications (interoperability), within components (interfaces), or even between the organic side and the silicon side (user interactions) requires openness. For open source, this conversation may include the ability to modify the conversation in ways unforeseen by its instigator. Being open really comes down to allowing access to the information necessary to take an appropriate action, or to accommodation, as in the ease of adapting to changes on either side.

So let’s consider some of the degrees of openness.

First, any conversation on open can be traced to currency. (I can hear the screams now…)

Currency doesn’t necessarily mean government-issue cash. Back in the early days of the (now historic) Open Source revolution, I was often asked (usually by executives in big companies) how Linux development got paid for. The answer, easily enough back then, was ego-dollars. Developers got to see their creations used by lots of people, got kudos for good code and lost value for bad code. Now it is often those same companies paying employees to extend and enhance open source code.

So, for a corporation, openness can be a good thing (expanded reach, expanded share) or a bad thing (devalued products, reduced profit). However, in most cases openness as a communication vehicle benefits everyone. Imagine having to purchase a separate phone for each telephone network your friends might be on. We enjoy being able to plug our coffee pot into any available outlet.

Similarly in technology, openness helps delineate how we connect. While it may extend to visibility into the implementation, source is not in itself a communication necessity. With exceptions, most of us don’t know or care how the electricity is generated; we care that we have electricity. Neither do most of us care about the choice of programming language, programming style, or reuse of the code.

Openness can exist in many layers, but for brevity I’m going to break it into a few subsets.

1. Programming interfaces: By making the communication conduits and language (values) available, programs can implicitly exchange information and interoperate. This does require a level of trust in the implementation, since what the program does internally is hidden. APIs are usually a one-sided affair; changes can occur without regard to their impact. (A small sketch follows this list.)

2. Specifications: Often you can find specifications that are available without business restrictions, from which you can build a product to manipulate or interchange data. For example, back in 1999, SGI released the specification of XFS to allow developers to understand the technology as well as develop to it. However, specifications come in two basic flavors: read-only and read-write. Read-only limits changes to the originating organization, without allowing outside input; de facto standards often fall into this realm. Read-write allows community input.

3. Standards organizations: The nice thing about standards is that there is always one to do what you want. The downside is that there are innumerable standards bodies, from industry through national to international, covering a multitude of arenas with non-standard ways of determining what a standard is and how to create one. This class falls under de jure standards.

4. Open source: Obviously the most open way of communicating is to be able to determine both the content and the intent of any message. By allowing view (and modification) of the source, open source delivers a level of openness found in no other layer. However, standardization in open source is driven only by the will of the community.
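
Here is the small sketch promised under layer 1: the caller sees only a published signature and its documented promise, while the implementation stays hidden. The class and function names below are invented for illustration, not drawn from any real API.

    # Contrived sketch of an "open programming interface": the docstring is
    # the published contract; the body is the provider's hidden business.
    class TemperatureService:
        """Published interface: celsius_to_fahrenheit(c) -> float.

        Callers program against this documented promise. If the provider
        changes behavior (the one-sided affair noted above), callers find
        out the hard way.
        """

        def celsius_to_fahrenheit(self, c):
            # Hidden implementation detail, as far as callers are concerned.
            return c * 9.0 / 5.0 + 32.0

    # A caller trusts the documented interface, not the implementation.
    service = TemperatureService()
    print(service.celsius_to_fahrenheit(100))  # 212.0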

Each of these has strengths and weaknesses, pro and con arguments. As we move forward, we’ll delve into each of these.

I want to close on my new favorite openness quote, from Arthur Kantrowitz in “The Weapon of Openness”:
“When technical information is classified,
public technical criticism will inevitably degrade
to a media contest between competing authorities.”

As always, comments welcome.