Friday, December 21, 2007

Happy Holidays!

Hope you, your family, and all your social networks, have a wonderful holiday season.

My blog rate might be a bit irregular this next couple of weeks.


Thursday, December 20, 2007

Is just enough still too much?

In How much OS is just enough?, Jay Lyman reports on the recent surge of interest in distros tuned for virtualized environments.

In addition to Ubuntu JeOS, he also mentions rPath and the recently announced plans for the Red Hat Appliance Platform as items to watch.

The Ubuntu JeOS 8.04 (Hardy) release ISO image is 130MB. This file contains a manifest, in case you're interested in what goes into JeOS. It seems to contain a few packages not really necessary for a virtualized environment. (Is wireless support necessary?)

A brief scan through the rPath prebuilt appliances shows roughly comparable sizes. The ISO image is hardly a definitive measure of the final footprint, but it puts you in the ballpark.

While I was at it, I decided to take a look at the sizes associated with EC2 instances. The small instance comes with 160GB of storage.


I'd much prefer a smaller base - perhaps I'm being a bit fussy.

With a little luck, we might even see another entrant into the arena...

Wednesday, December 19, 2007

Or you could pick up the telephone and call someone...

Om Malik chimed in on the topic of Twitter outages with Twit This: Fame Increased Twitter Downtime.

The key point is that Twitter's rise in popularity coincides with an increased number of outages.

This initially caught my attention because of the recent conversation on Premature Scalaculation.

While pondering Twitter's predicament of success, I was struck by an interesting observation.

Few people seem to be bothered by the outages. While I use twitter on a regular basis, the outages were mere inconveniences. While I have no doubt a serious degradation in the service would cause me to migrate elsewhere, I have no issues with the fact that the service is somewhat flaky. In fact, the associated outage screens seem to humanize the experience. No service? No problem - we're presented with a cute little graphic to acknowledge the fact.

The network wires around the problem. What did I do while Twitter was down? In some cases I used other communication channels. In other cases I simply did without. This is more easily dealt with when the producers and consumers are humans. Having said this, it reminds me of the dialog I had with users when doing interviews for disaster recovery planning several years ago.

While many mission-critical services deserve a sophisticated level of availability baked into the architecture, there are times when the most appropriate solution might be a system of post-it notes and telephones.

Heresy? Only sometimes.

Tuesday, December 18, 2007

Happy Perlday to You, Happy Perlday to You!

chromatic posted an article on the triple Perletary alignment that occurred today.

Parrot 0.5.1 and a Surprise for Perl's 20th Birthday
Happy birthday!!

Sunday, December 16, 2007

ok, so maybe we're at aloofix v2.0

Here's another in the Aloofix series.

hah! Big surprise! Another workbench change!

So here's the story,

I've been noodling with several different distro development tools. I like several of them, but each has at least one attribute or another that drives me nuts.

This brings us to...

Version 0.8 of my workbench.

I ditched the distro builders altogether, opting for a 'hand-rolled' distribution.

To be sure, this is most likely how the other distribution building tools were created. *sigh*

As of this evening, I now have a bootable CD, a set of installation scripts run from the CD, and a HD that boots a basic distro. It's raw, but it works.

The HD distribution currently contains the following packages:

  • linux-2.6.23
  • glibc-2.7
  • busybox-1.8.2
I'll spare you the nigglies in the TODO list.

The CD image is 20MB. Approximately 16MB of this is a gzipped cpio payload for the HD installation. The vast majority of the payload is glibc.
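
For anyone who wants to poke at a payload like this, standard tools suffice. A quick sketch - the filename rootfs.cpio.gz is an assumption, so substitute whatever your payload is actually called:

```shell
# Peek at the contents of a gzipped cpio payload without unpacking it.
zcat rootfs.cpio.gz | cpio -tv | head -20

# Going the other direction: build a payload from a staged root tree.
# The newc format is what the kernel's initramfs loader expects, too.
( cd staging-root && find . | cpio -o -H newc ) | gzip -9 > rootfs.cpio.gz
```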

Will there be a version 0.9 of the workbench? I sure hope not. With any luck, future revisions will simply be improvements in the level of automation for creating the images. Having said this...

For what it's worth, I did get a complete LAMP stack built using the previous workbench, but I wasn't in the mood to man-handle uclibc to get some of the more interesting scripting languages up and running.

Saturday, December 15, 2007

Computer history makes me feel old

For those out there into computer history,

kdawson posted a link on Slashdot that points to YouTube - Computer History Museum Channel.

The YouTube channel contains a couple dozen videos from events and lectures at the Computer History Museum in Mountain View, CA.

I'm starting to feel old. :-/

Friday, December 14, 2007

Thursday, December 13, 2007

Just a dumb blugger

Evidently blogging is ruining society.

Platypus Matt posted a link to a news article about the content of the acceptance speech from Doris Lessing, this year's winner of the Nobel Prize in Literature.

Stop Blugging You Idiots

I understand what she's trying to say, but I was stunned at some of the statements made in the acceptance speech.

I would add some coherent and insightful commentary on the speech but I think I've spent too much time blogging and blugging etc.

Wednesday, December 12, 2007

Architecture as Jazz?

Gavin Terrill has posted a nice summary of the recent premature scalability conversation in Big Architecture Up Front - A Case of Premature Scalaculation?.

I'm only mildly annoyed at the implication in the title... ;-)

No, not the second part... the first part. :-O

Any serious annoyance on my part is tempered by an acknowledgment that the canonical view of architecture is as a harbinger of large things - the choreographers for A Herd of IT Elephants.

This is truly unfortunate.

Often justified, but unfortunate.

The good news is that the big-bang, top-down approach is marked for death.

Care to hazard a guess as to why?

<longpause />

... because it doesn't scale!

This is not to say that big-bang, top-down is always inappropriate, but it's readily apparent that it tends to create non-negotiable, immutable structures.

Sadly, it will probably take years for Architecture to shed the historical dogma of Big.

The irony of the scaling topic and BAUF is that a considerable amount of my tenure as an architect has been focused on curbing some of the natural tendencies of IT organizations to create big things - big things that inherently resist scaling.

Similar to how building architecture and musical styles shift over time, our Architecture discipline seems to be modulating to a form of minimalism/JIT.

Who knows, maybe we'll see a time when admitting to improvisation isn't guaranteed to raise a few eyebrows...

Tuesday, December 11, 2007

The Simpsons and Portland

I moved to the Portland Oregon area late last year.

Consequently, I'm still learning interesting factoids about the area.

Today, a co-worker mentioned something about The Simpsons and Portland street names. She wasn't completely sure, so I did a little research.


Here you go.

The Simpsons Archive: Who's Who? In Springfield - The Portland, Oregon Connection™

Some are a bit tenuous, but what the hey.

I'll never look at Terwilliger Boulevard the same way again. :-)

Monday, December 10, 2007

A Face Only a Mother Could Love

Robert Scoble has started an Enterprise Software Foodfight.

The core topic is based around the question of why enterprise software is not well covered by bloggers and journalists.

It looks like he struck a nerve.

I've been mulling over the topic. Given that I spend my days with enterprise architecture, I even considered my own stance on blogging about enterprise software.

There are so many potential reasons. I'm guessing the reasons vary for each blogger.

Perhaps it's because many technical people see enterprise software every day at work, and long to see something with more hope.

Perhaps it's because of the age-old problem of reporting on the hand that feeds you.

Perhaps it's because many (most?) enterprise software vendors don't understand how to operate in the world of blogging. Blogging begets blogging. Press releases beget yawns. Most enterprise vendors are still struggling to internalize the read/write web.

Perhaps it's because readers don't want any more input about products from vendors who are already bombarding us with product information.


Perhaps it's because a considerable amount of enterprise software has a face only a mother could love.

Just a thought...

Sunday, December 09, 2007

Aloofix 0.1 (or whatever it's called) lives!

In Building a builder for tiny lamps, I described my progress creating an environment for experimenting with Pile of Lamps.

That article left off with plans to build a working default distro with T2, then create a new minimalist target configuration.

Well, I did manage to get a working default distro.


On to creating my own target definition.


I created a new target, started the "death by build iterations", then ran into problems. I'll spare the ugly details, but suffice it to say I spent more time surfing through piles of shell build scripts than creating an actual distro.

This brings us to version 0.7 of my workbench. :-)

I'm still running VirtualBox, but am now using Buildroot, from the uClibc folks, for the toolchain. I used uClibc and BusyBox years ago, and was contemplating their use for this project anyway.

As an added bonus, the builds take considerably less time than with T2.

So where am I at now? Well, I have a booting CD image and am working through the details of turning it into an installation disk.

The CD ISO image is just over 19MB, with an installation payload. I have plans to make it smaller. At around 4MB, the kernel modules in /lib/modules are a notable contributor to the size. The virtual machine environment provides a predictable list of required drivers, but I need to go through the exercise of trimming down the list in the kernel configuration.
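
The audit itself is nothing fancy. Here's a sketch, assuming a stock /lib/modules layout (the version directory is an assumption - substitute your own kernel):

```shell
# Total footprint of the module tree (version path is an assumption).
MODDIR=/lib/modules/2.6.23
du -sh "$MODDIR"

# The largest modules are usually the first candidates to drop
# from the kernel configuration.
find "$MODDIR" -name '*.ko' -exec du -k {} + | sort -n | tail -15
```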

As for problems, the only one I've encountered thus far is the fact that Buildroot doesn't include a boot loader in the list of packages available for target environments. Hmm... It's intended for embedded environments, so I'm only mildly surprised. I managed to shoehorn in a statically compiled version of grub, so it's no big deal.

I have a preliminary root filesystem for the hard drive installation. It still needs work, but it's enough to boot from a virtual disk image in VirtualBox.

The current distro installed on a hard drive contains the following primary elements:

  • Linux 2.6.23 (need to bump to latest patchlevel)
  • uClibc 0.9.29
  • Busybox 1.7.2 (not sure why it's not at 1.8.2)
Ok, so here's the latest plan of attack.

The first order of business is to convert my hard drive installation notes into a script that runs from the CD boot.

I've started to add some additional packages.
  • openssh
    installed, but it dumps core - investigating...
  • lighttpd
  • and a scripting language
    (sadly, the perl port is very minimal - will need to ponder)
  • and a database
    (still researching - berkeleydb and sqlite recipes are included in buildroot - will need to ponder)
As I mentioned earlier, I want to scrub the kernel configuration to remove unnecessary drivers and whatnot.

I'm also considering trimming down some of the BusyBox applets enabled by default. This is not to reduce the size - it's more to reduce the number of moving parts. E.g. fdformat and unix2dos probably aren't necessary. It's questionable whether the filesystem creation utilities are needed as well. The original concept was to only provide enough to perform the task at hand.
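
For what it's worth, that trimming is just a matter of flipping symbols in the BusyBox .config. The fragment below is illustrative only - the exact symbol names vary by version, so check them against make menuconfig before relying on them:

```
# Fragment of a BusyBox .config, trimming unneeded applets
# (symbol names are illustrative; verify for your version):
# CONFIG_FDFORMAT is not set
# CONFIG_UNIX2DOS is not set
# CONFIG_MKFS_MINIX is not set
```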

With luck I'm hoping my next status update will report an alpha release CD-based installer that produces a ready-to-use minimal LAMP instance. Fun stuff...

I haven't yet decided on a name for the distribution. The tentative name is aloofix. I'd love to hear recommendations for a better name.

More to come...

Until next time...

Saturday, December 08, 2007

Indexed to Sell

Most bloggers are familiar with gapingvoid.

Those who enjoy the business card medium might appreciate Jessica Hagy's work with index cards at Indexed.

She has a knack for the humorous chart and graph.

This one caught my eye.

I'm sure the fact that I'm trying to sell a house in Arizona was purely coincidental.

One year and counting. :-|

Friday, December 07, 2007

Thursday, December 06, 2007

Cue Hard Driving Rock Beat

A video article from CNET, Skywalker Sound secrets, got me wondering.

What would happen if I added a jamming sound track to the presentation at my next architecture pitch session?

It might even save me the need to stand up there and talk about it. Just cue the slides and let the music do the rest.

I'm oh so tempted.

/me adds a can of wet dog food to the grocery list

Tuesday, December 04, 2007


I think I've struck upon a solution for the perennial problem of creating fresh new blog content.

I call it blogging2.0.

It will leverage2.0 the latest trend2.0 by respinning2.0 everything2.0 as fresh2.0 and modern2.0. Everything2.0 I write2.0 will, by definition2.0, be new2.0. No longer will I2.0 need2.0 to be worried2.0 about use2.0 of the term2.0 2.02.0.

What happens2.0 when everyone2.0 starts to mimic2.0 me2.0?

Not to worry3.0. I'll simply change3.0 to keep up with the times3.0.

I'm sure there are a few kinks to work out. Perhaps I should call it blogging2.0-beta.



It's been an odd day. I woke up feeling nauseous but pressed on. Cracking open the car windows on the way to work seemed to help. Fresh Oregon air seems curative.

Getting to work, a co-worker and I walked over to Starbucks for a fresh cup of coffee, one of the universal remedies. It didn't help. After sending out a few email messages I went home for the day.

Upon arriving home, my wife fixed me a bagel and a 7-up. I finished those and went to bed. I fell sound asleep, waking up in the late afternoon.

Still disoriented from the mid-day sleep, I decided to read some blogs and formulate my daily post. My brain was still in a fog. My queasy stomach soured my frame of mind.

Nothing in my feed reader grabbed my attention. I was contemplating skipping the daily blog post.

Besides, it's not like there are legions of readers waiting for my next earth shattering declaration. Why am I even worrying about writing on a regular basis?

My funky day was starting to get the better of me.

Then I noticed an article.

In The hardships of being a nobody 2.0, Seth Eagelfield reminds us that A-list status is relative. It's not the numbers that matter, it's the fact that we select what we read and others select to read our work. It's all good.

If you like writing, Seth's blog provides a nice dose of micro-prose on a regular basis. It's good for what ails you.

And yes, I too couldn't resist tacking on the 2.0 doodad. Seth apologizes for his use of the suffix. Not me. I'm contemplating an all 2.0 blog posting. News at 11.

and now I'm going back to bed, hoping to wake up tomorrow ready to get back on track

Monday, December 03, 2007

Scalability is NOT an Optimization

In Is premature scalation a real disease?, Todd Hoff points to an article from Dharmesh Shah, Startups and the Problem of Premature Scalaculation.

The heart of the conversation is a question regarding how much attention should be paid to scalability in the early stages of a startup. Dharmesh suggests not worrying about scalability too early. Todd counters that scaling is no longer the exotic knowledge of yesteryear, and that the supposed travesty of devoting precious resources to scaling is overstated.

To be fair, Dharmesh is not proposing that problems of scaling be ignored. Rather, he's recommending people avoid optimizing for scale too early in the process.

It is indeed a delicate balance, as are all interesting problems in architecture and design. Besides, we've all grown up with the warning to avoid premature optimization. It's been hammered into our brains.

Here's my problem.

It's a fundamental mistake to frame scalability as an optimization problem.

Scalability falls into the non-functional requirements bucket. It keeps company with a shady cast of characters - security, maintainability, usability, and all the other *ilities.

The primary challenge with non-functional requirements is they tend to pose the risk of significant rework if not taken into account early in the architectural and design phases of a project. This is where the real skill comes in. If you're in a waterfall mode, you can hope you do an effective job eliciting an accurate picture of the non-functional requirements. If you're in an agile mode, you can hope you do an effective job refactoring the code as you evolve the idea. In both cases, the primary goal is to avoid the decision of whether to implement dramatic amounts of rework or whether to scuttle the ship.

If a particular operation needs to complete in less than 3 seconds and the initial implementation takes 30 seconds, this is not a problem of optimization - something is flawed. To be sure, you might be able to rationalize that future improvements will shave it down to 3 seconds, but most audience members would suspect breakage rather than a lack of optimization.

If a web service is targeted for a million users, the basic framework must be capable of evolving from the initial user base of two. The design is fundamentally lacking if one cannot provide a rational roadmap between these two numbers.

Optimization seldom crops up as a non-functional requirement, except in cases where initial performance is disappointing. The same cannot be said for scalability.

Ok, here's one more way to illustrate the point.

Fail to factor security into a design. Go ahead, I dare you.
Fail to factor maintainability into a design. You'll sell it before it becomes a real problem, right?
Fail to factor usability into the design. Hmmm... will that affect your user base?
Fail to factor scalability into the de...........

On the other hand, ignore scaling. It makes for minutes of entertainment on slashdot.

Sunday, December 02, 2007

Building a builder for tiny lamps

In How much of an OS distro is necessary for a Pile of Lamps, I described my basic environment for exploring ideas related to Pile of Lamps.

I've tweaked the workbench a bit, so I'm now at version 0.6. The primary difference is the addition of a dedicated build server. My laptop, while sufficiently beefy, is somewhat prone to thermal problems. The burden of lengthy compile cycles was too much, so I cobbled up a dedicated server for compiles. As an added benefit, I can continue compiling while the laptop is suspended.

The build server is running the T2 distro. The more I use T2, the more I like it. Thus far, I've only encountered one problem. The installation of perl in 7.0-rc2 appears to be missing quite a bit of /usr/lib/perl5. A forced rebuild of perl rectified the problem.

Ok, so here's the current plan of attack.

I'm initially building the default distro defined by the generic T2 recipes. This is primarily to familiarize myself with T2 and to make sure the entire tool chain works.

I was hoping to have the generic build done this weekend, but the build server took priority. The generic build is compiling as I write this blog article.

Once I have a working generic T2 target, I'll shift my attention to the creation of a new target definition. While my end goal is to create a clean recipe for the desired distro, I might need to toy around with whittling down an existing recipe until I get a handle on the T2 build environment.

Until next time...

Saturday, December 01, 2007

Buglabs no longer in wood mode

I posted a link to buglabs back in September. Meanwhile, it looks like they're making progress.

They've posted some product images.

Robert Scoble posted a series of video interviews to his blog:

Bug Labs' really cool reconfigurable gadget in depth

This is insanely cool!

Friday, November 30, 2007

Ideas for blog articles

I've intentionally avoided writing about blogging, but an article from Dosh Dosh caught my attention.

I didn't start a blog with the intention of making money. I do, however, enjoy many of Maki's articles, so I read Dosh Dosh on a regular basis.

Maki recently posted Pattern Your Audience: How Editorial Calendars Can Increase Your Readership. The main idea of the article is that patterns in our blog postings can create expectations that tend to draw people into our blogs.

This article caught my attention for two reasons. First, approximately once a week I post an article with a gadget theme. I haven't received any feedback one way or another, but I enjoy hunting for quirky gadgets.

Second, I've participated in some conversations recently on the topic of whether to blog even if one doesn't have something interesting or unique to say. Perhaps this material will provide inspiration (for myself and others) for some interesting article content.

Here's a reproduction of Maki's list of suggestions with some of my own thoughts relative to this blog.
  1. Interviews - Smoothspan's Bob Warfield has some interesting interviews. It's been tempting to do something similar. I might consider something like this in the future.
  2. Feature Story - I've considered writing some longer articles. This is another one for future consideration.
  3. Columns - nah
  4. Reader Quiz/Q and A - Quiz: What's the square root of blue? Hmmm... maybe not...
  5. User Profile Highlight - I like this idea. If I do it, I might combine it with the interview idea listed earlier.
  6. Videos/Podcasts - I've contemplated a {pod,vid}cast. My current range of topics is probably not suitable for this type of activity. Feel free to cajole me into action.
  7. Free Reports - Sorry, this reminds me too much of my day job.
  8. Industry Roundups - see previous item
  9. Meme Days - this might be interesting
  10. Reviews - If you are a vendor, feel free to contact me for information on submitting products for review. ;-)
  11. Reader/User Polls - It's rude to refer to users as poles. Trust me, I've received correction on this.
  12. Website Highlight - I generally prefer to post links to interesting sites.
  13. Application Launch - Hmmm... Do architects code?
  14. Weekly Comic Strip - Probably not - my preferred artist is somewhat busy.
  15. Summary of Performance - only if we're talking about merit increases and yearly bonuses...
  16. User-Submitted Content - getcherownblogdude! :-P
  17. Monthly Contests/Deals - hmmm... perhaps this might be a way to reduce my spare parts inventory. :-)
  18. Monthly Post Digest - nah - this brings back memories of mailing lists delivered over uucp...
  19. Article Series - expect to see a few of these. I've recently started some mini-distro work that will be posted in a series of blog articles.
My pithy remarks notwithstanding, many of these are interesting ideas for creating topics for blog articles. Some readers might find some inspiration of their own.

As always, feel free to add your $0.02.

Meme for the day: everyone should have a meme for the day.

Thursday, November 29, 2007

Wednesday, November 28, 2007

You can't do that on the drums!

A close friend, Michael Petiford, has joined the blogging community.

Like percussion? Have a soft spot for progressive rock?

Here's a link:

Michael's Prog Drum Blog

He's a musician, so he has the obligatory myspace page.

If you like rounds, canons, and whatnot, check out his YouTube page. He has two interesting videos in which he plays a round and a canon on the drum set.

Tuesday, November 27, 2007

Perl 6 and Parrot Continue to Make Progress

chromatic has posted a Perl 6 on Parrot Roadmap Update.

It's nice to see the Perl and Parrot activity continuing to move forward.

Most people have probably written Perl off as stale and past its prime. I must confess that my perlification has dwindled to mere personal use.

I have no interest in language wars, but I predict we'll one day wake up to find Perl once again at the top of the interest stack.

It's not dead, it's sleeping! (and I mean that in a good way)

Monday, November 26, 2007

Five Clouds? I think not...

In One Cloud, Two Clouds, Four Clouds, More?, Bob Warfield chimes in on the topic of horizontal and vertical markets for utility compute clouds.

He references an article from Om Malik in which Om makes the case for a small number of horizontally aligned compute clouds. He also references an article from Nick Carr. Nick postulates that there is still significant value to be found in vertically aligned clouds.

Each of these articles is an interesting read, well worth the time.

I agree with Bob's assessment of the near-term future for cloud computing, namely that vertical markets are still relevant as we see utility computing become an intriguing option for many situations.

There is still significant value in the ability for companies to provide compute clouds tuned to specific industries. Different regulatory landscapes, risk profiles, and preferred architectures are enough to provide differentiation across providers. Current horizontally aligned utility compute environments are not sufficiently evolved to provide simple options for the wide variety of requirements. In addition, the vertically aligned environments are predisposed to understanding the requirements and issues particular to specific industries.

Perhaps we'll see a shift once the horizontal players start to see patterns in the solutions implemented by their vertical customers, and start offering these patterns as value-added services to their vertical customers.

Sunday, November 25, 2007

How much of an OS distro is necessary for a Pile of Lamps

The recent conversation on Pile of Lamps rekindled an interest from a previous life - distro engineering.

My current focus has been to select an initial set of workbench tools.

Here's my version 0.5:

  • VirtualBox - I could have just as easily chosen VMware, but this is a home project, so economy reigns. I do want to work out the process with both products, so I'll probably take a look at EasyVMX in the not too distant future. Early results confirm my original suspicions - VMware is definitely the king of the hill, but VirtualBox does nicely for now.
  • T2 - This is a recent discovery. It's a fork of Rock Linux. They provide a nice system development environment well suited for building distributions.
I can hear it now.
"But we already have gabillions of distros -don't even think about building another one!"

Like blogs, there can never be too many distros. :-)

Call it my take on Just Enough OS.

Much of what is contained in most distros is excess baggage, catering to an audience wanting all manner of doodads. Granted, there are minimalist distros, stripped down to a bare bones environment. My primary issue with these is that they tend to be focused on squeezing as much functionality into as small a space as possible. The problem space for Pile of Lamps appears to be different.

For Pile of Lamps, or JeOS, the key design goal should be to remove as much complexity as possible. The dramatic increase in the apparent number of running machines compounds the problem of system management. Perhaps an appropriate solution is to strip the base installation to a bare minimum. Here's a quick list of the more obvious benefits.
  • Less to upgrade
  • Less to configure
  • Smaller security risk footprint
  • Faster to transport over the wire
In its most extreme form, the kernel's call to init could reference the end application, but there are several piddly details that make the use of init (or equiv) worth serious consideration. At any rate, these types of design trade-offs are at the heart of my little experiment.
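
To make the extreme form concrete, here's a sketch of both ends of the spectrum. The application name (lampd) is made up for illustration; the inittab syntax is BusyBox-style:

```
# Extreme form: point the kernel straight at the application
# via the boot loader's kernel command line (no init at all).
kernel /boot/bzImage root=/dev/hda1 init=/usr/sbin/lampd

# The middle road: BusyBox init with a minimal /etc/inittab.
::sysinit:/etc/init.d/rcS
::respawn:/usr/sbin/lampd
::ctrlaltdel:/sbin/reboot
```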

Let me know what you think.

Saturday, November 24, 2007

The Blue Monster does not appear to be going home

I just finished reading Nick Malik's Focusing on Customer 2.0. I think I've been lax in reading the material coming out of the Microsoft EA folks.

Nick's article is just shy of being a manifesto for the next generation of IT. He conjures up the compelling need to change in the modern landscape of users unwilling to tolerate the clumsy environments tolerated in the past. The IT community is once again facing an assault at the walls that protect the high priesthood.

The article references a blog posting from Gabriel Morgan enumerating Gabriel's view of Customer 2.0. Gabriel's post provides some interesting insight into how Microsoft EA is trying to frame the next generation of Microsoft, Microsoft IT (and by extension portions of corporate IT).

As I read Gabriel's article, I was struck by a gradual shift from the environmental trends, to the characteristics of Customer 2.0, and finally to the characteristics of a Software+Services business model. The final list appears to be written for Marketing types. This is not an indictment, merely an observation. I hope Gabriel's future posts on this topic speak more directly to EAs as we collectively work out how to create architectures capable of handling this next wave of change.

Regardless of your religious affiliation, the Blue Monster does not appear to be going home.

Friday, November 23, 2007

Network Learning - A Long Pause for Good Information

I recently reread Stephen Downes' article How the Net Works, in which he articulates an excellent summary of the mechanics of network learning.

I'm particularly interested in the conditions necessary for avoiding informational cascades, or groupthink. Many of the truly interesting problems in IT cannot be solved by individuals, so network learning seems promising. However, network-derived solutions carry their own set of risks.

Stephen lists four conditions for avoiding informational cascades.
  • Diversity - Did the process involve the widest possible spectrum of points of view? Did people who interpret the matter one way, and from one set of background assumptions, interact with people who approach the matter from a different perspective?
  • Autonomy - Were the individual knowers contributing to the interaction of their own accord, according to their own knowledge, values and decisions, or were they acting at the behest of some external agency seeking to magnify a certain point of view through quantity rather than reason and reflection?
  • Openness - Is there a mechanism that allows a given perspective to be entered into the system, to be heard and interacted with by others?
  • Connectivity - Is the knowledge being produced the product of an interaction between the members, or is it a (mere) aggregation of the members' perspectives? A different type of knowledge is produced one way as opposed to the other. Just as the human mind does not determine what is seen in front of it by merely counting pixels, nor either does a process intended to create public knowledge.
Few networks can boast all of these characteristics. Indeed, as I ponder some of the networks around me, it seems few can boast more than one or two. Perhaps I'm being too cynical, or perhaps it's a sign I need better networks. :-)

On the other hand, the list provides a good diagnostic for assessing information gleaned from a network. Viewed as degrees of freedom, it's easy to see how absence of a particular characteristic might affect the output.

Also, I've rearranged the list to provide a memory aid.
  • Connectivity
  • Openness
  • Diversity
  • Autonomy
This is primarily for my benefit, but others might find it useful.

Thursday, November 22, 2007



As Thanksgiving winds down to an end, I'm pondering the many things to which I give thanks.

I'll spare readers the long list, but there is one thanks worth mentioning on my blog:

Thank you!

Tuesday, November 20, 2007


In Enterprise 2.0 May be Fine for the Business, But what about the IT Department, Andrew McAfee writes on the "continued lack of enthusiasm" for E2.0 tools.

Meanwhile, Luke Kanies writes about USENIX 1.0, in which he laments the notable lack of Technorati tags referencing LISA 2007.

Are these related?

In my own experience, I've found many, if not most, members of enterprise IT organizations blissfully ignorant of *2.0 technologies. This might be slightly overstated, but the general level of understanding of the value of these technologies easily lags by several years. When queried regarding *2.0, a common response is a shrug of the shoulders. Many read blogs, but few understand the dynamics of the read-write web beyond the basics of forum posting. At best, many view it as a fad.1 At worst, many are tired and beleaguered, dreading yet another salvo of technologies designed to make their life miserable.

For my part, I've recently changed my approach to *2.0 in the workplace. Instead of evangelizing and cajoling, I've simply started to mention the tools in a matter-of-fact way. Of course I blog. Of course I navigate a cloud of social networks across the Internet. I use them all the time. You ready for a coffee break?

1. More victims of MOA poisoning.

Monday, November 19, 2007

Change - ouch! Change - ouch!

Tom Haskins hit pay dirt again with Fallout from a system. Tom's article focuses on two primary views of change.

The first view presents change as something we do. We drive change. We focus on elements that "need" change. Tom reminds us that this view is fraught with danger. People resist change. Resistance creates conflict. Conflict creates heat loss.

The second view presents change as something that happens. It should be a by-product of our machinations. Let others identify appropriate changes to accommodate the end goal.

As Enterprise Architects, perhaps our primary goal should be to focus on articulating structures that prompt change, rather than playing into the hype of creating change. Otherwise, we dilute our ability to foster a spirit of reflective practice.

In my own case, I readily admit to falling prey to the trap of change as a direct focus of effort. It's easy to see the results of the habit. My goal is to avoid even using the word for the next week. Wish me luck.

/me goes hunting for a "rubber band of behavior modification"

Sunday, November 18, 2007

The Architecture Anti-Cabal

James McGovern replied to my questioning whether a social network of Enterprise Architects will be too insular.

James believes the opposite will happen. With the recent conversation regarding whether there is too much talk about EA Process, he could very well be right.

Here's to hoping a Weyr of Enterprise Architects can sway vendor and analyst conversations away from meta-processes and MOA, towards substantive improvements in IT and business.

Guerilla Architecture Dictionary Entry: legacy - n. the technology from last year's magazines.

Saturday, November 17, 2007

Reflective Practice - An Enterprise Architecture Practice

I've recently been exploring some of the content from MIT OpenCourseWare.

I'm spending extra time watching the video files for Reflective Practice: An Approach for Expanding Your Learning Frontiers (11.965). My initial assumption was that it was material for improving one's own reflective learning skills, but it also provides useful information for facilitating reflective learning in others.

Other Enterprise Architects might find the information helpful, particularly those interested in breaking down the walls of the ivory tower. The class provides some helpful insights and methodologies to help us interact with our constituencies. In short, people learn better when reflecting on the results of their own mistakes, not from the mistakes of others. This is deep anti-ivory-tower mojo.

I originally intended to include some examples to illustrate how the material translates to EA. Out of town visitors, however, are taking priority. Instead, I challenge EA practitioners to watch the first two sessions with an eye to analogs in their daily work. You might just find yourself making the time to watch the other sessions.

Friday, November 16, 2007

Unleash Your Lectures

Bob Warfield posted an interesting article entitled Universities Should Podcast Every Class.

In the article, Bob suggests the possibility of installing video equipment in every lecture hall, tying into class schedules, and posting the content for access by students.

For my tastes, I wish we had more efforts along the lines of the work being done by Jeff Curto.

Jeff's History of Photography podcast is an excellent example of what is possible with a very modest amount of technology. I highly recommend his lectures to anyone with a passion for photography. Those interested might also want to read Globalizing Education One Podcast at a Time, in which Jeff outlines how he creates the podcasts.

We are most definitely beyond the point where technology is the primary hindrance.

Thursday, November 15, 2007

Puget Sound Information Challenge

Mark Masterson posted an article about the Puget Sound Information Challenge.

The challenge has a very audacious goal.
... to identify and share the best information resources, tools, ideas, and contacts in their arsenal to inform the protection of the Puget Sound. This is the challenge! The catch is that it must be done in the next 48 hours.
Quit reading this article and go there now...

Wednesday, November 14, 2007

A Name For High-Tech Grief

Donald Knuth wants A Name for High-Tech Grief.

On his news page, Donald poses the following question.
But what do we call the combination of helplessness and agony that affects us when our computers or computer-based appliances do inexplicable things, for which there's no apparent workaround?
He provides an initial list of candidates gathered from friends.
  • cyber despair (David Eisenbud, Talin)
  • technitis (Chuck McManis)
  • compu-terror (Steve Diamond)
  • cyber burned or cyburned (Betsy Zeller, Dave Marvit)
  • digital dread (Aza Raskin)
  • techno angst (Jono DiCarlo)
  • irritable bit syndrome (Charles Merriam)
Donald suggests people test the list in real world situations. Perhaps a winner will emerge.

Does anyone have suggestions for additions to the list?

Here are a few off the top of my head:
  • Control-Alt-Hate
  • bit rage
  • computermyalgia

On the other hand, this might be a non-issue. Computers are a fad.

Wooden cases - because you need more proof that trees are an important concept in programming

From SCI FI Tech,

Hand-carved wood PC

I wonder if it's available in rack-mount.

Tuesday, November 13, 2007

Perhaps not sufficiently aloof

Techbrew posted an article on Using Feeds to Discover Human Readability.

This prompted me to run the atom feed for the Aloof Architecture blog through the Juicystudio Readability Test.

Here are the results:
  • Gunning Fog Index - 11.85
  • Flesch Reading Ease - 52.80
  • Flesch-Kincaid Grade - 7.84
I might refresh my understanding of the metrics, if for no other reason than to understand the difference between the Fog index and the Flesch-Kincaid numbers.
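For reference, all three metrics are simple functions of word, sentence, and syllable counts. Here's a minimal sketch using the standard published formulas (the syllable and complex-word counts are taken as inputs here rather than computed from text):

```python
def gunning_fog(words, sentences, complex_words):
    # Complex words = words of three or more syllables.
    # Result approximates years of formal schooling required.
    return 0.4 * (words / sentences + 100 * complex_words / words)

def flesch_reading_ease(words, sentences, syllables):
    # Higher is easier; roughly 60-70 is "plain English".
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Same inputs as reading ease, mapped onto a US grade level.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

The difference between the two grade-level numbers is visible in the inputs: the Fog index keys on the ratio of complex words, while Flesch-Kincaid keys on syllables per word, so the two can diverge on text with many two-syllable words.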

Frivolous Fog food: pseudopseudohypoparathyroidism, honorificabilitudinitatibus, ethylenediaminetetraacetic acid, sesquipedalian

Monday, November 12, 2007

A Weyr of Enterprise Architects

James McGovern's article The One Hundred Enterprise Architects Meme got me thinking on the topic of collective nouns for Enterprise Architects.

A mild stab at Google uncovered
  • A mystery - presumes guildsmen or tradesmen
  • A glass house - yaya, keep moving
  • A jealousy - wrong kind

Here's some low-hanging fruit off the top of my head.
  • A babel
  • A governance
  • A 3-ring binder
Too obvious/cliché... keep moving...

Some might suggest
  • A superfluity
If you follow the EA blogosphere, how about
Or perhaps

Beyond the Dunbar Number

Stephen Downes delivered another article full of wisdom in The Personal Network Effect.

His ideas on improving the design of social networks are particularly interesting. The basic premise is that it's possible to push the point of maximal value in a social network beyond the Dunbar number by increasing the diversity of the network. Highly meshed social networks tend to result in repeat messages. At a certain point, repeat messages lose all value. Diversity in our networks tends to reduce the likelihood of these repeat messages.

This caught my attention after reading The One Hundred Enterprise Architects Meme from James McGovern.

I'm curious if there is sufficient diversity in an aggregation1 of Enterprise Architects to avoid uniformity.

1. Hmmm... collective nouns and Enterprise Architects - expect a separate posting on this topic.

Sunday, November 11, 2007

Informal or Personal Learning?

Tim Hand considers the topic of informal learning vs. personal learning in Re-form(al) learning. Tim questions the use of the term 'personal learning', as it seems to be double speak in the context of learning. He suggests that 'informal learning' might be more useful.

I've been entertaining similar thoughts, but have come to a different conclusion.

I'm not particularly pleased with either term. Both seem to carry the implication of an unstructured or unfocused learning effort.

I've been wondering whether 'self-motivated learning' is a more appropriate term. I'm not completely sold, but it seems to capture the dynamic without leading the assumptive mind astray.

I'm interested in opinions others might have on the topic.

Saturday, November 10, 2007

We're down and loving it!

Niall Sclater writes on a potential Downside of the small pieces model, in which he points to an outage at slideshare.

This particular statement caught my eye.
Of course institutional sites go down too - but it’s our business to keep them working and at least if services are hosted in-house we can pull out all the stops to ensure they’re fully functional.

This presupposes that those external sites do not have the equivalent desire to keep their services operational.

The Tower is Riddled with Networks

As part of conversation with Tom Haskins and Steve Roesler, Harold Jarche asks What business are you in?

The conversation starts with Steve Roesler describing a life situation in which his self-employment has probably provided more options than would otherwise be available to corporate employees. In the article, he also related the gist of a conversation with an HR executive. A phrase from that conversation, "This is a business", has sparked an interesting conversation thread with Tom and Harold.

Tom enumerates several excuses offered by business for why companies wall themselves off from networks. At the heart of the concerns is a fear of losing control over their own efforts at perception management.

I particularly like one of Tom's points.
When people say "this is a business" I hear "this is not a viable network".
Harold's question asks us to look at our businesses. Are we in networks or silos?
I’ve noticed that even many so-called “new economy” companies are still based on the command & control models of the industrial age. They’re like dinosaurs wearing mammals’ clothing but they won’t be able to keep warm during the next ice age.
We are indeed creatures of habit.

For what it's worth, we also have so-called "old economy" companies with elaborate informal networks. They are, in fact, riddled with networks. We have good-old boy networks, special interest groups, rumor mills, and leaky channels to outside networks. Are they in fact mammals in disguise? Probably not, but it paints an intriguing picture.

As a change agent, my primary medium of choice is the informal internal networks. This is where conversations take place. This is where pre-emptive consensus is reached prior to official sign-off. This is where the landmines are pointed out.

Friday, November 09, 2007

You got VLE in my PLE

Martin Weller posted an intentionally provocative article entitled The VLE/LMS is dead.

He walks through the concept of using collections of loosely coupled 3rd party applications as an alternative to the centralized learning applications.

He is careful to point out that he's not describing a PLE. The educator is still selecting the tools.

It does seem, however, that what he describes has a strong relationship with PLEs. In fact, they are simply the server side elements many of us already use in our own PLEs.

Thursday, November 08, 2007

Lessig is Moreig

Here's a link to the Larry Lessig presentation at TED.

How creativity is being strangled by the law

Kudos to Larry for providing the voice of balance regarding the current state of copyright law.

I particularly like the way he articulates the result of enforcing the antiquated model currently in use in mainstream media. We simply push the inevitable underground.

Is there a lesson here for Enterprise Architects?

I think I'll go rummage through my old mix tapes now...

Wednesday, November 07, 2007

Gadgets - because I need more personal organizer pr0n

Multi-tool in a credit-card form factor.

BCB Mini Work Tool

Tuesday, November 06, 2007

Jive Kudos

Silicon Florist posted Jive's new space should include a bigger trophy case.

It's nice to see Oregonians doing well, particularly the startups.

Monday, November 05, 2007

Democratizing Architecture Creation

Tom Haskins' poses an intriguing possibility in his Democratizing knowledge creation.

Is it possible that self-directed learners could become the norm, rather than the exception?

The article is well worth a read. In particular, this paragraph got my attention.

We previously relied on experts to fix our ignorance, superstitious beliefs and flawed models. Now it appears that the experts have the wrong idea. Expertise cannot fix our misconceptions because it operates with a flawed premise. We cannot be fixed without getting that wrong idea ourselves. We become dependent on expertise if we fall for the common misconception of learning. We create systems where learning is a noun, experts exercise their authority over us and knowledge creation is aristocratic.
In this case, the word 'expert' is used in the context of academic credentials. That flawed premise is not, however, limited to the halls of academia.

The archetypal ivory towers of Enterprise Architecture and other governance functions are particularly prone to this same thinking. It is no accident that most Enterprises struggle with understanding the value of EA. People naturally recoil from the authority of mandated truth and from having the error of their ways 'fixed'. I have no doubt EA people are similarly apt to recoil. We are talking about a fundamental shift in the value provided by experts.

The true value of expertise comes when it is available for conversation. We refine our own understanding when we expose our knowledge to others around us, so long as we allow the interaction to occur in both directions. As Tom mentions, we reflect on the differences as we engage with our surroundings.

I've noticed an interesting phenomenon in the architectural conversations of my day-to-day work. As the conversations evolve, key architectural principles and constraints (stock in trade) tend to be co-opted by others around me. I hear those principles and constraints echoed in nearby conversations. The organization internalizes the knowledge and is more likely to provide productive feedback when issues arise.

Sunday, November 04, 2007

We're all just making it up as we go along

Harold Jarche posted School, Work & Improv, in which he mentions how his son is excited about an improvisation class.

Harold notes how the non-core school subjects end up being the most important in the long run. He lightly ponders a world where the education system consists of the electives and non-core topics. I will not opine on the education system, but I think most of the disciplines encountered in modern enterprises are sorely compromised by their failure to acknowledge the value of improvisation.

Any discipline not actively embracing the value of improvisation is, in my opinion, on the road to decay. We are deceived, whether by ourselves or by others, if we believe that all things can be planned or written down. Not all problems are solvable. Sometimes we need to fudge it. Sometimes we need to fake it. As long as we acknowledge it, it all has a tendency to work out in the end.

I've always been intrigued by the skills acquired from an intentional study of improvisation. I count what I learned in music improvisation among my most valued treasures.

Saturday, November 03, 2007

From Domain Specific Languages to Platforms

Phil Windley posted an article on his use of Domain Specific Languages for a recent endeavor. He mentions a common reaction and goes on to describe some of the benefits he gains from using a DSL.

I find it somewhat amusing that detractors often view DSLs as unnecessary or even foolish. At the same time, we see a plethora of DSLs flowing out of standards bodies and vendors.

No matter. The classical GPL-vs.-DSL debate continues.

While I look around at the arguments for and against DSLs, I don't see much conversation regarding what seems to me to be a wonderful capability provided by DSLs. Mapping a problem space into a DSL presents the opportunity to create a platform, rather than a mere application.

Whether open to outside contributors, or only to internal ones, the platform model provides a useful vehicle for creating an 'application' designed for enhancement, particularly if we desire a wide range of enhancements.

An explicit focus on the language-oriented discipline associated with DSLs provides a useful way to introduce critical constraints, while still leaving the door open for future changes to the constraints.

To be sure, most level 3 platforms are in the business of providing extensible domain-specific solutions of one form or another. It's noteworthy, however, that most level 3 platforms address rather narrow sets of domains. I wonder how long it will be before we see platforms providing the ability to deliver a broad spectrum of domain languages. Perhaps we will see platforms for DSL platforms.

Meanwhile, Bob Warfield mentions DSL in his Serendipity is the Key to Code Reuse. I'm not sure if he's talking about the same thing, but we're definitely using the same language.

Friday, November 02, 2007

NeoVictorian Architecture

In Good Tech Writing, Tim Bray references a wonderful series of articles from Mark Bernstein - NeoVictorian Computing.

Mark explores the topic of why we in computing are unhappy and what we might do to rectify the situation. The series is well worth the read.

While reading the articles, my mind connected some of Mark's observations with Don Norman's needs-satisfaction curve. Perhaps we are shifting our focus to our own user experience as we cross a sufficiency point with technology.

Perhaps we would also be well served to remember that an enterprise is also a technology.

Thursday, November 01, 2007

Another Day in the life of an Enterprise Architect

Mike Walker points us to A Day in the life of an Enterprise Architect.

His underlying MSDN document provides a decent summary. I must say, however, I was expecting something else upon reading the article title.

I prefer some of the quotes in the graphic in the article on his personal blog.

  • Can we support this?
  • What will that VP think of these decisions?
  • We standardized on .Net, I'm proposing something else...
  • Is this service oriented?
  • How does this fit into my portfolio? 10 years down the road?
These seem to be a more accurate reflection of the title.

Here are some that have passed my ears over the years:
  • Here's the 500-page spec - let me know if there are any show-stoppers by COB tomorrow.
  • How long will it take to get us to CMM level 5?
  • Can we avoid cabling if we install wifi everywhere?
  • Why does the printer keep jamming?
  • Can we do it without using swing space?
  • But I can buy disks at Fry's for a lot less than that!
  • Why is an architect worried about why we removed the coffee machine in engineering?
  • Do architects code?

(because printers do that..)

Wednesday, October 31, 2007

Tuesday, October 30, 2007

I See Spots

As Mark Masterson mentions in Blinded by the Lamps.., Rails on AIX using DB2 would still qualify as LAMP in many circles. The real value of LAMP is not the specific packages; it's the simple pattern it provides for building web applications.

For the record, I am still not convinced Pile of Lamps is a viable platform for massively scalable web applications. I do, however, claim that the topic is probably worth investigating, as there is a very real chance we'll end up with some variation of the idea.

With that out of the way, let's review LAMP - it might trigger an idea somewhere - you never know.

Here's the original form:

  • Linux
  • Apache
  • MySQL
  • P{HP,erl,ython}
This has been a stunningly effective solution stack for a variety of web applications. Obviously there are variations - bazillions. Here's a simple conversion to avoid references to specific products:
  • Scripting Language
  • HTTP
  • Operating System
  • Database
I've taken the liberty of rearranging the list for my own amusement.

Note that this version encompasses nearly all of the variations listed in the Wikipedia article on LAMP. There are a few pathological exceptions, but no matter - this is the heart of LAMP.

Note that this self-contained web application stack is reasonably well partitioned for both single-machine and multi-tier environments. Mild amounts of rework might be required, but careful thought up front can avoid most obvious gaffes. Also, each of the elements can be treated as bolt-on components within the LAMP architecture. It's completely reasonable to imagine a LAMP stack with no storage - as a proxy, for example.

Also note there is nothing inherently unRESTful about LAMP. In fact, it can be (is) used to implement all of the core components in REST (origin server, gateway, proxy, and user agent) with minimal grief.

If we constrain the Pile of Lamps style to require REST, the existing LAMP stack provides all of the building blocks, minus any management pieces we might want to add for sanity as it scales out. At the same time, we could also view any additional pieces as simply being features implemented as part of the application we're coding.

Since I've committed other non-Web indiscretions in a previous life, I decided to take another conversion pass at the list. Here's a slightly more abstracted form:
  • Storage
  • Programming Language
  • Operating System
  • Transfer Protocol
This loosens the constraint that the stack provide an HTTP service. Once in this form, it's possible to model all manner of existing Internet applications.1

1. By definition, this diverges from RESTfulness, but the Aloof Schipperke claims bonus points for an acronym with a canine theme.
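To make the abstraction concrete, here's a hypothetical sketch (the role names and bindings are my own invention, not anything from the Wikipedia article) treating a "stack" as a binding of abstract roles to specific technologies, so that classic LAMP and a non-HTTP variant become two instances of the same shape:

```python
# Hypothetical model: a stack is just a binding of abstract roles
# to concrete technology choices.
ROLES = ("storage", "language", "os", "transfer_protocol")

def make_stack(**bindings):
    # Every abstract role must be bound to something concrete.
    missing = set(ROLES) - set(bindings)
    if missing:
        raise ValueError(f"unbound roles: {missing}")
    return dict(bindings)

# Classic LAMP is one instance...
lamp = make_stack(storage="MySQL", language="PHP",
                  os="Linux", transfer_protocol="HTTP")

# ...and loosening the HTTP constraint admits other Internet
# applications, e.g. a mail-handling stack.
mail_stack = make_stack(storage="Maildir", language="Perl",
                        os="Linux", transfer_protocol="SMTP")
```

The point of the exercise is the last binding: once the transfer protocol is a free variable, the same four-role pattern models far more than web applications.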

Monday, October 29, 2007

Another Gang of Four

No doubt there's a big crowd of Double Bass fanatics out there in Architectureland.

I discovered a wonderful link at Jason's Heath's Double Bass Blog - The Bass Gang.

Here's a snippet of Birdland.

(A vestigial passion from a previous life - deal with it)

Sunday, October 28, 2007

Can IT 1.0 Implement Enterprise 2.0?

In The state of Enterprise 2.0, Dion Hinchcliffe provides his take on a status report for Enterprise 2.0 as of Fall 2007.

His status report comes in the form of lessons gleaned thus far.

  1. Enterprise 2.0 is going to happen in your organization with you or without you.
  2. Effective Enterprise 2.0 seems to involve more than just blogs and wikis.
  3. Enterprise 2.0 is more a state of mind than a product you can purchase.
  4. Most businesses still need to educate their workers on the techniques and best practices of Enterprise 2.0 and social media.
  5. The benefits of Enterprise 2.0 can be dramatic, but only builds steadily over time.
  6. Enterprise 2.0 doesn’t seem to put older IT systems out of business.
  7. Your organization will begin to change in new ways because of Enterprise 2.0. Be ready.
So far, so good. I'm a little surprised by phrases like "seems to" in #2 and #6, but these are nits.

While the lessons seem to focus on the human component, much of Dion's article is devoted to the state of the tools. This caught my attention, since I've tended to focus on the people and process aspects when considering the use of 2.0 tools in the enterprise.

So I reviewed Andrew McAfee's definition of Enterprise 2.0.
Enterprise 2.0 is the use of emergent social software platforms within companies, or between companies and their partners or customers.
And for the discerning reader,
Platforms are digital environments in which contributions and interactions are globally visible and persistent over time.
I evidently need to pay more attention to definitions.
Note to the Schipperke: Enterprise 2.0 is specifically focused on the tools.

This is truly unfortunate.

Why, you ask?

One only needs to look to SOA for the answer.

SOA started out as an architectural style, but it has morphed into something completely different. There are several underlying forces, but the primary reason appears to be an annexation of the term by vendors. This has resulted in a dilution of many of the potential benefits of SOA, as vendor lock-in and SOA-in-a-box make their way into the mind-share of IT.

Enterprise 2.0, from its starting definition, appears to be predestined for the same fate.

Is it unreasonable to believe that Andrew's refinement of Enterprise 2.0 didn't go far enough?

Are we setting CIOs up by intimating that deploying Enterprise 2.0 tools will create emergent collaboration?

Saturday, October 27, 2007

Conversations Driving Actions

The conversation on a Pile of Lamps continues...

Bob Warfield chimed in with his view on Pile of Lamps, which includes a mapping of the style into Roy Fielding's assessment framework. Perhaps most importantly, he reminds us that Roy Fielding's contribution is not limited to REST - Roy's dissertation provides a useful tool for thinking about web architectures in general.

Bob's remark about whether a Pile of Lamps was implemented in a grid or cluster struck my fancy. A recent, yet unrelated, conversation on conversations has also been lingering around in my brain. Like two molecules floating around, the two ideas combined.

Techniques for managing large sets of machines tend to be either highly centralized or highly decentralized. Centralized solutions tend to come from system administration circles as ways to cope with large quantities of machines. Decentralized solutions tend to come from the parallel computing space where algorithms are designed to take advantage of large quantities of machines.

Neither approach tends to provide much coupling between management actions and application conditions. Neither approach seems well adapted for any form of semi-intelligent dynamic configuration of multi-layer web applications. Neither of them seems well suited for non-trivial quantities of loosely coupled LAMP stacks.

I've recently been contemplating a constellated approach, where various subsets of machines engage in conversation amongst themselves.

As an example, a set of application servers might engage in a conversation concerning their general status. A failover heartbeat is a simple form, but the idea encompasses larger classes of interchange and generalizes the model. If the servers start to become overloaded, the conversation might converge on a decision to spawn a conversation between services capable of contributing to a solution. That conversation could, for instance, result in a decision to add another server to the pool.

Once an additional server is made available, the application pool would be responsible for initiating the provisioning of the new server and for integrating it into the pool. The information necessary for proper provisioning and integration is contained within the application pool rather than in some external source.

Basically, it's a pull rather than push model, with localized conversations driving actions.

A fully developed model for inter-machine decisions would most likely become unbearably complex and subject to emergent behaviors, but a basic implementation should be capable of providing an adequate framework for many situations.

I haven't completely thought through the idea, but I thought I'd toss it out for comments.

Friday, October 26, 2007

I won't be wrong if you don't talk

A recent article by Tom Haskins, Outgrowing reflexive thinking, explores a barrier to reflective thinking.

This article caught my attention while considering mechanisms used as substitutes for conversations.

Perhaps a penchant for one-way communication is merely one of the defense mechanisms of our reflexive thinking.

Thursday, October 25, 2007

Computer Hardware Pr0n

Guy Kawasaki tempts us with pictures from Core Memory: A Visual Survey of Vintage Computers.

Oh - gotta go - here comes the wife.

Wednesday, October 24, 2007

Wood - because trees are an important concept in programming

(note to self: consider adding "logging" label to this post)

Tuesday, October 23, 2007

ABC - Architecture By Conversation

James McGovern has written a wonderfully thoughtful article entitled Closing Thoughts on the Tulsa Tech Fest.

Among other things, he mentions conversations uncorrupted by canines and equines. He reminds us that discussions of technology should not be distant memories. He reminds us of our original inspiration.

As I read his article, I pondered the topic of conversations.

My first thought was the now commonly uttered phrase

Markets are conversations
To be sure, some might be tempted to convert this into some argument for business 2.0 - a whirlwind of technology for enabling workers to collaborate and share. That's fine, but it's not quite what I had in mind.

As I continued to ponder the topic, I came up with a list which reflects part of the problem.
  • Projects are not conversations
  • Meetings are not conversations
  • Presentations are not conversations
  • Email storms are not conversations
Hmm... Too bad we're talking about the lynchpins of modern business.

Ok, so what do other Enterprise Architects do to directly foster and participate in conversations? I am honestly interested in knowing. I'm not just talking about the CxOs and Veeps - I'm talking about the techie types still in touch with the passions of our youth...

Monday, October 22, 2007


It seems appropriate for an Aloof Schipperke to post a link to the following article.

How to use a blunt instrument to sharpen your saw


Courtesy of Stephen Downes, in Information E/Revolution and a Vision of Students Today:

ervolution - the endorsement of change, or change itself, as experienced by those who are not sure they want it
I don't know about you, but I anticipate several opportunities for using the word in the relatively near future.

Sunday, October 21, 2007

New House Rule: Non-Compliance is now Mandatory

In Circumventing the CIO - What's the Harm?, Andy Blumenthal enumerates some of the potential issues when technology is introduced outside of normal IT channels.

In simplest terms, what Andy describes is a classic conflict between IT and the rest of the company. To complicate matters, the ever increasing consumerization of IT makes it easier to bypass the traditional list of project hurdles imposed by most IT organizations.

I am, however, somewhat dismayed at the list Andy provides. Despite his reminder about striking a balance, the list might lead one to believe that there is no room for risk in business. On the contrary, business is about managing, sometimes leveraging, risk. It is perfectly reasonable to expect that business can tolerate a certain amount of non-compliance to the risk aversion common in IT. In fact, there are circumstances where it must be part of a core business strategy. The key, as always, is to ensure the level of risk aligns with the needs of the enterprise.

If the level of non-compliance is out of alignment with business needs, the problem still reduces to one of conflict, rather than non-compliance. Regardless of whether IT is overly strict or the user is overly cavalier, the conflict is best addressed if IT adopts a posture more in line with basic conflict resolution/management/transformation concepts.

To be fair, Andy works in an industry familiar with bureaucracy and the command-and-control model. At the same time, it seems all IT organizations incur another type of risk if they do not allow a certain amount of free movement around the edges.

(Bonus points for fostering and participating in that free movement around the edges)

Saturday, October 20, 2007

More logs on the fire

Log management - the next hot meme!

Hmm... maybe not...

Well, at any rate, an article from Jon Oltsik caught my eye.

In The invisible log data explosion, Jon reports on ESG research regarding how much log data is being processed by large companies.

This lends weight to Tim Bray's Wide Finder as more than a mere thought experiment. Many large companies are currently processing more than 1TB of log data per month, and 10TB per month is within sight.

As Tim mentions, the most common techniques for processing log data do not cleanly scale into the impending multicpucore system architectures.

Jon reiterates the growing need for a serious look at how log data is processed.

My contention is that soon we will be talking about log management architecture and log management services the same way we discuss SOA and business intelligence today.
Regardless of whether it attains the venderprise status of SOA and BI, I sincerely hope we'll see more interest in fresh approaches to improving the current state of log management.

Beyond the matters of scale discussed by Tim and Jon, there are several other aspects to log management I find troublesome.
  1. The feedback loops provided by log data tend to be glacial.
  2. The actions prompted by log events tend to involve humans. So-called self-healing systems have been created, but many suffer from narrow application or excessive buzzwords.
  3. The current models for log management severely inhibit the amount of data logged, reducing the potential value of log data.
  4. Log management is the second-class citizen of second-class citizens, relegated to an operational tool used to victimize on-call personnel, gather security events, and summarize hard-drive failures. Ok, I'm overstating somewhat, but the point stands.
In any case, are we seeing the end of script processing for log data? Let's hope so.

Logging 2.0, baby! Yeah!!

Friday, October 19, 2007

Pay no attention to the lamps behind the curtain

Bob Warfield has a pair of articles that touch on ideas I've been pondering recently.

Virtualization vs Multitenancy at Workstream: SaaS Quandry?

Lack of Good Platforms is Stunting SaaS and Business Web 2.0

The two articles provide interesting analysis of key issues in SaaS and PaaS.

The connection between these topics and a recent conversation about Lamps (also here) with Mark Masterson seems particularly interesting.

For example, it is possible to implement the logical equivalent of a multitenant environment using a pile of lamps. It would require a level of design not normally found in cluster and virtualization environments, but it's possible. At the very least, it could provide useful fodder for other people interested in the topic.

Whether it makes sense is another topic, but what the hey - it's all part of a conversation...

Wednesday, October 17, 2007

The Pendulum Flows

In Lamps, lighting our way..., Mark Masterson mentions the age old topic of centralization vs. decentralization. The pendulum swings.

But wait...

Is the vacillation as real as it appears?

Is it more appropriate to view it as a one way flow, with technology generally moving from the edge to the center?

Here's the idea.

There is a relatively steady amount of centralized IT technology. The amount is usually governed by some high-level metric such as percentage of revenue. This amount varies by company, but tends to act as a top-level volume knob for centrally captured budget dollars spent on 'IT'.

The decentralized introduction of technology at the edge is governed by two primary factors - a threshold dollar amount and the degree to which a centralized resource can provide results to the edge.

Since, by definition, a central IT can only provide a finite amount of capability to the edge, there is always an amount of unfulfilled demand.

Once the cost of a new technology drops below a threshold dollar amount, the edge is now faced with the possibility of acquiring the technology. This is particularly true if the new technology can bring a measurable value to the edge. Profits tend to trump politics.

A sufficiently large level of adoption of a particular technology at the edge will gradually prompt a centralization of that technology. Meanwhile, back at the top-level percentage-of-revenue volume knob, older technologies are pushed out to make room for the new.

One might argue that there is still a pendular swing, with the power of technology shifting between the edge and the core. It seems reasonable to wonder if this is merely an artifact of our own interests in particular classes of technology, or the result of natural changes in the rate with which edge technologies migrate to the core. Would we still view it as pendular if we accounted for all technologies subject to the phenomena?

What do you think?

Tuesday, October 16, 2007

Write a module - then teach them how to use it

Stephen Downes posted a blurb on eLML.

This looks interesting. It's an XML framework for creating e-learning content in Eclipse. It supports an interesting array of output formats (e.g. xhtml, PDF, ODF, IMS, and SCORM).

It's intriguing to imagine a certain class of subject matter experts using it to provide e-learning content to unleash the teacher in us all.

Is it difficult to imagine documentation as educational material, perhaps turning it into SCORM packages for an LMS?

I'm not sure I like the tight integration with Eclipse, but some readers might consider that a feature.

In case you're interested, the developers have plans for a WYSIWYG web interface in late 2007 [1].

1. /me checks watch

Monday, October 15, 2007

Musical Instruments - because not all gadgets require electricity

In Hobbies that do not involve gadgets, Gadgeteer Julie poses the topic of hobbies with no gadgetry involved.

She mentions her acoustic guitar and dulcimer, and asks readers to post their own non-gadget related hobbies.


I hate to break it to her, but I'm thinking musical instruments are probably one of the original gadgets.

(Show of hands - how many starving techies are reformed starving musicians?)

Sunday, October 14, 2007

A Pile of Lamps

This posting was triggered by a conversation with Mark Masterson (available here), which sprang from topics mentioned in Tim Bray's Wide Finder blog swarm.

As we watch a fundamental shift in system architecture, migrating towards large quantities of processors/cores on single machines, many people are discussing the corresponding need for shifts in programming languages, programmer skills, and techniques.

Rather than assuming computer languages and coding techniques will evolve to meet this change, it seems prudent to consider the possibility of an environment containing thousands of little SOA web services and various other LAMP-based applications.

I'm less interested in whether this is the right thing, as much as I'm interested in the possibility of anticipating it and preemptively shaping it to avoid some of the potential pitfalls.

Or as Mark puts it,

I think the great challenge for us is how to find a way to enable and allow that paradigm, as a welcome and valid part of our EA.
Is the answer a combination of LAMP, embedded computing, cluster management, and virtualization?

For its part, the LAMP stack brings an emphasis on
  • HTTP-based services
  • extensibility
  • multiple storage models (DB and Filesystem)
  • System-level containment
  • Commodity components
Embedded computing brings an emphasis on
  • Minimalism
  • Asymmetrical host and target environments
  • System-level packaging
  • Black-box
  • Minimize ongoing support
Cluster management brings an emphasis on
  • Uniformity of management
  • Holistic view of many machines
Virtualization provides the key to carving multicpucore machines into multiple bite-sized chunks, each providing a convenient container for a LAMP stack.

Here is a parting thought. While cogitating on the topic, I recalled a posting from Andrew Clifford. In The dismantling of IT, he postulates a simplification of IT Architecture. I mildly questioned the viability in this post, but I'm beginning to wonder if some variation of this approach is capable of providing part of what Andrew is pondering.

Saturday, October 13, 2007

Top 10 Answers to "Do IT Architects Code?"

Tim Bray posted a follow-up to his Wide Finder series - WF IX - More, More, More. I was amused to see a reference to one of my postings.

Alas, no - I did not provide any code. This, of course, prompted an age-old question.

Without further ado, here are the top 10 answers to the question "Do IT Architects code?".

10. Why resort to code when we can torment you with PowerPoint
9. Gimme a sec - the answer is somewhere in one of these 3-ring binders
8. Only when explaining things to software weenies
7. Yes, but it was decommed around December of 99
6. No, but I know several coders
5. Most definitely! Right here in this box, next to the "Unit Test" box
4. Do models count? They're in UML!
3. map(chr hex, unpack '(A2)*', '596573')
2. Only when nobody is looking
1. Hehe... Coding is so pre-web4.2...


Friday, October 12, 2007

Mmm... Apples...

In Newsletters that teach, Matt Linderman points out a good example of how to help users kick ass.

Providing factual information on a software feature is fine, but what if it's combined with a broader tip that helps the user apply the feature to their advantage?

This idea can be extended beyond simple tips and tricks. The idea is simple - provide the basic information and augment it with guidance on potential applications of the information.

Does our documentation provide guidance on when (or when not) to use a feature?

When we are asked to provide information on a topic, do we take the extra step to provide perspective to allow others to reapply the information to other situations?

Does our information help users become better users?

Thursday, October 11, 2007

LucidTouch - because I'll need another gadget in the future

Microsoft Research and Mitsubishi Electric Research Labs have teamed up to create LucidTouch, a prototype device with an interesting twist on touch display.


Wednesday, October 10, 2007

Pumpkin Futures in IT

In Enterprise Architecture and Agile, Ed Gibbs chimes in on the topic of whether EA and Agile are inherently incompatible.

The author of the article he references takes the stance that the two are probably incompatible. Ed differs from this view. He invokes the reminder that any incompatibility is probably due to an ivory tower model for EA.

I partially agree, but would like to contribute another facet to the overall conversation.

First off, I'm near the front of the line when it comes to the topic of EA needing to improve its ability to provide substance and value. By some accounts, some of us will turn into pumpkins around the year 2012.

Agile methods can indeed be a useful addition to the EA tool belt. It should be noted, however, that not all aspects of an architecture are an obvious fit for an incremental or agile model. When they do fit, the increments are sometimes on a longer time scale than those encountered in most agile development efforts.

This brings me to my real point.

There is a tyranny in both the EA ivory tower and the scrum. There is also a tyranny in PMBOK and ITIL. All of them seem highly prone to monoculturalism in the name of standardization and simplification.

We are allergic to diversity. However... Diversity leads to resilience. Resilience leads to survival.

I see no inherent incompatibility between EA and Agile.

Tuesday, October 09, 2007

Feeding the Foragers

Continuing my contemplation of ways to improve the effectiveness of PLEs, I noticed an article from Clark Quinn, Formal, informal, and information foraging.

In addition to providing several interesting links, he reminds us of the role information architecture can play in this age of foraging. If there is a play for PLEs as an application, I suspect it will come in the form of an interface that restructures the data to increase its absorption rate.

Clark also has an interesting article, Filling the informal gap, which asks whether there is an important middle ground between formal and informal learning. Perhaps this is the sweet spot for subject matter experts and thought leaders to contribute learning material.

On a mildly related topic, it looks like Mike Kavis has made interesting progress towards turning Enterprise Architecture into a collaborative endeavor. I hope he provides future updates on his efforts.

Monday, October 08, 2007

My PLE ate your Web2.0

Tom Haskins posted an interesting article regarding Personal Learning Environments, growing changing learning creating: The next killer app?

The PLE is indeed already here. It is all around us.

I doubt most people view it as an explicit learning platform, but perhaps that is its most powerful feature.

My recent thoughts on PLEs have centered around what can be done to improve their effectiveness as a learning platform.

What improvements can educators bring if they leave the testing and tracking behind?

Is learning object metadata a useful addition to "web2.0"?

Is it better to avoid viewing the PLE as a learning platform? Is its value contingent on its current zen-like state?

Sunday, October 07, 2007

Notice: An Update for your Perlang FPGA Cluster is Available

Ok, I've been bit by Tim Bray's Wide Finder meme.

I noticed the conversation swarm as it bubbled up, but didn't pay too much attention. Mark Masterson's article It's Time to Stop Calling Circuits "Hardware" caught my attention, as I have pondered the plasticity of the boundary between hardware and software in a previous life.

So I've been digesting the conversation swarm. It's one heck of an interesting read.

Tim presents a problem case that frames a fundamental shift occurring in modern CPU/system architectures. The shift is moving us away from ever increasing CPU speeds towards ever increasing CPU counts. Certain classes of problems are extremely well suited for the shift to multicpucore architectures. Other problems gain no direct benefit, particularly if they are migrated without change. Tim uses the problem of summarizing log file data as an example of this latter case.
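For the unfamiliar, the task is essentially a single-pass scan of an access log, which gains nothing from extra cores. A minimal sketch in Python (the URI pattern and function name are my own illustrative choices, not Tim's actual benchmark code):

```python
import re
from collections import Counter

# One pass, one CPU -- the shape of most log summarizers in the wild.
# The URI pattern is an illustrative placeholder.
PATTERN = re.compile(r'"GET (/ongoing/When/\S+) ')

def top_hits(log_path, n=10):
    counts = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts.most_common(n)
```

No matter how many cores the machine has, this loop uses exactly one of them.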

Without brainpower focused on this aspect of the problem, the techniques being employed to increase aggregate compute capacity will not provide much benefit for many of the common tasks performed in IT shops.

There are three interesting aspects to Tim's conversation swarm. Two are explicit. The third is implicit.

The first aspect consists of all the solutions for the stated goal - how to leverage the latest trend in processor/system architectures for the seemingly mundane task of processing log data.

For what it's worth, here are my first thoughts on the problem of leveraging multiple CPUs for the task of processing log data. My preference leans towards use of existing technology, most likely to be implemented by the people most likely to feel the pain.

Divide and conquer: (the sysadmin in me)

  • Coerce the logging engine(s) to dump into multiple log files (to multiple disks or disk channels if necessary).
  • Run a pile of processes to process the log files independently.
  • Consolidate the data - either as post processing or incrementally via some form of IPC.
  • The choice of language is immaterial, but history would probably vote for perl or shell goop
This smacks of the type of solutions Tim sees existing in most IT shops. It's blunt. It's probably sufficient to let us move on to the next point of pain. It's almost completely devoid of any interest from a software engineering perspective.
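A rough sketch of that divide-and-conquer recipe, in Python rather than perl or shell goop (the file layout and counting function are illustrative assumptions):

```python
import re
from collections import Counter
from multiprocessing import Pool

# Illustrative placeholder: count requests per URI in one log file.
PATTERN = re.compile(r'"GET (\S+) ')

def summarize(log_path):
    """Process one log file independently (runs in a worker process)."""
    counts = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

def summarize_all(log_paths):
    """Fan out one worker per file, then consolidate as post-processing."""
    with Pool() as pool:
        partials = pool.map(summarize, log_paths)
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total
```

The consolidation here is batch post-processing; the incremental variant would merge counts via IPC as workers emit them.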

Streams and Triggers: (mentioned in the conversation comments)
  • Hook into the log stream(s)
  • Spawn readers for the various data collection functions
  • Send events from the log stream(s) to the readers, processing the data as it's received
This is generally a solved problem using any one of a variety of existing programming languages/tools.
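The stream-and-trigger shape above might look something like this in Python (the dispatcher and its registration API are my own sketch, not any existing library):

```python
import re

class LogDispatcher:
    """Hook into a log stream and fan each event out to spawned readers."""

    def __init__(self):
        self.readers = []

    def spawn(self, reader):
        # 'reader' is any callable taking one log line.
        self.readers.append(reader)

    def feed(self, stream):
        # Send each event to every reader, processing data as it's received.
        for line in stream:
            for reader in self.readers:
                reader(line)

def make_counter(pattern):
    """A sample data-collection function: counts lines matching a regex."""
    regex = re.compile(pattern)
    counts = {"hits": 0}
    def reader(line):
        if regex.search(line):
            counts["hits"] += 1
    return reader, counts

# Wire up two readers and feed them a couple of events.
dispatcher = LogDispatcher()
errors_reader, errors = make_counter(r" 5\d\d ")
gets_reader, gets = make_counter(r'"GET ')
dispatcher.spawn(errors_reader)
dispatcher.spawn(gets_reader)
dispatcher.feed([
    'h - - [t] "GET /x HTTP/1.1" 200 17',
    'h - - [t] "GET /y HTTP/1.1" 500 0',
])
```

In a real deployment, feed() would tail the live log stream (or subscribe to syslog) instead of iterating a list.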

Neither of these two solutions is particularly interesting, but I imagine they are the most likely to be implemented in the wild.

My final offering is more of a meta solution.
  • Formulate a red herring idea
  • Pose it to a bunch of brainy people
  • Watch them chew on it
  • Gain new insight
Oh wait. I'm getting that deja vu feeling. :-)

The second interesting aspect of the conversation swarm is the rumination over the relationship between computer languages and the shift in cpu/system architectures.

One participant (sorry, can't recall the link) offered the suggestion that it's probably easier to improve a language like Erlang than it is to modify the mainstream languages to provide the capabilities inherent in Erlang.

I don't disagree with this point of view, but Tim's point regarding the widespread use of perl/awk/etc points to a fundamental fact in IT shops - the tool must be wickedly effective at getting the job done. Optimal performance is often optional.

So how to effectively use 64-1024 CPU machines?

First off, who says our current technologies are effectively using the existing architectures? Follow things from the hardware up the application stack - it staggers the mind.

The reality is we seldom go back and fix. We come up with clever ways to incrementally capitalize on architectural changes. We reframe existing code in ways that take advantage of changes in architectures. I'm overgeneralizing somewhat, but no matter.

At the risk of sounding like a pessimist, I think we'll end up with thousands of little SOA web services engines. Each one handling a single piece. Each one with its own HTTP stack. Each one using PHP/Perl/Ruby/etc to implement the service functions. Each one sitting on top of a tiny little mysql database. Eeeep! I just scared myself - better drop this line of thought. I'll have nightmares for weeks.

The third interesting aspect of the conversation is how it shows some of the most important characteristics of the modern concept of networks vs. groups. It's decentralized, it's unlikely to be swayed by an alpha geek, it creates a variety of unanticipated results, it's a bit messy, and it provides fertile ground for exploring the topic at some point in the future.

Good stuff!