Wired Science - Dangerous Science
Wednesday, October 31, 2007
Tuesday, October 30, 2007
As Mark Masterson mentions in Blinded by the Lamps.., Rails on AIX using DB2 would still qualify as LAMP in many circles. The real value of LAMP is not the specific packages; it's the simple pattern it provides for building web applications.
For the record, I am still not convinced Pile of Lamps is a viable platform for massively scalable web applications. I do, however, claim that the topic is probably worth investigating, as there is a very real chance we'll end up with some variation of the idea.
With that out of the way, let's review LAMP - it might trigger an idea somewhere - you never know.
Here's the original form:
- Scripting Language
- Operating System
- Web Server
- Database
Note that this version encompasses nearly all of the variations listed in the Wikipedia article on LAMP. There are a few pathological exceptions, but no matter - this is the heart of LAMP.
Note that this self-contained web application stack is reasonably well partitioned for both single-machine and multi-tier environments. Mild amounts of rework might be required, but careful thought up front can avoid most obvious gaffes. Also, each of the elements can be treated as bolt-on components within the LAMP architecture. It's completely reasonable to imagine a LAMP stack with no storage - as a proxy, for example.
Also note there is nothing inherently unRESTful about LAMP. In fact, it can be (is) used to implement all of the core components in REST (origin server, gateway, proxy, and user agent) with minimal grief.
If we constrain the Pile of Lamps style to require REST, the existing LAMP stack provides all of the building blocks, minus any management pieces we might want to add for sanity as it scales out. At the same time, we could also view any additional pieces as simply being features implemented as part of the application we're coding.
Since I've committed other non-Web indiscretions in a previous life, I decided to take another conversion pass at the list. Here's a slightly more abstracted form:
- Programming Language
- Operating System
- Transfer Protocol
1. By definition, this diverges from RESTfulness, but the Aloof Schipperke claims bonus points for an acronym with a canine theme.
Monday, October 29, 2007
No doubt there's a big crowd of Double Bass fanatics out there in Architectureland.
I discovered a wonderful link at Jason Heath's Double Bass Blog - The Bass Gang.
Here's a snippet of Birdland.
(A vestigial passion from a previous life - deal with it)
Sunday, October 28, 2007
In The state of Enterprise 2.0, Dion Hinchcliffe provides his take on a status report for Enterprise 2.0 as of Fall 2007.
His status report comes in the form of lessons gleaned thus far.
- Enterprise 2.0 is going to happen in your organization with you or without you.
- Effective Enterprise 2.0 seems to involve more than just blogs and wikis.
- Enterprise 2.0 is more a state of mind than a product you can purchase.
- Most businesses still need to educate their workers on the techniques and best practices of Enterprise 2.0 and social media.
- The benefits of Enterprise 2.0 can be dramatic, but they only build steadily over time.
- Enterprise 2.0 doesn’t seem to put older IT systems out of business.
- Your organization will begin to change in new ways because of Enterprise 2.0. Be ready.
While the lessons focus on the human component, Dion's article is primarily concerned with the state of the tools. This caught my attention, since I've tended to focus on the people and process aspects when considering the use of 2.0 tools in the enterprise.
So I reviewed Andrew McAfee's definition of Enterprise 2.0.
Enterprise 2.0 is the use of emergent social software platforms within companies, or between companies and their partners or customers.
And for the discerning reader,
Platforms are digital environments in which contributions and interactions are globally visible and persistent over time.
I evidently need to pay more attention to definitions.
Note to the Schipperke: Enterprise 2.0 is specifically focused on the tools.
This is truly unfortunate.
Why, you ask?
One only needs to look to SOA for the answer.
SOA started out as an architectural style, but it has morphed into something completely different. There are several underlying forces, but the primary reason appears to be an annexation of the term by vendors. This has resulted in a dilution of many of the potential benefits of SOA, as vendor lock-in and SOA-in-a-box make their way into the mind-share of IT.
Enterprise 2.0, from its starting definition, appears to be predestined for the same fate.
Is it unreasonable to believe that Andrew's refinement of Enterprise 2.0 didn't go far enough?
Are we setting CIOs up by intimating that deploying Enterprise 2.0 tools will create emergent collaboration?
Saturday, October 27, 2007
The conversation on a Pile of Lamps continues...
Bob Warfield chimed in with his view on Pile of Lamps, which includes a mapping of the style into Roy Fielding's assessment framework. Perhaps most importantly, he reminds us that Roy Fielding's contribution is not limited to REST - Roy's dissertation provides a useful tool for thinking about web architectures in general.
Bob's remark about whether a Pile of Lamps was implemented in a grid or cluster struck my fancy. A recent, yet unrelated, conversation on conversations has also been lingering around in my brain. Like two molecules floating around, the two ideas combined.
Techniques for managing large sets of machines tend to be either highly centralized or highly decentralized. Centralized solutions tend to come from system administration circles as ways to cope with large quantities of machines. Decentralized solutions tend to come from the parallel computing space, where algorithms are designed to take advantage of large quantities of machines.
Neither approach tends to provide much coupling between management actions and application conditions. Neither approach seems well adapted for any form of semi-intelligent dynamic configuration of multi-layer web applications. Neither seems well suited for non-trivial quantities of loosely coupled LAMP stacks.
I've recently been contemplating a constellated approach, where various subsets of machines engage in conversation amongst themselves.
As an example, a set of application servers might engage in a conversation concerning their general status. A failover heartbeat is a simple form, but the idea encompasses larger classes of interchange and generalizes the model. If the servers start to become overloaded, the conversation might converge on a decision to spawn a conversation between services capable of contributing to a solution. That conversation could, for instance, result in a decision to add another server to the pool.
Once an additional server is made available, the application pool would be responsible for initiating the provisioning of the new server and for integrating it into the pool. The information necessary for proper provisioning and integration is contained within the application pool rather than some external source.
Basically, it's a pull rather than push model, with localized conversations driving actions.
A fully developed model for inter-machine decisions would most likely become unbearably complex and subject to emergent behaviors, but a basic implementation should be capable of providing an adequate framework for many situations.
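To make the pull model concrete, here is a minimal Python sketch of a pool conversation converging on a scale-out decision. The names, the thresholds, and the median-based consensus rule are all invented for illustration; a real implementation would involve actual inter-machine messaging rather than a shared object.

```python
import statistics

# Hypothetical sketch of a pool "conversation": each server reports its load
# via heartbeat, and the pool converges on a decision to request another
# server when the consensus view says it is overloaded.

class ServerPool:
    def __init__(self, threshold=0.8):
        self.loads = {}            # server name -> last reported load (0.0-1.0)
        self.threshold = threshold

    def heartbeat(self, server, load):
        # The simple form: a status heartbeat from one pool member.
        self.loads[server] = load

    def converse(self):
        # The generalized form: the pool examines its shared view and
        # decides whether to spawn an "add a server" conversation.
        if not self.loads:
            return None
        if statistics.median(self.loads.values()) > self.threshold:
            return "request-server"   # pull: the pool asks for capacity
        return None

    def integrate(self, server):
        # The pool itself provisions and absorbs the new member.
        self.heartbeat(server, 0.0)
```

The point of the sketch is the locality: the decision to grow, and the knowledge needed to integrate the newcomer, both live inside the pool rather than in an external management system.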
I haven't completely thought through the idea, but I thought I'd toss it out for comments.
Friday, October 26, 2007
A recent article by Tom Haskins, Outgrowing reflexive thinking, explores a barrier to reflective thinking.
This article caught my attention while considering mechanisms used as substitutes for conversations.
Perhaps a penchant for one-way communication is merely one of the defense mechanisms of our reflexive thinking.
Thursday, October 25, 2007
Wednesday, October 24, 2007
Tuesday, October 23, 2007
James McGovern has written a wonderfully thoughtful article entitled Closing Thoughts on the Tulsa Tech Fest.
Among other things, he mentions conversations uncorrupted by canines and equines. He reminds us that discussions of technology should not be distant memories. He reminds us of our original inspiration.
As I read his article, I pondered the topic of conversations.
My first thought was the now commonly uttered phrase
Markets are conversations
To be sure, some might be tempted to convert this into some argument for business 2.0 - a whirlwind of technology for enabling the workers to collaborate and share. That's fine, but it's not quite what I had in mind.
As I continued to ponder the topic, I came up with a list which reflects part of the problem.
- Projects are not conversations
- Meetings are not conversations
- Presentations are not conversations
- Email storms are not conversations
Ok, so what do other Enterprise Architects do to directly foster and participate in conversations? I am honestly interested in knowing. I'm not just talking about the CxOs and Veeps - I'm talking about the techie types still in touch with the passions of our youth...
Monday, October 22, 2007
Courtesy of Stephen Downes, in Information E/Revolution and a Vision of Students Today:
ervolution - the endorsement of change, or change itself, as experienced by those who are not sure they want it
I don't know about you, but I anticipate several opportunities for using the word in the relatively near future.
Sunday, October 21, 2007
In Circumventing the CIO - What's the Harm?, Andy Blumenthal enumerates some of the potential issues when technology is introduced outside of normal IT channels.
In simplest terms, what Andy describes is a classic conflict between IT and the rest of the company. To complicate matters, the ever increasing consumerization of IT makes it easier to bypass the traditional list of project hurdles imposed by most IT organizations.
I am, however, somewhat dismayed at the list Andy provides. Despite his reminder about striking a balance, the list might lead one to believe that there is no room for risk in business. On the contrary, business is about managing, sometimes leveraging, risk. It is perfectly reasonable to expect that business can tolerate a certain amount of non-compliance to the risk aversion common in IT. In fact, there are circumstances where it must be part of a core business strategy. The key, as always, is to ensure the level of risk aligns with the needs of the enterprise.
If the level of non-compliance is out of alignment with business needs, the problem still reduces to one of conflict, rather than non-compliance. Regardless of whether IT is overly strict or the user is overly cavalier, the conflict is best addressed if IT starts adopting a posture more in line with basic conflict resolution/management/transformation concepts.
To be fair, Andy works in an industry familiar with bureaucracy and the command-and-control model. At the same time, it seems all IT organizations incur another type of risk if they do not allow a certain amount of free movement around the edges.
(Bonus points for fostering and participating in that free movement around the edges)
Saturday, October 20, 2007
Log management - the next hot meme!
Hmm... maybe not...
Well, at any rate, an article from Jon Oltsik caught my eye.
In The invisible log data explosion, Jon reports on ESG research regarding how much log data is being processed by large companies.
This lends weight to Tim Bray's Wide Finder as more than a mere thought experiment. Many large companies are currently processing more than 1TB of log data per month, and 10TB per month is within sight.
As Tim mentions, the most common techniques for processing log data do not cleanly scale into the impending multicpucore system architectures.
Jon reiterates the growing need for a serious look at how log data is processed.
My contention is that soon we will be talking about log management architecture and log management services the same way we discuss SOA and business intelligence today.
Regardless of whether it attains the venderprise status of SOA and BI, I sincerely hope we'll see more interest in fresh approaches to improving the current state of log management.
Beyond the matters of scale discussed by Tim and Jon, there are several other aspects to log management I find troublesome.
- The feedback loops currently provided by log data tend to be glacial.
- The actions prompted by log events tend to involve humans. So-called self-healing systems have been created, but many suffer from narrow application or excessive buzzwords.
- The current models for log management severely inhibit the amount of data logged, reducing the potential value of log data.
- Log management is the second-class citizen of second-class citizens, relegated to an operational tool used to victimize on-call personnel, gather security events, and summarize hard-drive failures. Ok, I'm overstating somewhat, but the point stands.
Logging 2.0, baby! Yeah!!
Friday, October 19, 2007
Bob Warfield has a pair of articles that touch on ideas I've been pondering recently.
Virtualization vs Multitenancy at Workstream: SaaS Quandry?
Lack of Good Platforms is Stunting SaaS and Business Web 2.0
The two articles provide interesting analysis of key issues in SaaS and PaaS.
The connection between these topics and a recent conversation about Lamps (also here) with Mark Masterson seems particularly interesting.
For example, it is possible to implement the logical equivalent of a multitenant environment using a pile of lamps. It would require a level of design not normally found in cluster and virtualization environments, but it's possible. At the very least, it could provide useful fodder for other people interested in the topic.
Whether it makes sense is another topic, but what the hey - it's all part of a conversation...
Wednesday, October 17, 2007
In Lamps, lighting our way..., Mark Masterson mentions the age old topic of centralization vs. decentralization. The pendulum swings.
Is the vacillation as real as it appears?
Is it more appropriate to view it as a one way flow, with technology generally moving from the edge to the center?
Here's the idea.
There is a relatively steady amount of centralized IT technology. The amount is usually governed by some high-level metric such as percentage of revenue. This amount varies by company, but tends to act as a top-level volume knob for centrally captured budget dollars spent on 'IT'.
The decentralized introduction of technology at the edge is governed by two primary factors - a threshold dollar amount and the degree to which a centralized resource can provide results to the edge.
Since, by definition, a central IT can only provide a finite amount of capability to the edge, there is always an amount of unfulfilled demand.
Once the cost of a new technology drops below a threshold dollar amount, the edge is now faced with the possibility of acquiring the technology. This is particularly true if the new technology can bring a measurable value to the edge. Profits tend to trump politics.
A sufficiently large level of adoption of a particular technology at the edge will gradually prompt a centralization of that technology. Meanwhile back at the top-level PoR volume knob, older technologies are pushed out to make room for the new.
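As a toy illustration of the flow sketched above, here is a hypothetical Python simulation. The cost curve, decay rate, and thresholds are entirely invented; the only point is to show the mechanism of a falling cost crossing the edge's threshold, followed by adoption eventually triggering centralization.

```python
# Toy simulation of the edge-to-center technology flow described above.
# All numbers (cost curve, thresholds) are invented purely for illustration.

def simulate(initial_cost, edge_threshold, adoption_to_centralize, years=10):
    cost = initial_cost
    edge_adopters = 0
    history = []
    for year in range(years):
        cost *= 0.7                      # the technology gets cheaper each year
        if cost < edge_threshold:
            edge_adopters += 1           # below the threshold, the edge buys in
        # sufficient adoption at the edge prompts centralization
        state = "centralized" if edge_adopters >= adoption_to_centralize else "edge"
        history.append((year, round(cost, 2), state))
    return history
```

Run with any declining cost curve, the model reproduces the one-way flow: technology appears at the edge first, then migrates to the center once adoption is broad enough.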
One might argue that there is still a pendular swing, with the power of technology shifting between the edge and the core. It seems reasonable to wonder if this is merely an artifact of our own interests in particular classes of technology, or the result of natural changes in the rate with which edge technologies migrate to the core. Would we still view it as pendular if we accounted for all technologies subject to the phenomena?
What do you think?
Tuesday, October 16, 2007
Stephen Downes posted a blurb on eLML.
This looks interesting. It's an XML framework for creating e-learning content in Eclipse. It supports an interesting array of output formats (e.g. xhtml, PDF, ODF, IMS, and SCORM).
It's intriguing to imagine a certain class of subject matter experts using it to provide e-learning content to unleash the teacher in us all.
Is it difficult to imagine documentation as educational material, perhaps turning it into SCORM packages for an LMS?
I'm not sure I like the tight integration with Eclipse, but some readers might consider that a feature.
In case you're interested, the developers have plans for a WYSIWYG web interface in late 2007.
1. /me checks watch
Monday, October 15, 2007
In Hobbies that do not involve gadgets, Gadgeteer Julie poses the topic of hobbies with no gadgetry involved.
She mentions her acoustic guitar and dulcimer, and asks readers to post their own non-gadget related hobbies.
I hate to break it to her, but I'm thinking musical instruments are probably one of the original gadgets.
(Show of hands - how many starving techies are reformed starving musicians?)
Sunday, October 14, 2007
This posting was triggered from a conversation with Mark Masterson (available here), which sprang from topics mentioned in Tim Bray's Wide Finder blog swarm.
As we watch a fundamental shift in system architecture, migrating towards large quantities of processors/cores on single machines, many people are discussing a corresponding need for shifts in programming languages, programmer skills, and techniques.
Rather than assuming computer languages and coding techniques will evolve to meet this change, it seems prudent to consider the possibility of an environment containing thousands of little SOA web services and various other LAMP-based applications.
I'm less interested in whether this is the right thing, as much as I'm interested in the possibility of anticipating it and preemptively shaping it to avoid some of the potential pitfalls.
Or as Mark puts it,
I think the great challenge for us is how to find a way to enable and allow that paradigm, as a welcome and valid part of our EA.
Is the answer a combination of LAMP, embedded computing, cluster management, and virtualization?
For its part, the LAMP stack brings an emphasis on
- HTTP-based services
- multiple storage models (DB and Filesystem)
- System-level containment
- Commodity components
- Asymmetrical host and target environments
- System-level packaging
- Minimize ongoing support
- Uniformity of management
- Holistic view of many machines
Here is a parting thought. While cogitating on the topic, I recalled a posting from Andrew Clifford. In The dismantling of IT, he postulates a simplification of IT Architecture. I mildly questioned the viability in this post, but I'm beginning to wonder if some variation of this approach is capable of providing part of what Andrew is pondering.
Saturday, October 13, 2007
Tim Bray posted a follow-up to his Wide Finder series - WF IX - More, More, More. I was amused to see a reference to one of my postings.
Alas, no - I did not provide any code. This, of course, prompted an age-old question.
Without further ado, here are the top 10 answers to the question "Do IT Architects code?".
10. Why resort to code when we can torment you with PowerPoint
9. Gimme a sec - the answer is somewhere in one of these 3-ring binders
8. Only when explaining things to software weenies
7. Yes, but it was decommed around December of 99
6. No, but I know several coders
5. Most definitely! Right here in this box, next to the "Unit Test" box
4. Do models count? They're in UML!
3. map(chr hex, unpack '(A2)*', '596573')
2. Only when nobody is looking
1. Hehe... Coding is so pre-web4.2...
Friday, October 12, 2007
In Newsletters that teach, Matt Linderman points out a good example of how to help users kick ass.
Providing factual information on a software feature is fine, but what if it's combined with a broader tip that helps the user apply the feature to their advantage?
This idea can be extended beyond simple tips and tricks. The idea is simple - provide the basic information and augment it with guidance on potential applications of the information.
Does our documentation provide guidance on when (or when not) to use a feature?
When we are asked to provide information on a topic, do we take the extra step to provide perspective to allow others to reapply the information to other situations?
Does our information help users become better users?
Thursday, October 11, 2007
Microsoft Research and Mitsubishi Electric Research Labs have teamed up to create LucidTouch, a prototype device with an interesting twist on touch display.
Wednesday, October 10, 2007
In Enterprise Architecture and Agile, Ed Gibbs chimes in on the topic of whether EA and Agile are inherently incompatible.
The author of the article he references takes the stance that the two are probably incompatible. Ed differs from this view. He invokes the reminder that any incompatibility is probably due to an ivory tower model for EA.
I partially agree, but would like to contribute another facet to the overall conversation.
First off, I'm near the front of the line when it comes to the topic of EA needing to improve its ability to provide substance and value. By some accounts, some of us will turn into pumpkins around the year 2012.
Agile methods can indeed be a useful addition to the EA tool belt. It should be noted, however, that not all aspects of an architecture are an obvious fit for an incremental or agile model. When they do fit, the increments are sometimes on a longer time scale than those encountered in most agile development efforts.
This brings me to my real point.
There is a tyranny in both the EA ivory tower and the scrum. There is also a tyranny in PMBOK and ITIL. All of them seem highly prone to monoculture in the name of standardization and simplification.
We are allergic to diversity. However... Diversity leads to resilience. Resilience leads to survival.
I see no inherent incompatibility between EA and Agile.
Tuesday, October 09, 2007
Continuing my contemplation of ways to improve the effectiveness of PLEs, I noticed an article from Clark Quinn, Formal, informal, and information foraging.
In addition to providing several interesting links, he reminds us of the role information architecture can play in this age of foraging. If there is a play for PLEs as an application, I suspect it will come in the form of an interface that restructures the data to increase its absorption rate.
Clark also has an interesting article, Filling the informal gap, which asks whether there is an important middle ground between formal and informal learning. Perhaps this is the sweet spot for subject matter experts and thought leaders to contribute learning material.
On a mildly related topic, it looks like Mike Kavis has made interesting progress towards turning Enterprise Architecture into a collaborative endeavor. I hope he provides future updates on his efforts.
Monday, October 08, 2007
Tom Haskins posted an interesting article regarding Personal Learning Environments, growing changing learning creating: The next killer app?
The PLE is indeed already here. It is all around us.
I doubt most people view it as an explicit learning platform, but perhaps that is its most powerful feature.
My recent thoughts on PLEs have centered around what can be done to improve their effectiveness as a learning platform.
What improvements can educators bring if they leave the testing and tracking behind?
Is learning object metadata a useful addition to "web2.0"?
Is it better to avoid viewing the PLE as a learning platform? Is its value contingent on its current zen-like state?
Sunday, October 07, 2007
Ok, I've been bit by Tim Bray's Wide Finder meme.
I noticed the conversation swarm as it bubbled up, but didn't pay too much attention. Mark Masterson's article It's Time to Stop Calling Circuits "Hardware" caught my attention, as I have pondered the plasticity of the boundary between hardware and software in a previous life.
So I've been digesting the conversation swarm. It's one heck of an interesting read.
Tim presents a problem case that frames a fundamental shift occurring in modern CPU/system architectures. The shift is moving us away from ever increasing CPU speeds towards ever increasing CPU counts. Certain classes of problems are extremely well suited for the shift to multicpucore architectures. Other problems gain no direct benefit, particularly if they are migrated without change. Tim uses the problem of summarizing log file data as an example of this latter case.
Without brainpower focused on this aspect of the problem, the techniques being employed to increase aggregate compute capacity will not provide much benefit for many of the common tasks performed in IT shops.
There are three interesting aspects to Tim's conversation swarm. Two are explicit. The third is implicit.
The first aspect consists of all the solutions for the stated goal - how to leverage the latest trend in processor/system architectures for the seemingly mundane task of processing log data.
For what it's worth, here are my first thoughts on the problem of leveraging multiple cpus for the task of processing log data. My preference leans towards use of existing technology, most likely to be implemented by the people most likely to feel the pain.
Divide and conquer: (the sysadmin in me)
- Coerce the logging engine(s) to dump into multiple log files (to multiple disks or disk channels if necessary).
- Run a pile of processes to process the log files independently.
- Consolidate the data - either as post processing or incrementally via some form of IPC.
- The choice of language is immaterial, but history would probably vote for perl or shell goop
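The divide-and-conquer steps above might look something like the following sketch (the post-processing consolidation variant). History might vote for perl or shell goop, but Python is used here for brevity; the log format and the regex are invented for illustration, and a real deployment would point the workers at separate disks or disk channels.

```python
import multiprocessing
import re
from collections import Counter

# Hypothetical sketch: each worker process summarizes one log file
# independently, then the parent consolidates the per-file counts.
HIT_RE = re.compile(r'"GET (\S+) ')

def summarize(path):
    # One process per log file: count GET requests per URL.
    counts = Counter()
    with open(path) as f:
        for line in f:
            m = HIT_RE.search(line)
            if m:
                counts[m.group(1)] += 1
    return counts

def consolidate(paths):
    # "Run a pile of processes" and merge the independent results.
    with multiprocessing.Pool() as pool:
        total = Counter()
        for partial in pool.map(summarize, paths):
            total.update(partial)
    return total
```

Each worker touches only its own file, so the approach scales with the number of files (and spindles) rather than the number of clever language features.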
Streams and Triggers: (mentioned in the conversation comments)
- Hook into the log stream(s)
- Spawn readers for the various data collection functions
- Send events from the log stream(s) to the readers, processing the data as it's received
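A threaded Python sketch of the streams-and-triggers idea, assuming the log stream is simply something iterable; the collector predicates are invented examples of "various data collection functions".

```python
import queue
import threading

# Hypothetical sketch: a pump fans log events out to several collector
# threads, each maintaining its own running summary as data is received.

class Collector(threading.Thread):
    def __init__(self, match):
        super().__init__()
        self.events = queue.Queue()
        self.match = match      # predicate deciding which events this reader wants
        self.count = 0

    def run(self):
        while True:
            line = self.events.get()
            if line is None:    # sentinel: the stream has closed
                break
            if self.match(line):
                self.count += 1

def pump(stream, collectors):
    # Hook into the log stream and send each event to every reader.
    for c in collectors:
        c.start()
    for line in stream:
        for c in collectors:
            c.events.put(line)
    for c in collectors:
        c.events.put(None)
        c.join()
```

The same shape works whether the stream is a tailed file, a syslog socket, or a message bus; only `pump`'s input changes.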
Neither of these two solutions is particularly interesting, but I imagine they are the most likely to be implemented in the wild.
My final offering is more of a meta solution.
- Formulate a red herring idea
- Pose it to a bunch of brainy people
- Watch them chew on it
- Gain new insight
The second interesting aspect of the conversation swarm is the rumination over the relationship between computer languages and the shift in cpu/system architectures.
One participant (sorry, can't recall the link) offered the suggestion that it's probably easier to improve a language like Erlang than it is to modify the mainstream languages to provide the capabilities inherent in Erlang.
I don't disagree with this point of view, but Tim's observation regarding the widespread use of perl/awk/etc points to a fundamental fact in IT shops - the tool must be wickedly effective at getting the job done. Optimal performance is often optional.
So how to effectively use 64-1024 CPU machines?
First off, who says our current technologies are effectively using the existing architectures? Follow things from the hardware up the application stack - it staggers the mind.
The reality is we seldom go back and fix. We come up with clever ways to incrementally capitalize on architectural changes. We reframe existing code in ways that take advantage of changes in architectures. I'm overgeneralizing somewhat, but no matter.
At the risk of sounding like a pessimist, I think we'll end up with thousands of little SOA web services engines. Each one handling a single piece. Each one with its own HTTP stack. Each one using PHP/Perl/Ruby/etc to implement the service functions. Each one sitting on top of a tiny little mysql database. Eeeep! I just scared myself - better drop this line of thought. I'll have nightmares for weeks.
The third interesting aspect of the conversation is how it shows some of the most important characteristics of the modern concept of networks vs. groups. It's decentralized, it's unlikely to be swayed by an alpha geek, it creates a variety of unanticipated results, it's a bit messy, and it provides fertile ground for exploring the topic at some point in the future.
Saturday, October 06, 2007
In What an Enterprise Architect needs to know, Adrian Grigoriu lists a plethora of topics an EA team is expected to navigate.
I am conflicted in my opinion of the list.
On one hand, it seems woefully incomplete. It omits several things I use on a regular basis. I'm tempted to enumerate them, but fear making the list even more daunting.
On the other hand, the list seems to focus on the morass of technologies encountered in a modern IT organization. While I'm tempted to rant on this topic, I'm more interested in focusing on what an EA needs to understand. To this point, many technologies can be digested based on several key points.
- Some things never change
- Some things are really just a variation on another thing
- Some things are simpler than they appear
- All things came from another thing
- How to predict consequences
- How to read
- How to distinguish truth from fiction
- How to empathize
- How to be creative
- How to communicate clearly
- How to learn
- How to stay healthy
- How to value yourself
- How to live meaningfully
These seem the more difficult topics to master.
1. Maybe that's the true secret to our success.
Friday, October 05, 2007
Todd Biske works through the question of Service focus or product focus? He references several articles, providing several pieces of interesting reading material.
Something occurred to me while reading through the links.
Is it possible this is another form of the particle vs. wave duality?
Thursday, October 04, 2007
Wednesday, October 03, 2007
CIO Insight posted an interview of Bob Otto, the retiring CIO/CTO of the US Postal Service. I found the interview from a link posted by Bob Gourley in How is the USPS like your IT enterprise?.
Bob Gourley quotes a segment that identifies Bob Otto's three guiding principles, summarized below in bullet form.
- Standardize everything
If you find a process you like, standardize it
- Centralize everything you can
If you have services in five different places and you can centralize them, you will have reliability, predictability
- Simplify
The computer has taken over your life, so I want it to be intuitive [for people to operate and manage]. I also test my own dog food.
These seem quite reasonable on the surface, but let's think about them for a moment or two.
Standardizing on a process you like? This presupposes that what you like is actually the best fit for the organization. I suppose one could assume that "like" includes this as an assumption. Hmm...
Centralize everything you can? This presupposes that all services are best delivered centrally. I agree that increased predictability is likely, but I question reliability and a small list of other potentially important attributes. Hmm...
Simplify? Yes - we have one point of agreement. Generally...
Still convinced these three principles are sound? Perhaps they are true for some environments. A slow growth company in a mature market primarily requiring maintenance activities might benefit from these principles. Very few businesses, however, can assume these conditions.
Still convinced? Let's combine these principles and see where it might lead us.
We are newly appointed as CIO of example.com.
We discover a variety of seemingly duplicated services spread across the landscape. Hungry to show value to the business, we centralize them to achieve reliability and predictability. We even gain some economies of scale, so we get some cost savings to boot.
As part of the centralization, we decide one of the service implementations has the most efficient and effective processes. This becomes the standard process for all of the newly centralized services.
As we centralize, we notice several portions where some simplification can occur, so we do some process trimming. We've also eliminated some of the tasks performed by the once disparate services, so we're starting to see some dramatic moves towards simpler processes and systems. So far, so good.
We expect dramatic improvements. We might even expect some appreciation for our efforts.
For a time, we might actually receive the kudos for our accomplishments.
Then we notice a curious phenomenon. The grumbling continues. Needs are not met. Changes are still required. Costs are still steep.
We investigate the situation.
We are shocked at what we find.
It turns out that our centralization bulldozing exercise caused some key functionality to get pitched overboard. The functionality confused the centralization task force, so they accidentally left it off the analysis spreadsheets.
We discover that our choice for most likable process was in fact the result of decisions made during a golf game. (the golf game immediately following one particularly frustrating day of difficult process discovery discussions)
Alas, the process simplification activities further exacerbated the problem. As it turns out, the process simplification team couldn't fit certain features into their model of a perfect world. (more 'inadvertent' deletions)
We also notice that our competitors have been watching our strategic moves, countering them with a strategy based on more balanced principles. We watch as our competitor eats us for lunch.
Tuesday, October 02, 2007
(Michael) Coté asks Is Google Stalking OK?
Outside of the HR/hiring context? Sure - you bet. That's half the fun of meeting people. :-)
As for potential issues regarding the practice within HR? That's an interesting question. There are limitations to what a potential employer can ask a previous employer, but I'm not sure if this limitation is applicable. There are legions of googleable offenses that don't fall under the purview of discrimination.
I wonder if this is a non-issue if done as part of a preliminary screening process with no interview taking place. Regardless, some organizations will likely want or need to consider updating their policy, if for no other reason than to feed the demons of risk avoidance.
Regardless, I'm sure we'll hear of associated lawsuits in the future. This seems inescapable.
"Yes, we're checking references - all the references to you we can find on Google."
Posted by Aloof Schipperke at 6:35 PM
Monday, October 01, 2007
Andrew Clifford has posted The dismantling of IT, in which he ponders the result of a wide scale trimming down of IT architectures.
No arguments on the concept here, but I'm still a bit puzzled by this particular article in the series.
The essence, in my opinion, is the following statement,
The most obvious change is that the new architecture would remove technical layers, such as databases and middleware. These capabilities would of course still exist, but they could be standardised and hidden inside the systems. They would not need so much management, and we would need fewer specialists.
This "still exist, they could be standardised and hidden inside the systems" seems no different than the waves of abstraction we have seen in the past.
Perhaps elaboration is in order.
I do not reject the concept because of a dislike for the idea of dismantling of traditional IT. Far from it... I simply do not see an avoidance of the fundamental problems mentioned in previous posts.
Have I missed a step?