Architecture and Planning

“Action without planning is folly but planning without action is futile.”

In this write-up, we explore the intimate connection between architecture and planning. At first blush, they seem to be completely separate disciplines. On closer examination, they appear to be two sides of the same coin. But in the final examination, we find that they are intimately intertwined but still separate and potentially independent. The motivation for this paper was an observation that much of our work deals with system planning of some variety. And yet, there is virtually nothing on our web site on this topic.

On one level that may be excusable. There is nothing drastically new about our brand of planning that distinguishes it from planning as it has been practiced for decades. On the other hand, system architectures typically are new and evolving, and there are new observations to be made. But there’s more to it than that. We have so baked planning into our architectural work that we no longer notice that it’s there. This paper is the beginning of an attempt to extricate the planning and describe it as a subdiscipline of its own. Are architecture and planning the same thing? Can we have one without the other? This is where we begin our discussion.

Certainly, we can have planning without architecture. Any trivial planning is done without architecture. We can plan a trip to the store or a vacation without dealing with architecture. We can even do a great deal of business planning, even system planning, as long as the implicit assumption is that the new projects will continue using any existing architecture. So certainly, we can have planning without architecture. But can we have architecture without planning? Well, certainly it’s possible to do some architectural work without planning.

There are two major ways this can come to be. One is that we can allow developers to develop whatever architecture they want without subjecting it to a planning process. The end product of this is the ad hoc or accidental changes that so characterize the as-built architectures we find. The other way, which is as common, is to allow an architectural group to define an architecture without requiring that they determine how we get from where we are to where we want to be. Someone once said, “Action without planning is folly but planning without action is futile.” The architect who does architectural work without doing any planning is really just participating in an exercise in futility.

An intentional architecture requires a desired “to be” state, where some aspect of software development, maintenance or operation is better than it currently is. There are many potential aspects to the better state in the “to be” architecture: it could be less risky, it could be more productive, it could scale better, it could be more flexible, it could be easier for end-users to use, it could be more consistent, etc.

What they all share is that the “to be” is not the same as what exists now, and migrating from the “as is” to the “to be” requires planning. In the nineties, we seemed able to get away with a much more simplistic view of planning. “Rip and replace” was the order of the day once you determined what the target architecture looked like. Most organizations now have far too much invested in their legacy systems to contemplate a “rip and replace” strategy to improve either their architectures or their applications. As a result, the onus is on the architects to determine incremental strategies for shifting the existing architecture to the desired one. The company must continue to run through the potentially long transition period.

The constraints of the many interim stages of the evolving architecture and applications create many challenges for the planner. In some ways, it’s much like the widening of a heavily trafficked highway: it would be quite simple to widen it if we could merely get all the traffic off of it, but given that we can’t, there is often an extremely elaborate series of detours, each of which has to be planned, implemented and executed. In conclusion, I think we can see that architecture desperately needs planning. Indeed, the two are inseparable. While planning can certainly live on in the absence of architecture, architecture will not make any meaningful progress in any established company without an extreme commitment to planning.

By Dave McComb

What is Software Architecture and How to Select an Effective Architect

What is Software Architecture?

Originally published as What is Software Architecture on August 1, 2010

As Howard Roark pointed out in “The Fountainhead,” the difference between an artist and an architect is that an architect needs a client.

Software Architecture is the design of the major components of complex information systems, and how they interact to form a coherent whole. The identification of what constitutes the “major” components is primarily a matter of scale and scope, but generally starts at the enterprise level and works down through subsidiary levels.

The term “architecture” has been overused in the software industry to the extent that it is in danger of becoming meaningless. This is unfortunate, for it comes at a time when companies are in greatest need of some architectural direction. Architecture deals primarily with the specific configuration of technologies and applications in place, and those desired to be in place, in a particular institution. While we often speak of a “client/server” architecture or a “thin client” architecture, what we are really referring to is an architectural style, in much the same way that we would refer to “gothic” as a style of physical architecture; the architecture itself only exists in particular buildings.

It isn’t architecture until it’s built

As Howard Roark pointed out in “The Fountainhead,” the difference between an artist and an architect is that an architect needs a client. Architecture, in the built world as well as the software world, generally only comes into play when the scale of the endeavor is such that an individual cannot execute it by themselves.

Generally, architecture is needed because of the scale of the problem to be solved. The phrase “it isn’t architecture until it’s built” captures the difference: architects who produce drawings that may be interesting or attractive, but that don’t result in structures being built, have only participated in artwork, not architecture.

Dealing with the “as-is”

Another area of confusion for the subject is the relationship between the “architecture” of a procured item, and the architecture of the environment in which it is implemented. We often speak of software with a “J2EE architecture,” and while it is true the framework has an architecture, the real architecture is the combination of the framework of the procured item with the host of components that exist in the implementation environment.

In the built world we may procure an elevator system, and this may be instrumental in the height and use of the building we design and build; and while the elevator system itself no doubt has an architecture, we wouldn’t say that the building has an “elevator architecture.” This confusion of the procured parts with the architecture is what often leads people to give short shrift to their existing architecture. Sponsors may sense that their current architecture is inadequate and desire to replace it with something new.

However, unless they are in a position to completely eliminate the existing systems, they will be dealing with the architecture of the procured item as well as the incumbent one. All information systems have an architecture.  Many are accidental, but there is an organization of components and some way they operate together to address the goals of the organization.  Much as a remodeler will often employ an architect to document the “as built” and “as maintained” architecture before removing a bearing wall, remodelers of information systems would do well to do the same.

Architecture’s many layers

Architecture occurs in many layers or levels, but each is given context by the higher-level architecture. So, we can think of the architecture of the plumbing of a house: it has major components (pipes and traps and vents) and their interrelationships, but the architecture of the plumbing only makes sense in the context of the architecture of the dwelling.

It is the same in information systems. We can consider the error handling architecture, but only in the context of a broader architecture, such as Service Oriented or Client/Server. The real difference between an intentional and an accidental architecture is whether the layering was planned and executed top-down, or whether the higher-level architectures just emerged from a bottom-up process.

Beyond Building Materials

The software industry seems to equate building materials with architecture. We might talk about an architecture being “C++” or “Oracle” or “Object Oriented” (We’ve heard all these as answers to “what is your architecture?”).

But this confusion between what we build things out of and how we assemble the main pieces would never happen in the built world. No architect would say a building architecture was “brick” or “dry wall” or even “post and lintel,” even though they may use these items or techniques in their architecture.

Conclusion

No doubt there will continue to be confusion about architecture and what it means in the software world, but with a bit of discipline we may be able to revive the term and make it meaningful.

Who Needs Software Architecture?

Originally published as Who Needs Software Architecture on August 5, 2010

Most firms don’t need a new architecture.

They have an architecture and it works fine. If you live in a Tudor house, most of the time you don’t think about architecture, only when you want to make some pretty major changes. You might expect, given that we do software architecture and this is on our web site, that we would eventually try to construct this theme to say, well, nearly everyone sooner or later needs a software architect.

But that’s just not true. Most companies don’t need software architects and even those that do don’t need them most of the time. Let’s take a look at some of the situations where companies don’t need software architects.

Size

Small companies generally don’t need software architects. By small we mean companies of typically fewer than 100 people; however, this can vary quite a bit depending on the complexity of the information they need to process. If they are in any standard industry and there exists packaged software which addresses their business needs, most small-business people would be far better off adopting the package of their choice and simply living with the architecture that comes with it.

For instance, in the restaurant industry now, there is a company called Squirrel that has by far the largest market share of the restaurant management applications. You can take orders on Squirrel, print out the receipts, take the credit cards, manage your inventory, schedule your wait people, cooks, busboys, and the like. For the most part, restaurant owners should not care what architecture Squirrel uses. It has an architecture but it’s not an important concern at that scale.

Stability

Larger companies most of the time will find themselves in a relatively stable state. They have a set of applications sitting on a set of platforms using a set of database management systems communicating via some networking protocol and communicating with some set of desktop or portable devices.

No matter how varied it is, that is their current architecture, and to the extent that it is stable and they are able to maintain it, make extensions to it and satisfy their needs, that is exactly what they should do: they should live within the architecture they’ve created, no matter how accidental the architectural creation process was.

It is really only where there are relatively complex systems, where the complexity is interfering with productivity or the ability to change and respond, or where major changes to the infrastructure are being contemplated, that companies should really consider undertaking architectural projects.

What Does a Software Architect Do?

Originally published as What Does a Software Architect Do? on August 11, 2010

The Software Architect’s primary job is to help a client understand deeply the architecture they have built, understand clearly what they desire from their information systems, and help them construct a plan to get there.

The simple answer of course is that the software architect or architectural firm creates the architecture.

The more involved question is, what goes into that and what process is typically followed to get to that result? The architecture, or as we sometimes refer to it, the “to-be” or target architecture, is an architecture that does not yet exist; and in that sense it is prescriptive. However, in order to define, articulate, draw, and envision a future architecture, we must start from where the client’s architecture currently is and work forward from there.

Divining the “As-is” Architecture

A picture of the client’s current architecture is a combination of descriptive and visual representations of all the key components in the information infrastructure that currently exist. We have found time and time again that the mere inventorying, ordering, arranging, and presenting of this information has yielded tremendous benefit and insights to many clients.

Typically, the process involves reviewing whatever written documentation is available for the existing systems. Sometimes this is catalogued information such as a listing or repository of existing applications/technologies. Sometimes it’s research into the licensing of pieces of software. Sometimes it’s a review of diagrams: network diagrams, hardware schematics, etc.

The architect then interviews many of the key personnel: primarily technical personnel but also knowledgeable end-user personnel who interact with the systems and in many cases understand where other shadow IS systems live.

The end product of these interviews is a set of diagrams and summary documentation that show not only the major components but how they are interrelated. For instance, in some cases we have found it important to document the technical dependency relationships, which include the relationship of an application to the technologies in which it was created and therefore on which it is dependent. (See our article on technical dependencies for more detail in this area.)

Listening to the Stakeholders

The second set of inputs will come primarily from the business or user side of the organization. This will include interviews to establish not only what exists, and especially what exists that doesn’t work well, but also what is envisioned: what the organization wishes to live into and how it is potentially hampered by its existing information systems.

The real art comes in how we get from where we are now to where we want to be.

This is a very intense active listening activity in that we do not expect end users to be able to articulate architectural themes, needs, requirements, or anything of that nature. However, they do have the key raw material that is needed to construct the target architecture, which can be drawn out in conversation. The end product of this activity, combined with what’s known from the current architecture, is the first draft of what is called the target architecture or the to-be architecture. At this point the major themes, or styles if you will, are described and decided upon. It’s very much as if at this point the client is choosing between new urbanism and neo-modern styles of architecture.

Again, unless you know going in the style of architecture that will be required, it is best to work with architects who have a range of styles and capabilities. As the architects conceive of the overall architecture, they shift into a collaborative and consensus building mode with senior management and any other stakeholders that are essentially the owners of the long-term architecture. This process is not merely laying out the blueprints and describing them but is a longer ongoing process of describing themes, trade-offs, economics, manageability, and the like; trying out ideas and gathering feedback from the team. Again, active listening is employed to ensure that all concerns are heard and that, in the end, all participants are in agreement as to the overall direction.

Migration Plan

The real art comes in how we get from where we are now to where we want to be. First, the architects and management team need to discuss urgency, overall levels of effort, and related questions. Getting from an existing architecture to a future architecture very often resembles construction on a major arterial highway.

We all know it would be simpler and far more economical to shut down the highway for six months or a year and do all the improvements. But the fact is that most urban arterial roads are in very heavy use and shutting them down for efficient construction is not feasible, so the actual road construction project becomes a very clever series of detours, lane expansion, and, unfortunately, very often reworking the same piece of pavement multiple times. And so it is in the software industry.

Ten years ago, it was fashionable to “bulldoze the slums,” in other words, to launch massive projects that would essentially start from a green field and build all new systems that the owners could move into. There have been multiple problems with this approach over the years, the first being that the sheer size of these projects has had a dramatically negative impact on their success.

The second problem is that we are all, if you will, living in those slums; we are running our businesses on the existing systems, and it is very often not feasible to tear them down in a wholesale fashion. So, one of the duties of the architects is to construct a series of incremental projects, each of which will move the architecture forward. At the same time, many, if not all, should be designed to provide some business benefit in their own right.

This is easier said than done, but very often there is a backlog of projects that needs to be completed. These projects have documented ROI (return on investment), and it is a matter of, perhaps, re-scoping, retargeting, or rearranging each project in a way that not only achieves its independent business function and return on investment but also advances the architecture.

Balancing Short Term and Long-Term Goals

This is an area that in the past has been sorely neglected. Each project has come along focused very narrowly on its short-term payoff. The net result has been a large series of projects that not only neglects the overall architecture but also continues to make it worse and worse, such that each subsequent project faces higher and higher hurdles of development productivity that it must overcome in order to achieve its payback.

When an overall plan and sequencing of projects has been agreed upon, which by the way often takes quite a significant amount of time, the plan is ready to be converted into what is more normally thought of as a long-range information system plan, where we begin to put high-level estimates on projects, define some of the resources, and the like.

That, in a nutshell, is what the software architect does. At the completion of this process, the client knows with a great deal of certainty where he’s headed architecturally, why his destination architecture is superior to the architecture he currently has, and the benefits that will accrue once he is in that architecture.

And finally, he has a road map and timeline for getting from his current state to the desired state.

How to Select a Software Architect

Originally published as How to Select a Software Architect on August 31, 2010

Selecting a Software Architect is an important decision, as the resulting architecture will impact your information systems for a long time.

We present a few thoughts for you to keep in mind as you consider your decision. Assuming you have come to the conclusion that you can use the services of a software architect, the next question becomes, how do you select one? We’re going to suggest three major areas as the focus of your attention:

  • Experience
  • Prejudice
  • Chemistry

Experience

By experience we are not referring to the number of years of specific experience with a given technology. Suppose, for instance, that you somehow did “know” that your new architecture was going to be Java J2EE-based (though a decision like that would normally be part of the architectural planning process, and it is often detrimental to “know” this information going in). Even if you did know this, it would not necessarily be beneficial to base your selection of an architect on it.

This would be akin to selecting your building architect based on the number of years of dry-walling or landscaping experience that they had. At the same time, you certainly do not want inexperienced architects. The architectural decisions are going to have wide-ranging implications for your systems for years to come, and you want to look for professionals who have a great depth of knowledge and a breadth of experience across different companies and even different industries that they can draw upon to form the conclusions that will be the basis for your architecture.

Prejudice

By prejudice we mean literally prejudgment. You would like to find an architect as free as possible from pre-determined opinions about the direction and composition of your architecture. There are many ways that prejudice creeps into architecture, some subtle and some not so subtle. For starters, hardware vendors and major software platform vendors have architects on staff who would be glad to help you with your architectural decisions. Keep in mind that most of them either overtly or perhaps more subtly are prejudiced to create a target architecture that prominently features their products, whether or not that is the best solution for your needs.

Other more subtle forms of prejudice come from firms with considerable depth of experience in a particular architecture. You may find firms with a great deal of experience with Enterprise JavaBeans or Microsoft Foundation Classes, and in each case it would be quite unusual to find them designing an architecture that excluded the very things they are familiar with. The final source of prejudice comes from firms who use architecture as a loss leader to define, and then subsequently bid on, development projects. You do not really want your architecture defined by a firm whose primary motive is to use the architecture to define a series of future development projects.

Chemistry

The last criterion, chemistry, is perhaps the most elusive. We’re considering chemistry here because a great deal of what the architect must do to be successful is to elicit from the client, the client’s employees, potentially their customers and suppliers, and from existing systems their needs, aspirations, and constraints, and to hear all of that in full fidelity. For this to work well there must be a melding of the cultures, or at least an ability to communicate forthrightly and frankly about these issues; really, the only way to make this sort of determination is through interviews and references. The selection of the software architect is an important decision for most companies, as the creation of the architecture is likely to be the single most important decision that will affect future productivity as well as the ability to add and change functionality within a system.

A Semantic Enterprise Architecture

We do enterprise architectures, service-oriented architectures, and semantics. I suppose it was just a matter of time until we put them together. This essay is a first look at what a semantic enterprise architecture might look like.

What problem are we trying to solve?

There are several problems we would like to address with semantic architecture. Some of them are existing problems that are just not solved well enough yet. Even though many organizations intellectually understand what a service-oriented architecture is and what it might do for them, the vast majority remain unmotivated to invest in the journey to migrate in that direction.

Certainly we want the semantic enterprise architecture to address the same sorts of situations that the service-oriented architecture can handle, such as the ability to rapidly adopt new applications, to swap out technologies and applications at will, and to do so with commodity-priced services and technologies. But let’s also focus on what additional problems a semantic enterprise architecture could address.

Dark matter and dark energy

As strange as it may seem, there is mounting evidence that the knowable and visible universe, that is, the earth, the sun, the planets, the stars, the galaxies, and all the interplanetary and intergalactic dust, represents some four to five percent of the total amount of “stuff” that there is in the universe. We live in the froth of a giant wave whose presence we can only infer.

What makes up the rest of the universe is what physicists now call dark matter and dark energy. Dark matter makes up some 25% of the mass and energy of the universe and is the primary force holding galaxies together, as the gravitational attraction of the masses of the many stars and black holes is insufficient to do the job on its own.

Rather than the universe’s expansion slowing down since the big bang as the amount of mass and dark matter would suggest, apparently the universe is moving apart at an even faster rate. The propelling force has been dubbed “dark energy” and it comprises the remaining 70% of all the “stuff” in the universe. And so it is with our corporate information systems. We fret continuously over our SAP databases or the corporate data warehouse or the integrated database system that we run our company with, but this is very much like the five percent of the universe that we can perceive. It’s comfortable to believe that’s all there is.

But it just doesn’t square with the facts. Rogue applications, such as those built in Microsoft Access, Excel or FileMaker are the Dark Matter of an information system.  Like the cosmic Dark Matter, in some fashion they are holding an enterprise together, even though most of the time we can’t see them. Messages and documents are our Dark Energy equivalent: they are the expansive force in the enterprise. And like Dark Energy in the Universe, they are undetected by the casual observer.

What Does Enterprise Architecture Have to do with Semantics?

We ignore our information dark energy and information dark matter largely because at a corporate level we literally do not understand them. In our corporate databases we’ve invested decades of effort, and typically millions, often tens or hundreds of millions, of dollars of implementation, standardization, training, and documentation in an attempt to arrive at a shared meaning for the corporate information systems. As we’ll discuss in other white papers, this has still left a great deal of room for improvement. Indeed, most of the meaning is shared only within an individual application.

Occasionally, corporations invest mightily in ad hoc semantic sharing between applications, under the guise of Systems Integration. But what we’re going to talk about today is the information dark matter and information dark energy, and bringing them into the light. With the rogue systems, what we need to know is: what is the correspondence, if any (and, by the way, it is usually considerable), between the rogue systems and the approved systems? More often than not, a rogue system is populated from either extracts or manual creation of data from approved systems. The rogue system is often created in order to make some additional distinctions or provide additional behavior or extensions that were not possible in the approved system. But this does not mean they don’t have a shared root and some shared meaning.

Occasionally, rogue systems are created to deal with some aspect of the corporation that, at least initially, appeared to be completely outside the scope of any existing application. You may have a videotape rental application or a phone number change request application or any of a number of small special-purpose systems. However, if they become successful and if they grow, inevitably they begin to touch aspects of the company that are covered by officially sanctioned systems.

What we want to do with semantics is to find and define where the commonalities lie, in such a way that we may be able to take advantage of them in the future. For the unstructured data we have an even bigger challenge. With the rogue system, once we deduce what a column in the Access database means, we have a reasonable prediction of what we are going to find in each record. This is because the Access database, while it may not be as rigorous as the corporate database, provides structure and validation.

Not so with the unstructured data. With the unstructured data, we need to find ways to find and organize meaning where every instance may be different. Every email, every memo, every reference document contains different information. The semantic challenge is to find, wherever possible, references in this unstructured data to information that is known at a corporate level. The approach here is almost exactly the opposite: in documents, people rarely refer to what we think of as meta-data or categories or classes or columns or entities, or anything like that. In documents, people refer to specific instances. They may refer to their order number in an email; they may refer to a set of specific codes in a reference manual; they may refer to a particular procedure in a procedure manual. Our semantic challenge in this case is to find these items, index them, and associate them with the meta-data and even the instances that exist at a corporate level.
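To make this concrete, here is a minimal sketch of that instance-spotting approach. Everything in it, the identifier formats, the corporate_index structure, and the link_instances helper, is invented for illustration; a real implementation would draw its index of known instances from the sanctioned corporate systems.

```python
import re

# Hypothetical corporate index: known instance identifiers mapped to
# metadata about where each instance lives in the sanctioned systems.
corporate_index = {
    "ORD-10042": {"entity": "Order", "system": "order_management"},
    "PROC-7.3": {"entity": "Procedure", "system": "policy_manual"},
}

# Illustrative patterns for the identifiers we expect to see in free text.
patterns = [re.compile(r"ORD-\d+"), re.compile(r"PROC-\d+\.\d+")]

def link_instances(document: str):
    """Scan unstructured text for known instance identifiers and
    associate each mention with its corporate metadata."""
    links = []
    for pattern in patterns:
        for match in pattern.finditer(document):
            ident = match.group()
            if ident in corporate_index:
                links.append((ident, corporate_index[ident]))
    return links

email = "Customer called about ORD-10042; see PROC-7.3 for the refund steps."
print(link_instances(email))
```

The essential move is that the matching is driven by instances, not by categories: the index holds particular order numbers and procedure identifiers, which is exactly the level at which documents refer to the business.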

Semantic Enterprise Architecture

So what’s in the architecture? The Semantic Enterprise Architecture is still primarily based on Service Oriented Architecture concepts. We want to be able to communicate between largely independent applications and services using well-defined, corporate-standard messages. These messages should be produced and consumed in a way that allows at least some local change in their structure and syntax without breaking the rest of the architecture.

But we need to go considerably beyond that. We will need a meta-data repository that links the enterprise’s shared schema with a more generalized and, at the same time, more precise description of what these things mean. This meta-data repository will be populated by a combination of machine and human inferences from the descriptions of the meta-data that exist in the many dictionaries and bits of documentation, as well as from the product of data profiling. Data profiling in the corporate systems will tell us not what we intended the data in our corporate systems to mean, but what, in practice, from how we have been using the system, it has come to mean.
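As a toy illustration of the kind of data profiling meant here (the column values and the choice of summary statistics are illustrative assumptions, not a prescribed method), consider tallying what a column actually contains, as opposed to what its data dictionary entry claims:

```python
from collections import Counter

# Values observed in one column, e.g. pulled from a table extract.
values = ["A", "A", "B", "A", None, "X9", "B", "A", "", "B"]

def profile(column_values):
    """Summarize what a column actually contains in practice."""
    total = len(column_values)
    missing = sum(1 for v in column_values if v in (None, ""))
    freq = Counter(v for v in column_values if v not in (None, ""))
    return {
        "population": 1 - missing / total,  # how often the field is filled in
        "distinct": len(freq),              # size of the de facto code list
        "top_values": freq.most_common(3),  # candidates for what it "means"
    }

print(profile(values))
# {'population': 0.8, 'distinct': 3, 'top_values': [('A', 4), ('B', 3), ('X9', 1)]}
```

A stray value like "X9" in a column documented as holding only "A" or "B" is precisely the sort of evidence that reveals what the system has come to mean in use.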

This expression of the enterprise meta-data in a rigorous format is just the beginning, the gateway for incorporating our dark energy and dark matter. The rogue systems need to have their meta-data catalogued in a fashion compatible with the enterprise meta-data repository. This will allow us at least to know when the corporate systems may want to refer to the rogue system for additional details. Conversely, it creates at least some hope that the rogue system may have a defined interface to the corporate system and may be informed if things change.

The unstructured data will be incorporated using technologies that already exist, including text interpretation, first to find any specific nouns or events that are called out in the unstructured data. Using this information, the unstructured data can be cross-referenced to instances and, by extension, into the entire enterprise data network.

How to Get Started

This all sounds a bit incredible. And the endgame is likely a ways off. But we don’t have to go to the endgame for this to be useful. As Jim Hendler says, “A little bit of semantics goes a long way.” Even if we only pick a few topics to index in our meta-data repository, and even if we choose a few well-known rogue applications to cross-reference, and even if we only grab the low hanging fruit from our unstructured data, as many companies are already doing, we will still see considerable benefit.

Many content management and some knowledge management projects are aimed at using humans to perform this style of indexing on the reference documents that we often use in our organizations. But with a little extension this can go considerably farther. As it is, it’s generally an island unto itself. But as we’re proposing here the island can be extended and incorporated into the broader enterprise landscape.

Concluding Thoughts

We’ve barely scratched the surface here. However, many of the technologies needed to make this work already exist and have been proven in isolated settings. What is needed now is companies willing to invest in the research and infrastructure required to profitably bring the other 95% of their information infrastructure into their enterprise architecture.

The Return on Investment (ROI) for Architecture

In many organizations, ROI is a euphemism for quick fix or short term payout projects.

I’ve come across a couple of articles recently on either the difficulty or the impossibility of constructing a Return on Investment (ROI) analysis for enterprise architecture projects, which, of course, we would take to also include service oriented architecture projects. While I agree with many of the authors’ points and frustrations, particularly in organizations with a policy of ROI-based project approval, I don’t completely agree with the assessment that ROI is not possible. The difficulties in constructing an ROI for architectural projects lie in two major areas: timeframe and metrics.

Timeframe Issues

In many organizations, ROI is a euphemism for quick-fix or short-term-payout projects. It’s not so much that an architectural project does not have a return on investment; it’s that the return typically accrues over a considerably longer period than befits a quick fix. Organizations that overemphasize ROI are typically addicted to short-term solutions.

While a great deal of good may be done under a shorter time horizon, there is often a dark side to it: the quick fix is often implemented at the expense of the architecture. What remains after a long period of time is an architecture of many arbitrary pieces. As a result, each succeeding project becomes harder to accomplish, and maintenance costs go up and up. These, however, weren’t factors in the ROI for the quick-fix project, and so the project approval scooted on through.

Indeed, if there were no return on investment for an architectural project, one would argue that we shouldn’t do it at all; if we were going to spend more than we would ever save, we should just save ourselves the headache. However, I, and most people who give it serious thought, recognize that there is a great deal of payoff in rationalizing one’s architecture, and that the payout occurs over a long period of time in increased adaptability, reduced maintenance, additional reuse, etc.

The ROI Denominator

Another piece of the ROI equation is the denominator. In other words, return on investment is the benefit, or return, you got divided by the investment. One of the difficulties in justifying some architectural projects is that the denominator is just too large: some architectural projects will overwhelm and outweigh any conceivable benefit. However, these projects do not have to be large and costly to be effective. Indeed, we find that the best combination is a fairly modest architectural planning project, which then uses monies that would have been spent anyway, supplemented with small bits of seed money, to grow the architecture in the desired direction incrementally over a long period of time. Not only does this reduce the denominator; more importantly, it reduces the risk, because with any large infrastructure-driven architectural project there is not only a large investment to recoup but also the risk that the architecture might not work at all.

Getting the Right Metrics

The final problem, even after 50 years of software development, is that we’re still not routinely collecting the metrics we need to make a rational decision in this area. Sure, we have a few very gross metrics, such as percent of IT spending to sales or the proportion of new development versus maintenance; and we have some low-level metrics, such as cost per line of code or cost per function point. But neither of these is much help in shining light on the kinds of changes that will make a great economic difference.

To do this, we now need to look at the numerator of the ROI equation. The numerator will consist primarily of benefits expressed as cost savings relative to the current baseline (written as a formula after the list below). Within that numerator we will divide the activity into the following categories:

  • Operational costs
  • User costs
  • Maintenance costs
  • Opportunity costs
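One way to write the decomposition down (a sketch of the calculation, not a standard formula) is:

\[
\mathit{ROI} \;=\; \frac{\Delta C_{\text{operational}} + \Delta C_{\text{user}} + \Delta C_{\text{maintenance}} + \Delta C_{\text{opportunity}}}{\text{Investment}}
\]

where each \(\Delta C\) is the cost saving in that category measured against the current baseline, and the denominator is the cost of the architectural project itself.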

Operational Costs

Operational costs are the total costs required to keep the information systems running. This includes hardware costs (leases and amortization of purchased equipment) and software licensing costs (one-time costs and especially annual support costs). It also includes all the administrative costs that must be borne to support the systems, such as backup, indexing, and the like. In addition, we should include the costs of direct support, such as help desk support, which is required just so that users can continue to use the software systems as intended.

This category of costs is relatively easy to get in the aggregate: from your financial statements you probably have a pretty good idea of these costs in total. In many organizations it’s more difficult to break these costs into smaller units, where the investment analysis really occurs. This includes breaking them down into application-level costs as well as cost per unit of useful work, which we will talk about more toward the end of this article.

An enterprise architecture can very often have dramatic impacts on operational costs. Some of the more obvious include shifting production from inappropriate hardware and operating system platforms; for example, in many cases mainframe data centers are cost-disadvantaged relative to newer technology. The architecture can also help with the process of rationalizing vendor licenses by consolidating database management systems or application vendors, where there can be considerable savings. An architecture project may be targeted at changing an unfavorable outsourcing arrangement or, conversely, introducing outsourcing where it’s economically sensible. The longer-run goal of service oriented architecture is to make as many as possible of the provided services into commodities, with the specific intention of driving down the cost per unit of work. Without the architecture in place, your switching costs are high enough that, for most shops, it’s very difficult to work a process of ratcheting down from higher- to lower-cost suppliers.

User Costs

By user costs we mean the amount of time spent by users of the system over and above the absolute bare minimum that would be required to complete a job function. If a job requires hopping from system to system, navigating, copying data from one system to another or from a report into a system, transcribing, studying, etc., all of this activity is excess cost created by the logistics of using the system. These costs are much harder to gather because they are spread throughout the organization, and there is no routine collection of the amount of time spent on these activities versus other, non-system-mediated activities. Typically, what’s needed in this area is some form of activity-based costing, where you audit on a sampling basis how people spend their time and compare that against a “should cost” analysis of the same tasks.

Even when the task has been offloaded to the end user, in what’s called “self-service,” it may still be worthwhile to review how much excess time is being used. In this case, it’s not so much a measure of the resources lost from the organization; rather, it may be an indicator that competitors may be able to take advantage of an effort difference and steal customers. Many aspects of service oriented architecture are aimed exactly at this category of costs. Certainly, all the composite application work, much of the systems integration, and the like, is aimed at reducing the non-value-added time that end users spend with their systems. Implementing service oriented architecture, as well as workflow, business process management, or even process orchestration, can be aimed directly at these costs.

Maintenance Costs

These are the costs of keeping an information system in working order. They include breakdown maintenance, which is fixing problems that occur in production; preventative maintenance, which is rarely done in the information industry but would include refactoring; and reactive maintenance, which is maintenance required by changes in the environment. This last category includes changes to the technical environment, such as when an operating system is discontinued and we’re forced to maintain an application to keep it running, as well as changes in the regulatory environment, where a law changes and we are forced to make our systems comply. I did not include proactive maintenance, maintenance that improves the user or business experience, in this category, as I include it under opportunity costs.

Maintenance costs are typically a function not so much of the change to be made but of the complexity of the thing to which the change is being applied. Most maintenance to mature systems involves relatively small numbers of lines of code; especially when we exclude changes that are meant to improve the system, we find fewer and fewer lines of code for any given maintenance activity. That’s not to say that maintenance isn’t consuming a lot of time; it is. Maintenance very often involves a great deal of analysis to pinpoint either where the operational problem is or where changes need to be made to comply with an environmental change. Once the change site is identified, another large amount of analysis needs to be done to determine the impact the change is likely to have. Unfortunately, the trend in the nineties toward larger integrated systems essentially meant a larger scope to search for the problem and a larger scope for the change analysis.

The other major difficulty with getting metrics on maintenance is that many architectural changes eliminate the cost so effectively that people no longer recognize that they are saving money. One architectural principle that we used in the mid-nineties was called “riding lightly on the operating environment”: we argued that a system should have the smallest possible footprint on the operating system’s API. In many ways this is the opposite of how many applications are built. Many application developers try to get the maximum use out of their operating environment, which makes sense in the short-term development productivity equation, but as we discovered, the fewer points of contact you have with the operating system, the more immune you are to changes in the operating system. As a result, systems we built in that architecture survived multiple operating system upgrades, in many cases without even a recompile and in other cases with only a simple recompile or a few lines of code changed.

The well-designed enterprise architecture goes far beyond that in reducing long-term maintenance costs. In the first place, the emphasis on modularity, partitioning, and loose coupling means that there are fewer places to look for the source of a problem, there is less investigation to do for side effects, and any extremely problematic area can simply be replaced. In this area, we will likely have to calculate the total cost per environmental change event, such as the cost to upgrade a version of a server operating system or, as we have recent history with, the cost when the environment changed at Y2K and two-digit years were no longer sufficient.
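As a toy sketch of the “riding lightly” principle in a modern language (the facade class and its methods are hypothetical, not the original mid-nineties design), the idea is to route every operating-system touchpoint through one deliberately small module, so that an environmental change is absorbed in one place:

```python
import os
import platform

class OperatingEnvironment:
    """The only module allowed to touch platform APIs directly.
    Applications call these few methods, so when the operating
    environment changes, only this facade needs to be revisited."""

    def temp_dir(self) -> str:
        # One well-understood point of contact with the environment.
        return os.environ.get("TMPDIR", "/tmp")

    def path_join(self, *parts: str) -> str:
        return os.path.join(*parts)

    def os_name(self) -> str:
        return platform.system()

# Application code depends on the facade, never on os/platform directly.
env = OperatingEnvironment()
print(env.path_join(env.temp_dir(), "report.txt"), env.os_name())
```

The narrower the facade, the fewer the places an operating system upgrade can reach into the application.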

Opportunity Costs

In this category I’m putting both the current cost of making proactive changes, in other words, the cost of improving the system to deliver better information with fewer keystrokes, and the cost of the opportunities lost by not making a change because it was too difficult. The architecture can drastically improve both of these measures. In the first case, it’s relatively straightforward to get the total cost of proactive changes to current systems. However, we likely need to go well beyond that and look at changes by type. For instance, what is the cost of adding additional information to a use case? What is the cost of streamlining three steps in a use case into one? Perhaps someone will come up with a good taxonomy in this area that would give some comparable values across companies. We also have to include the non-IS costs that go into making these kinds of changes. This includes the business analyst’s design time; and if it’s currently possible for analysts to make a change on their own, we should count that as development time. In the longer run, we expect end-user groups to make many of these kinds of changes on their own; indeed, this is one of the major goals of some of the components of an enterprise architecture.

The other side, the opportunity lost, is very difficult to measure and can only be done by interview and guesstimate. But it’s very true that many companies miss opportunities to enter new markets, to improve processes, etc., because their existing systems are so rigid, expensive, and hard to maintain that they simply don’t bother. Also in this category are the costs of delay. If a proposed change would have a benefit but people delay, oftentimes for years, in order to get the budgetary approval to make the change, this puts off the benefit stream potentially by years. With a modern architecture, very often the change requires no capital investment and therefore can be implemented as soon as it has been well articulated.

Putting It All Together

An enterprise architecture project can have a substantial return on investment. Indeed, it’s often hard to imagine things that could have a larger return on investment. The real question is whether the organization feels compelled to calculate the ROI. Most organizations that succeed with enterprise architecture initiatives, in my observation, have done so without rigorous ROI analyses.

For those that need the comfort of an ROI, there is hope. But it comes at a cost. That cost is the need to get, in the manner we’ve described in this white paper, a rigorous baseline of the current costs and what causes them. Armed with this you can make very intelligent decisions about changes to your architecture that are directly targeted at changing these metrics. In particular, I think people will find that the change that has the biggest multiplier effect is literally changing the cost of change.

Written by Dave McComb

Schools of Enterprise Application Architecture

An Enterprise Application Architecture is the coordinating paradigm through which an organization’s IT assets inter-relate to form the computing infrastructure and support the business goals.

The choice of architecture impacts a range of issues including the cost of maintenance, the cost of development, security, and the availability of timely information.

Introduction

Architecture in the physical world often conforms to a style. For the most part we know what to expect when we hear a building described as ‘Tudor’ or ‘Victorian.’ All this really means is that the builder and architect have agreed to incorporate certain features into their design which are representative of a given school of thought for which we have a label. Similarly, there are schools of thought in Enterprise Architecture which, when followed, produce equally distinctive architectural results. This paper is an overview of the more prominent of these schools.

The Default Architecture

Imagine an enterprise – and for many of us this might be quite easy – in which no unifying discipline is applied to the application development and design process. Applications are created as needs are discovered, their implementation is directed by that part of the organization providing the budget, and their scope is constrained by the organizational unit within which they are conceived. The applications themselves perhaps go through a rigorous and well understood development process which ends, often as an afterthought, with an integration task in which the shiny new application is plugged into the rest of the enterprise.

Imagine, further, that we are now looking at this enterprise after this approach to application development has been practiced for years, and perhaps decades. What we will see is an evolutionary enterprise architecture in which the impact of non-technical issues, such as the personalities of the managers and the distribution of budgets between lines of business, is clearly visible in the legacy fossil record. The architecture will be a collection of seemingly arbitrarily defined and scoped applications tightly coupled to each other through hand crafted point-to-point interfaces. The problems with this architecture become evident as the enterprise grows. In theory, if every application has to share data with every other application, then the number of interfaces will approximate n(n-1)/2, where n is the number of applications.

This is an O(n²) growth in complexity, and consequently a point of failure as the enterprise infrastructure grows and n becomes large. In reality, of course, not every application shares data with every other application, so the growth in complexity falls short of the full quadratic, but it is nonetheless a problem. John Zachman, creator of the Zachman framework, describes what occurs when we create applications in this manner as ‘post integration.’ The difficulty with post-integration, as he points out, is semantic consistency. It becomes increasingly difficult to make sure that what we mean by a piece of data in one application is what we mean by that same piece of data in a different system which has received the data through one or more interfaces.
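To make the arithmetic concrete:

\[
I_{\text{point-to-point}}(n) \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2},
\qquad
I_{\text{hub}}(n) \;=\; n .
\]

For \(n = 20\) applications, full point-to-point integration implies up to \(20 \cdot 19 / 2 = 190\) interfaces, versus \(20\) interfaces to a central broker; the Message Bus Architecture discussed below trades on exactly this difference.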

Controlling, and indeed just discovering, the semantics of the data is a difficult undertaking with this architecture, and without a clear definition of the semantics virtually nothing can be done with confidence. Poorly controlled semantics, of course, is exacerbated by another characteristic of the default architecture, which is uncontrolled data replication. Our multiple applications each have their own database, with their own copies of data, which they interpret in their own way. The replicated copies of data, in turn, rapidly become out of synch with each other, leading to an environment in which data meaning, currency and validity are all uncertain.

A fundamental problem with the default architecture is application coupling, by which we mean that a change to any application will have a scope of effect beyond that application. The enterprise applications are all tangled together, a bit like a ball of string. This means that changes that should logically be simple, localized and cheap end up as complex, broad ranging and expensive. The problems with the default architecture are manageable in small enterprises. It is only as the enterprise becomes larger that they become impossible to control. This paper is an overview of the successive approaches which enterprises have employed in response to this issue.

The Integrated Database Architecture

With the advent of the scalable relational database in the mid-1980s, enterprises saw a possible solution to the complexity created by the default architecture. The theory was quite compelling: the enterprise should have a single, large database which implements an enterprise-wide conceptual data model. The semantics of the data would be centrally defined and under tight control. All of the application logic would operate off of the single data store and its central definition.

Consequently there would be no growth in the number of interfaces, because there would be no need to have interfaces. There would be no data consistency or replication problems, because there would only be one copy of the data. Constraints in the database would require data to be collected completely by all parts of the application logic, and to be consistent. This was a seemingly perfect solution which most enterprises embraced enthusiastically.

The problems with the integrated database architecture did not appear immediately. The primary problem, in fact, is one of change over time. The integrated database is a great point in time solution. The problem is that once we have all of our applications based on this single database we have created, in programming terminology, a single giant ‘global variable,’ which, if changed in any way, has a potential effect everywhere. In other words the integrated database gives us tremendous data integration at the cost of extremely tight application coupling.

If, for example, we wish to change the logic of an application, perhaps to send our customers birthday cards, we are changing the same data structure – the database – which we are using to run our mission critical systems, and we are potentially also having to change those mission critical systems even though they do not care at all about birthday cards. So, at the end of the day, we are more likely than not to decide that we won’t change the mission critical systems, for reasons of risk and cost, and that we will rather forgo the birthday card function. The tightly coupled integrated database, then, has a flexibility problem.

Business processes change over time, and each change potentially impacts all of our applications. This means that making any single change is disproportionately expensive and tends to be resisted, producing a non-responsive IT support infrastructure. When the business cannot change its core systems cost effectively, enterprising business users and IT managers will typically conclude that the obvious answer is to build ‘their own’ little system in parallel to the integrated database and then build an interface. This can often look like a constellation of Microsoft Access databases circling a mainframe, performing both pre- and post-processing to support the business’ actual processes. In due course the peripheral applications become as important as the integrated database applications, and eventually the integrated database architecture begins to look like the default architecture, with one or more anomalously large applications. In the end, the integrated database architecture fails because it cannot inexpensively accommodate the rapidly changing business processes which are a hallmark of the modern enterprise.

The Distributed Object Architecture

The arrival of Object Orientation in the early 1990s heralded yet another approach to enterprise architecture. This approach said, in effect, that the problem with the integrated database architecture was one of programming. In order to create an application a developer would have to understand this large complex schema, and would then create logic to manipulate it. This logic, or behavior, defined for the data does, in effect, define the semantics of that data. Having developers re-define logic each time for core bits of functionality creates a ‘semantic drift,’ where the actually implemented behavior, from application to application, is inconsistent.

The distributed object architecture is a discipline which requires the enterprise to create an object representation of its core concepts, such as Customer, Order, and so on. When developers create an application, they do so by invoking this predefined behavior, thus ensuring semantic equivalence between applications. The distributed object architecture is attractive insofar as the object analysis process extends logically from the Semantic Model and leads to a centrally defined and controlled definition of data semantics and process. It is clearly an improvement over straight procedural logic sitting on top of a global database schema, as in the Integrated Database architecture, but it is only an incremental improvement.
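A minimal sketch of the idea, assuming a hypothetical Customer object (the credit-check rule is invented for illustration): the behavior that defines the semantics is written once and invoked by every application, rather than re-derived, and allowed to drift, in each one.

```python
class Customer:
    """Centrally defined domain object: the behavior that defines the
    semantics of creditworthiness lives here, not in every application."""

    def __init__(self, name: str, balance: float, credit_limit: float):
        self.name = name
        self.balance = balance
        self.credit_limit = credit_limit

    def can_accept_order(self, amount: float) -> bool:
        # One shared definition, invoked by all applications.
        return self.balance + amount <= self.credit_limit

# An ordering application invokes the shared behavior rather than
# implementing its own (drifting) notion of creditworthiness.
customer = Customer("Acme Ltd", balance=900.0, credit_limit=1000.0)
print(customer.can_accept_order(50.0))   # True
print(customer.can_accept_order(500.0))  # False
```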

The core limitations of the Integrated Database architecture, namely tight coupling and inflexibility, live on in the Distributed Object Architecture, which is no surprise given that this approach is really nothing more than an object veneer over the integrated database. The distributed object architecture is implemented in a variety of technologies, including Enterprise JavaBeans, Microsoft DCOM, and the Common Object Request Broker Architecture (CORBA).
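
The shared behavior is typically expressed as a remotely invocable interface. The following is a minimal sketch of the idea in plain Java (the interface and method names are illustrative, not drawn from any particular product); in practice this would be an EJB remote interface or a stub generated from CORBA IDL.

```java
// A centrally defined object representing the enterprise Customer concept.
// Applications invoke this shared behavior rather than re-implementing it,
// so the semantics of a credit check are identical everywhere.
public interface Customer {
    String getName();
    boolean passesCreditCheck(double orderTotal);
    void placeOrder(String productId, int quantity);
}
```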

The Message Bus Architecture

Flexibility has become one of the most important qualities of enterprise application architectures. ‘Flexibility’ is the capacity to change elements of the architecture at acceptable cost. The key to creating a flexible architecture is to decouple the independent pieces from one another, such that a change to one piece does not unnecessarily force a change in any of the others.

This capability is what has been missing from the prior architectures, and this is the primary contribution of the Message Bus Architecture. The message bus architecture returns us to an environment of independent applications maintaining their own databases. We add to this (typically) an ‘integration broker’, which is broadly responsible for communicating data between applications. The data communicated in this way is referred to as a message. By introducing the message broker as an intermediary, we are able to decouple applications from one another. Semantic consistency is enforced by representing the enterprise conceptual data model as a message model, or a centrally controlled message schema.

The n(n-1)/2 point-to-point interfaces are replaced by n interfaces – one from each application to the broker. With 20 applications, for example, that is up to 190 point-to-point interfaces reduced to 20. We necessarily introduce a degree of data replication, but we control the replication through a change notification mechanism provided by the broker, typically in the form of publish/subscribe messages.

This controlled replication manages the data consistency issues, while at the same time creating a degree of ‘runtime decoupling,’ which allows the independent applications to operate even though other parts of the infrastructure may be unavailable. In this environment, applications can be implemented in any technology, using whatever database schema they choose.

Their obligation to the enterprise is to generate a set of defined messages conforming to the message model, and to process incoming messages. They are free to change as and when they wish, as long as they continue to support their message contract. This is what is meant by decoupling, and this is where flexibility originates.
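
As a minimal sketch of how an application meets its message contract, the following publishes a change notification using the standard JMS API (the JNDI names and message content are hypothetical; a real environment would also have subscribers and a centrally managed message schema):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

// Publishes a CustomerChanged event to the bus; any application that has
// replicated customer data subscribes to the topic and updates its own store.
public class CustomerChangePublisher {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Topic topic = (Topic) jndi.lookup("topic/CustomerChanged"); // hypothetical name
        Connection conn = factory.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);
            // The payload conforms to the centrally controlled message model.
            TextMessage msg = session.createTextMessage(
                    "<CustomerChanged><id>42</id><name>Acme Corp</name></CustomerChanged>");
            producer.send(msg);
        } finally {
            conn.close();
        }
    }
}
```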

In a Message Bus architecture the message routing function, that is to say the logic which controls where a message is delivered, can be centralized in the broker without a loss of decoupling. When this is done, it becomes possible to see the message broker as a business process management (BPM) tool, and as a means of implementing enterprise-wide workflow through the addition of rules. When fully supported by the applications, the Message Bus Architecture allows the implementation of the ‘real time enterprise,’ in which all business events, regardless of origin, appear on the message bus and can be consumed by any interested application.
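
Centralized routing can be as simple as a broker-side table mapping event types to subscribing systems, as in this sketch (all event and destination names are hypothetical); adding a new consumer then becomes a broker configuration change rather than an application change:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The broker, not the applications, decides where each business event goes.
public class BrokerRoutingTable {
    private final Map<String, List<String>> routes = new HashMap<>();

    public BrokerRoutingTable() {
        routes.put("OrderCreated", Arrays.asList("Billing", "Warehouse"));
        routes.put("CustomerChanged", Arrays.asList("Billing", "CRM", "DataWarehouse"));
    }

    public List<String> destinationsFor(String eventType) {
        return routes.getOrDefault(eventType, Collections.emptyList());
    }
}
```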

The real-time enterprise becomes especially interesting when events generated by external business partners reach the internal message bus, and vice versa. The Message Bus Architecture requires careful implementation to provide true decoupling and flexibility. It is quite possible to create a network of point-to-point logical interfaces over the single technology interface to the broker. This occurs when applications create ad hoc messages for every integration case; the solution is to proactively architect the message model and ensure that it is not circumvented at the application level. The Message Bus Architecture is not, however, a complete solution.

At the technology level it is usually implemented with proprietary message broker technology, which is expensive to buy and requires scarce and equally expensive personnel to use. The distributed nature of the solution necessarily creates multiple points of failure – which can be mitigated through careful design to maximize runtime decoupling – and one central point of failure in the integration broker. Performance is also a potential problem: poor application partitioning can create excessively high message volumes, and some use cases can be hurt by high network latency. The Message Bus Architecture is a viable solution, but it is not a trivial implementation.

The Service Oriented Architecture

The Service Oriented Architecture is a refinement of the Message Bus Architecture. The advance with this architecture is the realization that many large-granularity functions are automated in multiple places across the enterprise.

Many of our applications will do reporting, most will implement a user interface, most will concern themselves with security, most will implement some form of business logic, and so on. The Service Oriented Architecture posits that the applications should be refactored, with these pieces of functionality removed from the applications and implemented as single ‘services’ which can be invoked at runtime. So, for example, reporting becomes the responsibility of an Information Delivery service, which might be implemented through a data warehouse; the user interface might be delegated to a portal service; the security functions will be implemented by an authentication and authorization service; and business logic perhaps by a business rules service.
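
As a minimal sketch of the shape this takes (the interface and method names are hypothetical), an application delegates its security decisions to the shared service rather than implementing them itself:

```java
// A hypothetical enterprise security service, invoked at runtime.
public interface SecurityService {
    boolean authenticate(String userId, char[] password);
    boolean isAuthorized(String userId, String action);
}

// The application no longer embeds its own security logic.
class OrderEntryApp {
    private final SecurityService security;

    OrderEntryApp(SecurityService security) {
        this.security = security;
    }

    void approvePayment(String userId) {
        if (!security.isAuthorized(userId, "approve-payment")) {
            throw new SecurityException("not authorized to approve payments");
        }
        // ... payment approval logic ...
    }
}
```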

The likely candidates for service orientation tend to be business-neutral, in part because these functions appear repeatedly across the application inventory. The trade-off of creating a service is that we have potentially created runtime coupling between the service and the invoking application, and consequently a point of failure. The benefit is the reduction of redundant functionality, and its central control and unification.

Taken to its logical extreme, Service Orientation allows applications to divest themselves of responsibility for security, business logic, workflow and presentation, leaving very little beyond data storage and configuration. Service Orientation can be implemented in a messaging environment using a broker, but this is not a requirement. Much of the current literature confuses the implementation technology with the concept, especially where the implementation technology is Web services.

The Orchestrated Web-Service Architecture

The latest technology trend is Web services. Web services are positioned to become the open, standards-based implementation of the Message Bus Architecture. Where applications currently communicate with the Message Bus using a vendor-proprietary adapter, we will have a standard Web service interface instead.

Where the Message Bus Architecture performs message routing using a proprietary extensional routing – or orchestration – tool, or using intensional publish/subscribe logic, the Web service architecture will use the corresponding Web service standard, at present BPEL4WS. Where the Message Bus Architecture implements guaranteed delivery through proprietary queuing mechanisms such as IBM MQSeries, the Web service architecture will use upcoming standards such as HTTP-R or Web services reliable messaging.

The Web services standards are currently incomplete and do not fully overlap the proprietary product offerings; however, the promise is clearly that in the near future Web services will offer an open standards alternative. Web services are by nature point-to-point connections. Used naively, this will create a technically state-of-the-art implementation of the Default Architecture, with applications tightly bound to each other through many uncontrolled interfaces. The Orchestrated Web service architecture, consequently, introduces a broker to which all Web service calls are made, and which is responsible for forwarding those requests to the applications providing the service.

This centralized orchestration is what allows the Web service approach to remain decoupled. Similarly, by implementing asynchronous request/reply logic – which is to say the requestor does not block waiting for the reply – and by supplementing the standard Web service call over HTTP with guaranteed delivery, the broker is able to create an environment similar to that of the Message Bus Architecture. The Web service architecture is practical today, supplemented with various proprietary technologies. It represents an improvement over the Message Bus Architecture by being based on open standards and consequently reducing vendor lock-in.
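
A minimal sketch of the non-blocking request/reply pattern follows (the actual broker send call is elided and all names are hypothetical): the requestor files a callback under a correlation id and continues working, and the broker’s listener thread completes the exchange when the reply arrives.

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Asynchronous request/reply: the caller never blocks on the response.
public class AsyncRequester {
    public interface ReplyHandler {
        void onReply(String body);
    }

    private final ConcurrentMap<String, ReplyHandler> pending = new ConcurrentHashMap<>();

    public void request(String serviceName, String body, ReplyHandler handler) {
        String correlationId = UUID.randomUUID().toString();
        pending.put(correlationId, handler);
        // sendToBroker(serviceName, correlationId, body) -- the Web service
        // call, made with guaranteed delivery, is elided in this sketch.
    }

    // Invoked by the broker's listener thread when a reply message arrives.
    public void onReplyMessage(String correlationId, String body) {
        ReplyHandler handler = pending.remove(correlationId);
        if (handler != null) {
            handler.onReply(body);
        }
    }
}
```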

The Just-in-time Integration Architecture

One of the interesting capabilities which the Web service technologies introduced is the concept of runtime discovery. The UDDI Web service specification allows an application to find a service at runtime, bind to it, and invoke it. The client application searches for the service based on service categorization and conformance to an interface specification – in this case a WSDL document.

This capability allows us to conceive of an architecture in which applications and services expose Web service interfaces and place their WSDL descriptions in an enterprise UDDI repository. When an application wishes to invoke a service, it looks the service up in the repository and invokes it. The key benefit of this approach is that the inter-application binding is entirely dynamic and consequently decoupled; we can replace the service provider at any time simply by changing its entry in the UDDI repository. With this approach there is no broker, and consequently there are no centrally provided management and control functions. However, in a decentralized, internet-based situation this may be an appropriate architectural choice.
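
The following sketch shows the shape of just-in-time binding (Registry, ProxyFactory and QuoteService are hypothetical stand-ins for a UDDI inquiry API and a WSDL-generated proxy); the provider is resolved from the repository on each use rather than compiled in:

```java
// The endpoint is discovered at runtime; repointing the registry entry
// replaces the service provider without touching the client.
public class JustInTimeClient {
    public interface Registry {
        String findEndpoint(String category, String wsdlInterface); // UDDI lookup
    }

    public interface QuoteService {
        double quote(String partNumber, int quantity);
    }

    public interface ProxyFactory {
        QuoteService bind(String endpointUrl); // WSDL-described binding
    }

    private final Registry registry;
    private final ProxyFactory proxies;

    public JustInTimeClient(Registry registry, ProxyFactory proxies) {
        this.registry = registry;
        this.proxies = proxies;
    }

    public double priceFor(String partNumber, int quantity) {
        String url = registry.findEndpoint("procurement", "QuoteService");
        return proxies.bind(url).quote(partNumber, quantity);
    }
}
```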

Conclusion

The choice of Enterprise Application architecture is critical to creating a successful IT infrastructure which is responsive to business needs and which reinforces the qualities that are of value to the organization. All of the schools of architecture described here can be valid choices, just as building a Victorian-style house is as legitimate a decision as building in a Tudor style. It is the responsibility of the architect, however, to ensure that the chosen architecture is appropriate for its environment. Although these architectural schools are evolving, and new ones are being created, most enterprises are clearly in a position to benefit from the adoption of a defined enterprise application architecture.

Written by Dave McComb

Role-Based Security

Role-based security is a means of implementing an authorization mechanism with the potential to substantially reduce both administrative cost and vulnerability. Enterprise role-based security addresses the problem of maintaining authorizations within large IT environments; it is perhaps more accurately described as ‘Role-Based Access Control,’ or RBAC.

Application-Based Authorization

Authorization is the process through which a person is granted permission to invoke behavior, or to see, create, delete or update data, in one or more systems. The system is responsible for enforcing the permissions granted in the authorization process, and systems use a variety of techniques to support that process. The most basic approach is to represent people in the system as ‘users,’ and then to enforce in code which users can perform which functions.

So we end up with logic which looks like: ‘if user is Simon then deny…,’ and so on. The problem with this approach is that the ‘user’ concept really becomes a proxy for whichever employee is performing the function, and not an actual person. The user, in other words, is shared, with a corresponding security risk. Additionally, what the user can do is hard-coded in the system, and difficult to change.

Introducing Roles

At the application level, role-based security addresses this problem by accepting that the ‘shared user’ is really a role in the enterprise which multiple employees will perform. A role might be ‘sales person,’ or ‘accountant.’ With roles made explicit in this manner, we can then use the ‘user’ as a proxy for an individual, with security information such as passwords specified for that person. The authorization process now becomes a matter of assigning one or more roles to a given user.

If the person that user represents is replaced by a different employee, then we delete the original user – or remove his or her roles – and create a new user for the new person. Many implementations of role-based security choose to express in the system’s code the authority which has been granted to a specific role. This approach is supported directly in Microsoft’s .NET framework, where metadata can be used to flag functions with the required roles. It has the downside of being quite inflexible: once the system has been created, it is no longer a cheap proposition to change what the sales person role is permitted to see or do, which, given a dynamic business process, is not ideal.
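
Java offers the analogous hard-wired check through the servlet API’s isUserInRole; as this sketch shows, the role name and the actions it gates are frozen into the code at build time:

```java
import javax.servlet.http.HttpServletRequest;

// The role name is fixed in the code, so changing what a sales person may
// do means changing and redeploying the application.
public class DiscountAction {
    public void applyDiscount(HttpServletRequest request) {
        if (!request.isUserInRole("sales person")) {
            throw new SecurityException("sales person role required");
        }
        // ... discount logic ...
    }
}
```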

The alternative to this approach is to represent the results of the authorization process as data in the system. This data is known as authorizations, permissions, entitlements, or provisions. A user can now perform a function if he possesses the required permission, through his role memberships. With this approach we have the ability to dynamically redefine the semantics of role membership as our business evolves.
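
A minimal in-memory sketch of authorizations-as-data follows (names are hypothetical; a real implementation would hold these mappings in tables): what a role may do is data, so redefining the role is an update rather than a code change.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Role -> permission and user -> role mappings are plain data.
public class AuthorizationStore {
    private final Map<String, Set<String>> rolePermissions = new HashMap<>();
    private final Map<String, Set<String>> userRoles = new HashMap<>();

    public void grant(String role, String permission) {
        rolePermissions.computeIfAbsent(role, r -> new HashSet<>()).add(permission);
    }

    public void assign(String userId, String role) {
        userRoles.computeIfAbsent(userId, u -> new HashSet<>()).add(role);
    }

    public boolean isPermitted(String userId, String permission) {
        // A user holds a permission if any of his or her roles carries it.
        for (String role : userRoles.getOrDefault(userId, Collections.emptySet())) {
            if (rolePermissions.getOrDefault(role, Collections.emptySet())
                    .contains(permission)) {
                return true;
            }
        }
        return false;
    }
}
```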

Enterprise Authorization

Although application-level role-based security is certainly a useful technique, the full benefit of this concept only appears in the enterprise-wide context. As things stand today, most systems – whether they implement a role-based approach or not – contain their users’ authorizations in their own table structures, and have their own user interfaces for creating and removing users and for defining authorizations. Consequently, setting up new users and assigning them their authorizations is a system-by-system administrative task.

This is clearly a problem as the number of systems and users becomes large, or the turnover in users becomes high. Furthermore, there is an inherent security risk in this approach; since the authorizations for a given user are dispersed across multiple systems, it becomes difficult to determine which authorizations a given user actually has. Diagrammatically, the current situation looks as depicted in Figure 1, where we have three users in four systems, each with multiple entitlements (not drawn). If we wish, for example, to prevent our accountants from authorizing payments, we have to remove that entitlement in each system for each user, giving a total of 12 changes. Imagine a more representative environment of 50-100 systems with thousands of users. And if we miss one of those changes, we have a security failure.

Enterprise Role-Based Authorization

Enterprise role-based security is a solution to this problem. It introduces the idea of an ‘enterprise role,’ which, like the application role, represents a responsibility an employee might have in an enterprise, or, more broadly, a relationship a person has with the enterprise. A position in the organization’s hierarchy will have some set of responsibilities, and can consequently be described with a collection of enterprise roles. Defining the roles in an organization is an exercise in analysis which we call ‘Role Engineering,’ and it is perhaps the most significant aspect of implementing role-based security.

Once we have the enterprise roles defined, we can associate our users with their roles, and the roles with authorizations, as depicted in Figure 2. Conceptually this approach allows us to add and remove authorizations from users with much more rigor; we no longer make security changes to individuals but to roles. By doing so we can be clear about what people can do across our systems by virtue of their membership in certain roles. We would prefer not to associate authorizations with users directly; direct association should be considered an exception mechanism, since it complicates our overall picture of granted authorizations.

Role-Based Security and the Enterprise Architecture

The structure described here can be implemented centrally and interfaced to the systems in the environment using, for example, a messaging infrastructure. This approach creates a true enterprise-wide role-based security implementation, and creates tremendous savings in administrative effort. Using our previous accountant example, the revocation of an authorization now requires a single change to one role, rather than 12 changes across four systems. Similarly, hiring an employee is a simple matter of associating a new user with the roles defined for the job position, and terminating an employee is a matter of deleting a user. We now have a single, explicit definition of what a user can do in our systems; there is no longer redundant administration, and consequently the possibility of security failures in this area is reduced.

When we have an enterprise role-based security service, as described here, we have to interface that service to the systems which are going to use it. The client systems should ideally receive from the service an abstract description of the authorizations a user has. The natural means of doing this is to create a model for authorizations, and to express that model using a standard XML dialect. Alternatively, the service can provide the client systems with the users’ roles, with each system then maintaining the authorizations for those roles. This is clearly a less useful approach, since changing the authorizations for a given role can no longer be done centrally.
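
The preferred contract, then, looks something like the following sketch (the interface is hypothetical): the client receives abstract authorizations, and the meaning of role membership stays under central control.

```java
import java.util.Set;

// A hypothetical contract for the central service: clients receive abstract
// authorizations (e.g. "approve-payment") rather than raw roles, so the
// semantics of role membership remain centrally controlled.
public interface EnterpriseAuthorizationService {
    Set<String> authorizationsFor(String userId);
}
```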

Role Constraints

Once we have rationalized our authorizations through the introduction of roles, we are in a position to enforce a ‘least privilege’ policy, in which roles are granted only sufficient authorizations to perform their function. Similarly, we can now identify and enforce ‘separation of incompatible duties,’ where a user must not be a member of two incompatible roles simultaneously. A single user should not, for example, hold both the role for authorizing a payment and the role for submitting a payment. An elegant role-based security service will provide a means of expressing constraints on role membership, and will allow roles to be described hierarchically, such that new roles can be created by the extension or restriction of existing roles.

An interesting consequence of expressing entitlements as data – rather than as static rules to be coded for each role – is the idea of delegation. Delegation is the process through which a user transfers his or her entitlements to another user. Delegation of responsibility – and thus entitlement – is quite common in business; people routinely delegate their authority to legal representatives, for example. Delegation is quite simple in a role-based authorization system, although a consequence is that the applications receiving entitlements have to expect dynamically changing entitlement sets for any user, and must design their user interfaces accordingly. The role constraint mechanism also allows us to address a problem in delegation, where we want a person to be able to delegate an entitlement without possessing that entitlement themselves. This can occur where we want a supervisor to grant entitlements to his or her subordinates, but not to perform the function, for separation of duties reasons.
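
A minimal sketch of a separation-of-duties constraint follows (the role names are hypothetical): incompatible role pairs are declared as data, and every role assignment is checked against them.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Incompatible role pairs are data; every assignment is validated.
public class RoleConstraintChecker {
    private final Set<Set<String>> incompatiblePairs = new HashSet<>();

    public void declareIncompatible(String roleA, String roleB) {
        incompatiblePairs.add(new HashSet<>(Arrays.asList(roleA, roleB)));
    }

    public void assignRole(Set<String> currentRoles, String newRole) {
        for (String held : currentRoles) {
            if (incompatiblePairs.contains(new HashSet<>(Arrays.asList(held, newRole)))) {
                throw new IllegalStateException(
                        "separation of duties: " + newRole + " conflicts with " + held);
            }
        }
        currentRoles.add(newRole);
    }
}
```

Declaring, say, ‘payment-authorizer’ and ‘payment-submitter’ incompatible then makes the policy impossible to violate through routine administration.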

Conclusion

Role-based security, when implemented for the enterprise, is a means of reducing administrative cost while at the same time enhancing security by removing uncontrolled redundancy and enforcing role membership constraints. For a detailed discussion of the potential cost savings of enterprise role-based security see “The Economic Impact of Role-Based Access Control,” NIST, U.S. Department of Commerce.

First published October 2003 by Dave McComb
