Semantic Power Tools – Inference and Composition

Inference is one of the ways that semantic technology simplifies information systems. Many of the manual assertions that must be made in traditional systems can happen automatically in a semantic model. The more things you can infer, the fewer you need to assert, which greatly simplifies the system. Another way semantic technology reduces complexity is by reusing predefined concepts through composition. This might sound a lot like the inheritance concept that is widely used in object-oriented programming, but it is not. Let’s say that we want to define a trip to the doctor as an event that happens in a clinic. We would call that concept a ‘patient visit.’ Like object-oriented inheritance, the patient visit inherits some of the attributes associated with the event, such as the date. Unlike object-oriented inheritance, the clinic concept contributes to the definition of what a patient visit is, but the patient visit doesn’t inherit any of the attributes of a clinic. A visit isn’t a type of clinic, yet the clinic is part of the definition of ‘patient visit.’ Semantic concepts are like words that can be assembled in an infinite number of ways. Objects in object-oriented programming are more like phrases; they can certainly be reused, but with many more limitations. Reusing classes and properties significantly reduces the number of attributes required to model a business process and gives us much more flexibility in how concepts can be combined and reused.
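To make the distinction concrete, here is a minimal sketch in Python using the rdflib library (an assumed tooling choice; the ex: namespace and every name in it are hypothetical). It composes ‘patient visit’ from an event and a clinic: the clinic participates in the definition, but the visit inherits nothing from it.

# A minimal sketch of composition; rdflib is an assumed library choice,
# and the ex: namespace and all names in it are hypothetical.
from rdflib import Graph, Namespace, BNode, Literal
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.com/ns#")
g = Graph()

# PatientVisit is a kind of Event, so it picks up event attributes like date.
g.add((EX.PatientVisit, RDF.type, OWL.Class))
g.add((EX.PatientVisit, RDFS.subClassOf, EX.Event))

# ...and it is further defined by its relationship to a Clinic.
restriction = BNode()
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, EX.occursIn))
g.add((restriction, OWL.someValuesFrom, EX.Clinic))
g.add((EX.PatientVisit, RDFS.subClassOf, restriction))

# An instance: the visit has an event-style date of its own...
g.add((EX.visit42, RDF.type, EX.PatientVisit))
g.add((EX.visit42, EX.occursOnDate, Literal("2024-03-01")))
# ...but nothing from Clinic; the clinic merely participates in the definition.
g.add((EX.visit42, EX.occursIn, EX.mainStreetClinic))

Note that the clinic’s own attributes (its address, its staff) never flow to the visit; the clinic class only constrains what counts as a patient visit.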

Why aren’t people achieving the benefits of SOA?

The cynical response is to invoke the “Gartner Hype Cycle” and say it’s been overhyped. But that just raises the question: why do things get overhyped? In general, the reason is that someone comes up with a new approach to solving a vexing problem. At first it doesn’t even have a name; it just solves a specific problem. Eventually someone generalizes it and gives it a name. The very early adopters buy into the problem and how this approach solves it. However, success creates a problem of its own: the early successes get associated with the category (SOA in this case), and new practitioners show up eager to do “SOA,” or whatever it is that is trendy. There isn’t a problem with the technology per se, but the new wave of practitioners are more interested in being in the category than in actually solving the problem the approach was designed for. And so the solution becomes decoupled from the problem. No wonder the second wave is disappointed. No wonder there is a hype cycle. And no wonder that, as Gartner themselves put it, almost everyone is doing SOA and only about 5% are achieving the benefits. The benefits require discipline, not buzzwords and technology.

Reuse – Creating Sustainable Models

Since the semantic model is free of structural commitments, it is easier to map concepts to different structural representations, enabling reuse of classes and properties. Reuse is a profound way to reduce complexity. In a traditional system, attributes are not reused. Every time you create a new table and put new attributes on it, you’ve created additional attributes. Even if you give them the same names, there is no guarantee that they mean the same thing on the new table; the technology treats them as if they are different. Property reuse enables a drastic reduction in the number of properties needed to represent the complexity of a large domain. In semantics, properties are first-class objects, which means that they exist independent of any class or table. They can be reused and still retain their original meaning, so you need fewer of them. Additionally, because properties are first-class objects, we can define relationships between properties. For example, if we declare that the property “hasParent” is a sub-property of the property “hasAncestor,” then anywhere we assert that someone has a particular parent, a semantic reasoner infers that the parent is also an ancestor of that person. In semantic modeling, the definition of the classes and properties is separate from the process of building applications or database structures. This separation frees the modeler to focus on the meaning and inclusion criteria for each class, and to intentionally avoid making decisions about how to store, structure, or organize the information within the system.
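Here is a minimal sketch of that sub-property inference, again assuming Python with the rdflib and owlrl libraries; the ex: namespace and all names are hypothetical.

# A minimal sketch of sub-property inference, assuming rdflib and owlrl;
# the ex: namespace and names are hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS
import owlrl

EX = Namespace("http://example.com/ns#")
g = Graph()

# Declare the relationship between the two properties themselves.
g.add((EX.hasParent, RDFS.subPropertyOf, EX.hasAncestor))

# Assert a single fact using the narrower property.
g.add((EX.mary, EX.hasParent, EX.john))

# Run an RDFS reasoner over the graph.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

# The broader fact was inferred, never asserted:
print((EX.mary, EX.hasAncestor, EX.john) in g)  # True

The ancestor fact was derived from the declared relationship between the two properties; nobody had to assert it, which is exactly the assertion-saving effect described above.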

What is ESB?

An Enterprise Service Bus (ESB) is not a product but a style of architectural development. It is essentially SOA done right. The essence of an ESB is that all applications and all services talk only to the bus and not directly to each other. To the extent that they talk to each other, they are coupled, and they undermine the very decoupling the ESB is attempting to provide.
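A toy sketch in Python illustrates the rule (all names are hypothetical, and a real ESB adds routing, transformation, queuing, and much more): services publish to and subscribe from the bus, and never reference each other.

# A minimal sketch of the "talk only to the bus" rule; names are hypothetical.
from collections import defaultdict
from typing import Callable

class Bus:
    def __init__(self):
        # topic -> list of handlers; the bus is the only shared dependency
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict):
        for handler in self._subscribers[topic]:
            handler(message)

bus = Bus()

# The billing service never imports or calls the orders service; it knows
# only the bus and the topic name.
bus.subscribe("order.created", lambda msg: print("billing saw order", msg["id"]))
bus.publish("order.created", {"id": 42})  # emitted by the orders service

The point of the design is that either side can be replaced without touching the other, as long as the topic contract holds.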

Semantic Technology – Modeling The Real World

Semantic technology uses ontologies to describe the business in a way that both humans and machines can understand. Since the semantic schema is independent of actual computer systems, e.g., legacy or future applications and databases, it allows us to find the commonalities across business processes, which serves to greatly simplify the enterprise architecture.

Generally, people believe that the real world is complex and messy and that our information systems bring order out of chaos. However, the opposite is true. The real world tends to be much simpler than how we represent it in our database and application schemas. For example, take a stapler. Our purchasing system refers to it as an item with a price; our manufacturing system breaks it down into an itemized list of plastic and metal; and our administrative system keeps track of how many of them are on the shelf above the printer. However, in the real world it is just a thing that shoots a bit of metal through sheets of paper to fasten them together. Since semantic technology defines things separately from the applications or databases, you only have to define each thing once. In traditional IT systems, each thing is redefined within the context of what the information system is tracking, thereby creating multiple representations of a single thing.
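As a minimal sketch of “define it once” (again assuming rdflib; all names are hypothetical), each system attaches its own facts to the one stapler, rather than redefining it.

# A minimal sketch, assuming rdflib; one IRI for the stapler is defined
# once, and each system adds facts about it. All names are hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.com/ns#")
g = Graph()

# The thing itself, defined once.
g.add((EX.stapler, RDF.type, EX.FasteningDevice))

# Purchasing, manufacturing, and administration each contribute their own
# view as facts about the same subject, instead of redefining the stapler.
g.add((EX.stapler, EX.hasPrice, Literal(8.99)))        # purchasing
g.add((EX.stapler, EX.hasComponent, EX.metalStrip))    # manufacturing
g.add((EX.stapler, EX.quantityOnHand, Literal(14)))    # administration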

Using Semantic Technology to Simplify Healthcare

Sentara Healthcare is a $3 billion conglomerate that includes hospitals, clinics, physician networks, insurance companies, research centers, etc. It offers thousands of services to over two million patients. It is a very complex business. Always on the cutting edge of applying information systems technology to improving healthcare delivery, Sentara engaged Semantic Arts to build the first comprehensive model of a healthcare delivery system. After an exhaustive analysis of the thousands of interrelated processes required to run a complex healthcare business, the Sentara Healthcare Enterprise Ontology included only 1,276 classes and 397 properties. We are currently applying the ontology as a common denominator to align internal and external data without needing to make any significant additions or structural changes to the definitional schema. These are examples of how large, complex organizations with hundreds of thousands of elements in their collective schemas can create integrated models of reasonable coverage and fidelity with around 1,000 – 2,000 concepts. By taking a semantic approach to building these simple, elegant and powerful schemas, we have found a way to reduce complexity.

Using Simplicity To Improve Cross-Functional Search

At Procter & Gamble, over 10,000 people work in R&D in hundreds of different disciplines. P&G had no traditional information systems capturing the critical information in this brain trust. They engaged Semantic Arts to create an ontology that would organize this vast body of unstructured information into a definitional model that could be used to build future databases and applications. This exercise resulted in the creation of an ontology that had only 400 classes and 200 properties. We then decided to see how the model would change as we drilled down to model the specific elements of actual lines of business. We applied the definitional model to two lines of business, batteries and toothbrushes. We defined these businesses to the level of detail required to build a new database enabling cross-disciplinary search. Because there were many more details to represent, this effort increased the number of classes by 50% while adding only four new properties to the ontology. Considering that a property is the equivalent of a new column in a database table, this exercise proved the elegance of the new schema. The increased simplicity will enable P&G researchers to conduct cross-functional searches without an intimate knowledge of the database structure, making them more efficient, effective and autonomous.

The “Don’t Care” Architecture

In the late 1980s, I was introduced to the “Don’t Care” Architecture by Sherman Woo of what was then US West (now Qwest). The Internet existed, but the World Wide Web didn’t. Sherman was spearheading something he called the “Global Village.” I don’t remember a lot of the specifics, although I do remember that the team room he set up in Denver was one of the most eclectic and creative places I’d ever seen (sort of an anti-board room: no right angles, with projectors, videos and whiteboards all over the place), wired to other US West locations. What really struck me was how he described his technology stack. At each point in the stack, from the lowest layer (Token Ring v. Ethernet? “Don’t Care”) to the higher layers (Macintosh v. Windows v. OS/2 v. X Windows? “Don’t Care”), he would just repeat that mantra. And the more things you don’t care about, the more flexibility you have. It’s tempting and comforting to specify everything in your stack, but it’s empowering to not care.

Changing what doesn’t need to be changed

I’m guessing that many of you puzzle over the same thing I do: “Why do large IT projects cost so much?” As we now know, it’s not the development costs (the development is done) nor the licensing costs (typically a small portion of the total cost). There are many other factors, but the one that I think contributes the most is the cost of unneeded change. Why would people make unneeded changes? People make unnecessary changes when, in order to make a desired change, they introduce a number of other changes that weren’t really needed but “came along with the solution.” Imagine you want to paint your house blue, and someone told you they had blue walls, and in order to get your blue house you needed to replace your non-blue walls with their blue walls. Most of us wouldn’t fall for this with our houses, but it happens a lot with information systems.

Whenever a vendor says that you should implement “best practices,” they are often onto something: some area of your evolved systems is far from optimal and could stand to be improved. What they don’t tell you is that the cost of this improvement is often replacing many other processes and procedures with ones that are scarcely better, but disruptively different. And many of the existing procedures, while only marginally worse than the new ones, have been shaken down in use over decades to adapt to your peculiar set of circumstances. It’s this great number of unnecessary changes to existing processes and procedures, and the retraining of people, plus the hidden cost of rediscovering the accommodations that the existing procedures have made, that run up the cost of large implementations.

An Information System Fairy Tale

By Dave McComb

[The names have been changed to protect somebody.] Once upon a time there was a firm. The firm had many, many employees and of course had payroll and personnel systems. One day the firm was visited by software vendors who convinced it that its systems were not “state of the art” and that if it changed systems it would adopt “best practices.” The firm decided that it didn’t want to have anything other than best practices, so it decided to implement the new system.

When the vendors told the firm how much this would cost, the firm decided that it would have to do a “rigorous cost benefit analysis” and that the project would have an “ROI.” This turned out to be a lot harder than anyone wanted to admit. No one was willing to sign up for head count or any other significant cost reductions. And frankly, the existing systems weren’t all that bad. So instead they decided to rely on “gap analysis” and some “essential features” to sell the ever-growing plan. When all was said and done, the “essential feature” that everyone agreed to was “support for collective bargaining.” As it turned out, the existing systems just weren’t up to the task of supporting the upcoming major collective bargaining session. There were many other efficiencies touted in the very large feasibility study binders as well.

So the vendors toiled away. And the firm’s internal staff toiled away. Budgets, of course, were busted. But over the protests of many people involved, the system went live. Very few of the anticipated efficiencies were realized, and a surprising number of unexpected inefficiencies were suffered. Many of the firm’s divisions had to bring on additional staff to handle all the “efficiencies” of the new system.

And what of the collective bargaining? It turns out that while the new system was overrunning budgets and slipping schedules, the existing staff built the extra functionality needed to support collective bargaining onto the old system. Just in time for the collective bargaining (and also just in time to get replaced by the new system). Nobody saw the irony in this. But they lived happily ever after anyway.
