Graph Database Superpowers: Unraveling the back-story of your favorite graph databases

The graph database market is very exciting, as the long list of vendors continues to grow. You may not know that there are huge differences in the origin story of the dozens of graph databases on the market today. It’s this origin story that greatly impacts the superpowers and weaknesses of the various offerings.

While Superman is great at flying and stopping locomotives, you shouldn’t rely on him around strange glowing metal. Batman is great at catching small-time hoods and maniacal characters, but deep down, he has no superpowers other than a lot of funding for special vehicles and handy tool belts. Superman’s origin story is much different than Batman’s, and therefore the impact they have on the criminal world is very different.

This is also the case with graph databases. The origin story absolutely makes a difference when it comes to strengths and weaknesses. Let’s look at how the origin story of various graph databases can make all the difference in the world when it comes to use cases for the solutions.

Graph Database Superhero: RDF and SPARQL databases

Examples: Ontotext, AllegroGraph, Virtuoso and many others

Origin Story: Short for Resource Description Framework, RDF is a decades-old data model whose origins trace back to Tim Berners-Lee. The thought behind RDF was to provide a data model that allows the sharing of data, similar to how we share information on the internet. Technically, this is the classic triple-store with subject-predicate-object.
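
If you haven't worked with triples before, here's a minimal sketch of the subject-predicate-object model using Python's rdflib library; the namespace and resource names are purely illustrative.

    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, FOAF

    # Illustrative namespace; any IRI base would do.
    EX = Namespace("http://example.org/")

    g = Graph()
    g.bind("ex", EX)

    # Each fact is a subject-predicate-object triple.
    g.add((EX.Sue, RDF.type, FOAF.Person))        # Sue is a person
    g.add((EX.Sue, FOAF.name, Literal("Sue")))    # Sue's name is "Sue"
    g.add((EX.Sue, FOAF.knows, EX.Mary))          # Sue knows Mary

    # Print the graph in Turtle, the common RDF text serialization.
    print(g.serialize(format="turtle"))

Whatever the vendor, an RDF store ultimately boils down to large sets of statements shaped like these.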

Superpower: Semantic modeling. These databases offer a basic understanding of concepts and the relationships between those concepts, enhanced context through the use of ontologies, and the sharing of data and concepts on the web. They often support OWL and SHACL, which help describe what the data should look like and let data be shared the way we share web pages.

Kryptonite: The original RDF specification did not handle properties on predicates very well. So, for example, if I wanted to specify WHEN Sue became a friend of Mary, or the fact that Sue is a friend of Mary according to Facebook, handling that kind of provenance and time could be cumbersome. Many RDF databases added quad-store options so users could handle provenance or time, and several are adding the new RDF* (RDF-star) specification to overcome these shortcomings. More on this in a minute.
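
To make the quad-store workaround concrete (this is a generic illustration, not any particular vendor's mechanism), here's a hedged sketch using rdflib's Dataset: the friendship statement goes into a named graph, and the provenance and date are attached to that graph. All of the names are made up.

    from rdflib import Dataset, Namespace, Literal, URIRef
    from rdflib.namespace import XSD

    EX = Namespace("http://example.org/")
    ds = Dataset()

    # Put the friendship statement into its own named graph...
    friendship = ds.graph(URIRef("http://example.org/graphs/sue-mary"))
    friendship.add((EX.Sue, EX.friendOf, EX.Mary))

    # ...then describe that named graph in the default graph:
    # who asserted it (Facebook) and when the friendship began.
    ds.add((URIRef("http://example.org/graphs/sue-mary"), EX.source, EX.Facebook))
    ds.add((URIRef("http://example.org/graphs/sue-mary"), EX.since,
            Literal("2019-06-01", datatype=XSD.date)))

    for s, p, o, g in ds.quads((None, None, None, None)):
        print(s, p, o, g)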

Many of the early RDF stores were built on a transactional architecture, so they scale reasonably well for transactions but hit size restrictions when performing analytics over many triples.

It is in this category that the vendors have had some time to mature. While the origins may be in the semantic web and data sharing, many have stretched their superpowers with labeled properties and other useful features.

Graph Database Superhero: Labeled Property Graph with Cypher

Example: Neo4j

Origin Story: LPG is short for labeled property graph, and the premier player in the LPG space was and is Neo4j. According to podcasts and interviews with the founder, the original idea was more about managing content on a website, where taxonomies gave birth to many-to-many relationships. Neo4j developed its new type of system to support its enterprise content management team. So, when you needed to search across your website for certain content, for example when a company changes its logo, the LPG kept track of how these assets were connected. This is offered as an alternative to the JOIN table in an RDBMS, which holds foreign keys of both participating tables and is extremely costly to traverse in traditional databases.

Superpower: Although the origin story is about website content taxonomies, it turns out that these types of databases are also pretty good for 360-degree customer view applications and for understanding multiple supply chain systems. Cypher, although not a W3C or ISO standard, has become a de facto standard language as the Cypher community has grown with Neo4j's efforts. Neo4j has also been an advocate of the upcoming GQL standard, which may result in a more capable Cypher language.
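
To give a flavor of Cypher and the labeled property model, where relationships carry properties directly, here's a small sketch using the official Neo4j Python driver. The connection settings, labels and property names are illustrative assumptions, not a recipe.

    from neo4j import GraphDatabase

    # Illustrative connection settings for a local Neo4j instance.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    cypher = """
    MERGE (s:Person {name: $a})
    MERGE (m:Person {name: $b})
    MERGE (s)-[:FRIEND_OF {since: $since}]->(m)
    RETURN s.name AS src, m.name AS dst
    """

    with driver.session() as session:
        result = session.run(cypher, a="Sue", b="Mary", since="2019-06-01")
        for record in result:
            print(record["src"], "->", record["dst"])

    driver.close()

The thing to notice is the {since: $since} map: in a labeled property graph that date lives on the relationship itself, which is exactly the case the original RDF specification struggled with.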

Kryptonite: Neo4j has built its own system from the ground up on a transactional architecture. Although some scaling features have recently been added in Neo4j version 4, the approach is more about federating queries than an MPP approach. In version 4, the developers added manual sharding and a new way to query sharded clusters, which requires extra work when sharding and writing your queries. This is similar to the approach of transactional RDF stores, where SPARQL 1.1 supports federated queries through a SERVICE clause. In other words, you may still encounter limits when trying to scale and perform analytics. Time will tell if the latest federated approach is scalable.
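
For comparison, this is roughly what a SPARQL 1.1 federated query looks like, sent here through the SPARQLWrapper Python package. Both endpoint URLs are placeholders.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Placeholder endpoint URL for the local SPARQL store.
    sparql = SPARQLWrapper("http://localhost:8080/sparql")
    sparql.setReturnFormat(JSON)

    # The SERVICE clause asks the local endpoint to fetch part of the
    # pattern from a remote endpoint and join the results.
    sparql.setQuery("""
    PREFIX ex: <http://example.org/>
    SELECT ?friend ?city WHERE {
      ?friend ex:friendOf ex:Sue .
      SERVICE <http://remote.example.org/sparql> {
        ?friend ex:livesIn ?city .
      }
    }
    """)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["friend"]["value"], row["city"]["value"])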

Ontologies and inferencing are not standard features with a property graph, although some capability is offered here with add-ons. If you’re expecting to manage semantics in a property graph, it’s probably the wrong choice.

Graph Database Superhero: Proprietary Graph

Example: TigerGraph

Origin Story: According to the company's website, one of TigerGraph's founders was working at Twitter on a project that needed larger-scale graph algorithms than Neo4j could offer, which led the founders to write a new database. TigerGraph devised a completely new architecture for the data model and storage, even creating its own graph query language.

Superpowers: Through TigerGraph, the market could appreciate that graph databases could run on a cluster. Although certainly not the first to run on a cluster, TigerGraph focused on the real power of running end-user-supplied graph algorithms over a lot of data.

Kryptonite: The database decidedly went its own way with regard to standards. There are apparent shortcomings in the simplicity of leveraging ontologies, performing inferencing, and making use of the people on your team who already know SPARQL or Cypher. By far the biggest disadvantage of this proprietary graph is that you have to think more about schema and JOINs prior to loading data. The schema model is more reminiscent of a traditional database than any of the other solutions on the market. While it may be a solid solution for running graph algorithms, if you're creating a knowledge graph by integrating multiple sources and you want to run BI-style analytics on said knowledge graph, you may have an easier time with a different solution.

It's interesting to note that although TigerGraph's initial thinking was to beat Neo4j at proprietary graph algorithms, TigerGraph has teamed up with the Neo4j folks and is in the early stages of making its proprietary language a standard via ISO and SQL. Although TigerGraph releases many benchmarks, I have yet to see them release benchmarks for TPC-H or TPC-DS, the standard BI-style analytics benchmarks. Also, due to a non-standard data model, harmonizing data from multiple sources requires some extra legwork and thought about how the engine will execute analytics.

Graph Database Superhero: RDF Analytical DB with labeled properties

Example: AnzoGraph DB

Origin Story: AnzoGraph DB was the brainchild of former Netezza/ParAccel engineers who designed MPP platforms like Netezza, ParAccel and Redshift. They became interested in graph databases, recognizing that there was a gap in perhaps the biggest category of data, namely data warehouse-style data and analytics. Although companies making transactional graph databases covered a lot of ground in the market, there were very few analytical graph databases that could follow standards, perform graph analytics and leverage ontologies/inferencing for improved analytics.

Superpowers: Cambridge Semantics designed a triple store that both followed standards and could scale like a data warehouse. In fact, it was the first OLAP MPP platform for graph, capable of analytics on a lot of triples. It turns out that this is the perfect platform for creating a knowledge graph, facilitating analytics built from a collection of structured and unstructured data. The data model helps users load almost any data at any time.

Because of the schemaless nature, the data can be sparsely populated. The database supports very fast in-memory transformations, so data can be loaded first and cleansed later (ELT). Because metadata and instance data sit together in the same graph without any special effort, all of those ELT queries become much more flexible, iterative and powerful. With an OLAP graph like AnzoGraph DB, you can add any subject, predicate, object or property at any time without having to make a plan to do so.

In traditional OLAP databases, you can have views. In this new type of database, you can have multi-graphs that can be queried as one graph when needed.
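
As a rough illustration of the multi-graph idea, here's a hedged sketch using rdflib named graphs and a SPARQL GRAPH clause. The graph names and predicates are invented, and a production store like AnzoGraph DB would expose the same pattern through its own SPARQL endpoint rather than rdflib.

    from rdflib import Dataset, Namespace, URIRef

    EX = Namespace("http://example.org/")
    ds = Dataset()

    # Two independently loaded graphs: one for sales, one for HR.
    ds.graph(URIRef("http://example.org/graphs/sales")).add((EX.Sue, EX.boughtFrom, EX.Acme))
    ds.graph(URIRef("http://example.org/graphs/hr")).add((EX.Sue, EX.worksFor, EX.Initech))

    # Query across all named graphs as if they were one graph.
    results = ds.query("""
    SELECT ?g ?p ?o WHERE {
      GRAPH ?g { <http://example.org/Sue> ?p ?o }
    }
    """)
    for g, p, o in results:
        print(g, p, o)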

Kryptonite: Although AnzoGraph DB is ACID compliant, other solutions on the market might support faster transactions because of the OLAP nature of this database's design. Ingesting massive volumes of transactions might require additional technologies, like Apache Kafka, to smooth the flow in high-transaction environments. As with many warehouse-style technologies, data loading is very fast, so batch loads run quickly. Pairing an analytical database with a transactional database is also sometimes a solution for companies that have both high transaction volumes and deep analytics to perform.

Other types of “Graph Databases”

A few other types of graph databases also have some graph superpowers. Traditional database vendors have recognized that graph can be powerful and now offer a graph model alongside their native model. For example, Oracle has two offerings: an add-on package that provides geospatial and graph capabilities, and a separate in-memory graph product apart from traditional Oracle.

You can also get graph database capabilities in an Apache Hadoop stack via GraphFrames, which works on top of Apache Spark. Given Spark's ability to handle big data, scaling is a superpower. However, because your requirements might lead you to layer technologies, tuning the combination of Spark, HDFS, YARN and GraphFrames could be the challenge.
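
As a rough sketch of what that layering looks like, the snippet below builds a small GraphFrame on PySpark and runs PageRank. It assumes a Spark environment with the graphframes package available, and the vertex and edge data are made up.

    from pyspark.sql import SparkSession
    from graphframes import GraphFrame

    # Assumes the graphframes Spark package is available on the cluster.
    spark = SparkSession.builder.appName("graph-sketch").getOrCreate()

    # GraphFrames expects an "id" column for vertices and "src"/"dst" for edges.
    vertices = spark.createDataFrame(
        [("sue", "Sue"), ("mary", "Mary"), ("acme", "Acme Corp")],
        ["id", "name"])
    edges = spark.createDataFrame(
        [("sue", "mary", "FRIEND_OF"), ("sue", "acme", "CUSTOMER_OF")],
        ["src", "dst", "relationship"])

    g = GraphFrame(vertices, edges)

    # Run a built-in graph algorithm at Spark scale, e.g. PageRank.
    ranks = g.pageRank(resetProbability=0.15, maxIter=10)
    ranks.vertices.select("id", "pagerank").show()

    spark.stop()

Everything here rides on the Spark cluster, which is where both the scaling superpower and the tuning burden come from.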

The other solutions give you a nice taste of graph functionality in a solution that you probably already have. The kryptonite here is usually about performance when scaling to billions or trillions of triples and then trying to run analytics on said triples.

The Industry is Full of Iron Men

Iron Man Tony Stark built his first suit out of scrap parts when he was captured by terrorists and forced to live in a cave. It had many vulnerabilities, but it served its one purpose: to get the hero to safety. Later, the Iron Man suit evolved to be more powerful, deploy more easily and think on its own. The industry is full of Tony Starks who will evolve the graph database.

However, while evolution happens, remember that graph databases aren’t one thing.

"Graph database" is a generic term, and it simply doesn't give you the level of detail you need to understand which problems a given product solves. The industry has developed various methods of doing the critical tasks that drive value in this category we call graph databases. Whether it's harmonizing diverse data sets, performing graph analytics, performing inferencing or leveraging ontologies, you really have to think about what you'd like to get out of the graph before you choose a solution.

WRITTEN BY

Steve Sarsfield

VP Product, AnzoGraph (AnzoGraph.com). Formerly from IBM, Talend and Vertica. Author of the book The Data Governance Imperative.

Time to Rethink Master and Reference Data

Every company contends with data quality, and in its pursuit often commits substantial resources to managing master and reference data. Remarkably, quite a bit of confusion exists around exactly what these are and how they differ. And since they provide context to business activity, this confusion can undermine any data quality initiative.

Here are amalgams of the prevailing definitions, which seem meaningful at first glance:


Sound familiar? In this article, I will discuss some tools and techniques for naming and defining terms that explain how these definitions actually create confusion. Although there is no perfect solution, I will share the terms and definitions that have helped me guide data initiatives, processes, technologies, and governance over the course of my career.

What’s in a Name?

Unique and self-explanatory names save time and promote common understanding. Naming, however, is nuanced in that words are often overloaded with multiple meanings. The word “customer,” for instance, often means very different things to people in the finance, sales, or product departments. There are also conventions that, while not exactly precise, have accumulated common understanding over time. The term “men’s room,” for example, is understood to mean something more specific than a room (it has toilets); yet something less specific than men’s (it’s also available to boys).

They’re both “master”

The term “master” data derives from the notion that each individually identifiable thing has a corresponding, comprehensive and authoritative record in the system. The verb to master means to gain control of something. The word causes confusion, however, when used to distinguish master data from reference data. If anything, reference data is the master of master data, as it categorizes and supplies context to master data. The dependency graph below demonstrates that master data may refer to and thus depend on reference data (red arrow), but not the other way around:

They’re both “reference”

The name “reference data” also makes sense in isolation. It evokes reference works like dictionaries, which are highly curated by experts and typically used to look up individual terms rather than being read from beginning to end. But reference can also mean the act of referring, and in practice, master data has just as many references to it as reference data.  

So without some additional context, these terms are problematic in relation to each other.

It is what it is

Although we could probably conjure better terms, "Master Data" and "Reference Data" have become universal standards with innumerable citations. Any clarification provided by new names would be offset by their incompatibility with the consensus.

Pluralizations R Us

Whenever possible, it’s best to express terms in the singular rather than the plural since the singular form refers to the thing itself, while the plural form denotes a set. That’s why dictionaries always define the singular form and provide the plural forms as an annotation.  Consider the following singular and plural terms and definitions:

* Note that entity is used in the entity-relationship sense, where it denotes a type of thing rather than an identifiable instance of a thing.

The singular term “entity” works better for our purposes since the job at hand is to classify each entity as reference or master, rather than some amorphous concept of data. In our case, classifying each individual entity informs its materialized design in a database, its quality controls, and its integration process. The singular also makes it more natural to articulate relationships between things, as demonstrated by these awkward counterexamples:

“One bushels contains many apples.”

“Each data contains one or more entities.”

Good Things Come in Threes

Trying to describe the subject area with just two terms, master and reference, falls short because the relationship between the two cannot be fully understood without also defining the class that includes them both.  For example, some existing definitions specify a “disjoint” relationship in which an entity can belong to either reference or master data, but not both. This can be represented as a diagram or tree:

The conception is incomplete because the class that contains both reference and master data is missing.  Are master data and reference data equal siblings among other data categories, as demonstrated below?

That’s not pragmatic, since it falsely implies that master and reference data have no more potential for common governance and technology than, say, weblogs and image metadata. We can remedy that by subsuming master and reference data within an intermediate class, which must still be named, defined, and assigned the common characteristics shared by master and reference data.

Some definitions posit an inclusion or containment relationship in which reference data is a subset of master data, rather than a disjoint peer. This approach, however, omits the complement–the master data which is not reference data.

Any vocabulary that doesn’t specify the combination of master and reference data will be incomplete and potentially confusing.

It’s Just Semantics

Generally speaking, there are two broad categories of definitions: extensional and intensional.  

Extensional Definitions

An extensional definition simply defines an entity by listing all of its instances, as in the following example:

This is out of the question for defining reference or master data, as each has too many entities and regularly occurring additions. Imagine how unhelpful and immediately obsolete the following definition would be:

A variation of this approach, ostensive definition, uses partial lists as examples.  These are often used for “type” entities that nominally classify other things:

Ostensive definitions, unlike extensional definitions, can withstand the addition of new instances. They do not, however, explain why their examples satisfy the term. In fact, ostensive definitions are used primarily for situations in which it's hard to formulate a definition that can stand on its own. Therefore both extensional and ostensive definitions are inadequate, since they fail to provide a rationale to distinguish reference from master data.

Intensional Definitions 

Intensional definitions, on the other hand, define things by their intrinsic properties and do not require lists of instances.  The following definition of mineral, for example, does not list any actual minerals:

With that definition, we can examine the properties of quartz, for example, and determine that it meets the necessary and sufficient conditions to be deemed a mineral.  Now we’re getting somewhere, and existing definitions have naturally used this approach.  

Unfortunately, the conditions put forth in the existing definitions of master and reference data can describe either, rather than one or the other. The following table shows that every condition in the intensional definitions of master and reference data applies to both terms:

How can you categorize the product entity, for example, when it adheres to both definitions? It definitely conforms to the definition of master–a core thing shared across an enterprise. But it also conforms to reference, as it’s often reasonably stable and simply structured, used to categorize other things (sales), provides a list of permissible values (order forms), and corresponds to external databases (vendor part lists).  I could make the same case for almost any entity categorized as master or reference, and this is where the definitions fail.


Celebrate Diversity

Although they share the same intrinsic qualities, master and reference data truly are different and require separate terms and definitions. Their flow through a system and their respective quality control processes, for instance, are quite distinct.  

Reference data is centrally administered and stored. It is curated by an authoritative party before becoming available in its system of record, and only then is it copied to application databases or the edge. An organization, for instance, would never let a user casually add a new unit of measure or a new country.

Master data, on the other hand, is often regularly added and modified in various distributed systems. New users register online, sales systems acquire new customers, organizations hire and fire employees, etc. The data comes in from the edge during the normal course of business, and quality is enforced as it is merged into the systems of record.

Master data and reference data change and merge

Companies must distinguish between master and reference data to ensure their quality and proper integration.


Turn The Beat Around

It’s entirely reasonable and common to define things by their intrinsic qualities and then use those definitions to inform their use and handling. Intuition tells us that once we understand the characteristics of a class of data, we can assess how best to manage it. But since the characteristics of master and reference data overlap, we need to approach their definitions differently.

 

In software architecture and design, there’s a technique called Inversion of Control that reverses the relationship between a master module and the process it controls. It essentially makes the module subservient to the process. We can apply a similar concept here by basing our definitions on the processes required by the data, rather than trying to base the processes on insufficiently differentiated definitions. This allows us to pragmatically define terms that abide by the conclusions described above:

  1. Continue to use the industry-standard terms “master data” and “reference data.”
  2. Define terms in the singular form.
  3. Define a third concept that encompasses both categories.
  4. Eschew extensional and ostensive definitions, and use intensional definitions that truly distinguish the concepts.

With all that out of the way, here are the definitions that have brought clarity and utility to my work with master and reference data. I’ve promoted the term “core” from an adjective of master data to a first-class concept that expresses the superclass encompassing both master and reference entities.

With core defined, we can use a form of intensional definition called genus differentia for reference and master data. Genus differentia definitions have two parts. The first, genus, refers to a previously defined class to which the concept belongs–core entity, in our case. The rest of the definition, the differentia, describes what sets it apart from others in its class. We can now leverage our definition of core entity as the genus, allowing the data flow to provide the differentia. This truly distinguishes reference and master.

We can base the plural terms on the singular ones:

Conclusion

This article has revealed several factors that have handicapped our understanding of master and reference data:

  • The names and prevailing definitions insufficiently distinguish the concepts because they apply to both.
  • The plural form of a given concept obscures its definition.
  • Master data and reference data are incompletely described without a third class that contains both. 

Although convention dictates retention of the terms “master” and “reference,” we achieve clarity by using genus differentia to demonstrate that while they are both classified as core entities, they are truly distinguished by their flow and quality requirements rather than any intrinsic qualities or purpose.

By Alan Freedman

Connect with the Author

Want to learn more about what we do at Semantic Arts? Contact us!

When is a Brick not a Brick?

They say good things come in threes, and my journey to data-centricity started with three revelations.

The first was connected to a project I was working on for a university college with a problem that might sound familiar to some of you. The department I worked in was taking four months to clean, consolidate and reconcile our quarterly reports to the college executive. We simply did not have the resources to integrate incoming data from multiple applications into a coherent set of reports in a timely way.

The second came in the form of a lateral thinking challenge worthy of Edward de Bono: ‘How many different uses for a brick can you think of?’

The third revelation happened when I was on a consulting assignment at a multinational software company in Houston, Texas. As part of a content management initiative we were hired to work with their technical documentation team to install a large ECM application. What intrigued me the most, though, were the challenges the company experienced at the interface between the technology and the ‘multiple of multiples’ with respect to business language.

Revelation #1: Application Data Without the Application is Easy to Work With

The college where I had my first taste of data-centricity had the usual array of applications supporting its day-to-day operations. There were Student systems, HR systems, Finance systems, Facility systems, Faculty systems and even a separate Continuing Education System that replicated all those disciplines (with their own twists, of course) under one umbrella.

The department I worked in was responsible for generating executive quarterly reports for all activities on the academic side, plus semi-annual faculty workload and annual graduation and financial performance reports. In the beginning we did this piecemeal and as IT resources became available. One day, we decided to write a set of specifications about what kind of data we needed; to what level of granularity; in what sequence; and, how frequently it should be extracted from various sources.

We called the process 'data liquefication' because once the data landed on our shared drive the only way we could tell what application it came from was by the file name. Of course, the contents and structure of the individual extracts were different, but they were completely pliable. Detached from the source application, we had complete freedom to do almost anything we wanted with it. And we did. The only data model we had to build (actually, we only ever thought about it once) was which 'unit of production' to use as the 'center' of our new reporting universe. To those of you working with education systems today, the answer will come as no surprise. We used 'seat'.

Figure 1: A Global Candidate for Academic Analytics

Once that decision was taken, and we put feedback loops in to correct data quality at source, several interesting patterns emerged:

  • The collections named Student, Faculty, Administrator and Support Staff were not as mutually exclusive as we originally thought. Several individuals occupied multiple roles in one semester.
  • The Finance categories were set up to reflect the fact that some expenses applied to all Departments; some were unique to individual Departments; and, some were unique to Programs.
  • Each application seemed to use a different code or name or structure to identify the same Person, Program or Facility.

From these patterns we were able to produce quarterly reports in half the time. We also introduced ‘what-if’ reporting for the first time, and since we used the granular concept of ‘seat’ as our unit of production we added Cost per Seat; Revenue per Seat; Overhead per Seat; Cross-Faculty Registration per Seat; and, Longitudinal Program Costs, Revenues, Graduation Rates and Employment Patterns to our mix of offerings as well.

Revelation #2: A Brick is Always a Brick. How it is Used is a Separate Question

When we separate what a thing “is” from how it is used, some interesting data patterns show up. I won’t take up much space in this article to enumerate them, but the same principle that can take ‘one thing’ like an individual brick and use it in multiple ways (paper weight, door stop, wheel chock, pendulum weight, etc.) puts the whole data classification thing in a new light.

The string "John Smith" can appear, for example, as the name of a doctor, a patient, a student, an administrator and/or an instructor. This is a similar pattern to the one that popped up at the university college. As it turns out, that same string can be used as an entity name, an attribute, metadata, reference data and a few other popular 'sub-classes' of data. They are not separate collections of 'things' as much as they are separate functions of the same thing.

Figure 2: What some ‘thing’ is and how it is used are two separate things

The implication for me was to classify ‘things’ first and foremost as what they refer to or in fact what they are. So, “John Smith” refers to an individual, and in my model surrounding data-centricity “is-a”(member of the set named) Person. On the other side of the equation, words like ‘Student’, ‘Patient’, and ‘Administrator’ for example are Roles. In my declarations, Student “is-a”(member of the set named) Role.
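
Here's a minimal sketch of those declarations expressed as triples with Python's rdflib. The class and property names (Person, Role, hasRole) mirror the description above, and the URIs are illustrative.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/")
    g = Graph()

    # What the thing IS: "John Smith" identifies an individual Person.
    g.add((EX.JohnSmith, RDF.type, EX.Person))

    # How the thing is USED: Student and Patient are Roles...
    g.add((EX.Student, RDF.type, EX.Role))
    g.add((EX.Patient, RDF.type, EX.Role))

    # ...and the same person can hold several of them at once.
    g.add((EX.JohnSmith, EX.hasRole, EX.Student))
    g.add((EX.JohnSmith, EX.hasRole, EX.Patient))

    for role in g.objects(EX.JohnSmith, EX.hasRole):
        print(role)

Keeping the what-it-is declaration separate from the how-it-is-used declarations is what lets the same individual carry several roles without duplicating the Person.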

One of the things this allowed me to do was to create a very small (n = 19) number of mutually exclusive and exhaustive sets in any collection. This development also supported the creation of semantically interoperable interfaces and views into broadly related data stores.

Revelation #3: Shape and Semantics Must be Managed Separately and on Purpose

The theme of separation came up again while working on a technical publications project in Houston, Texas. Briefly, the objective was to render application user support topics into their smallest reusable chunks and make it possible for technical writers to create document maps ranging from individual Help files in four different formats to full-blown, multi-chapter user guides and technical references. What really made the project challenging was what we came to call the 'multiple of multiples' problem. This turned out to be the exact opposite of the reuse challenge in Revelation #1:

  • Multiple customer platforms
  • Multiple versions of customer platforms
  • Multiple product families (Mainframe, Distributed and Hybrid)
  • Multiple product platforms
  • Multiple versions of product platforms
  • Multiple versions of products (three prior, one current, and one work-in-progress)
  • Multiple versions of content topics
  • Multiple versions of content assemblies (guides, references, specification sheets, for example)
  • Multiple customer locales (United States, Japan, France, Germany, China, etc.)
  • Multiple customer languages (English (two 'flavours'), Japanese, German, Chinese, etc.)

The solution to this 'factorial mess' was not found in an existing technology (including the ECM software we were installing). It came about not only by removing all architectural and technical considerations (as we did in Revelation #1), but also by asking what it means to say: "The content is the same" or "The content is different."

In the process of comparing two components found in the ‘multiple of multiples’ list, we discovered three factors for consideration:

  1. The visual ‘shape’ of the components. ‘Stop’ and ‘stop’ look the same.
  2. The digital signatures of the components. We used an MD5 hash to do this (see the sketch after this list).
  3. The semantics of the components. We used translators and/or a dictionary.
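
Here's a rough sketch, in Python, of how those three comparisons might be wired together. The normalization step and the stand-in 'meaning' strings are illustrative assumptions rather than the exact process we used.

    import hashlib
    import unicodedata

    def shape(text: str) -> str:
        # Factor 1: the visual 'shape'. Normalize Unicode and whitespace
        # so visually identical renderings compare equal.
        return " ".join(unicodedata.normalize("NFC", text).split())

    def signature(text: str) -> str:
        # Factor 2: the digital signature, an MD5 hash of the raw bytes.
        return hashlib.md5(text.encode("utf-8")).hexdigest()

    def compare_components(a: str, b: str, meaning_a: str, meaning_b: str) -> dict:
        # Factor 3 (semantics) is represented here by pre-translated
        # 'meaning' strings standing in for a dictionary or translator.
        return {
            "same_shape": shape(a) == shape(b),
            "same_signature": signature(a) == signature(b),
            "same_semantics": meaning_a == meaning_b,
        }

    print(compare_components("Stop", "Arrêt", meaning_a="stop", meaning_b="stop"))

In that framing, a component pair that matches on all three factors is a straightforward reuse candidate, while a pair that matches only on semantics is likely a translation.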

Figure 3 shows the matrix we used to demonstrate the tendency of each topic to be reused (or not) in one of the multiples.

Figure 3: Shape, Signal and Semantics for Content Component Comparison

It turns out that content can vary as a result of time (a version), place (a locale with different requirements for the same feature, for example), people (different languages) and/or format (saving a .docx file as a PDF). In addition to changes in individual components, assemblies of components can have their own identities.

This last point is especially important. Some content was common to all products the company sold. Other content was variable along product lines, client platform, target market and audience. Finally, the last group of content elements was unique to a specific combination of parameters.

Take-Aways

Separating data from its controlling applications presents an opportunity to look at it in a new way. Removed from its physical and logical constraints, data begins to look a lot like the language of business. While the prospect of liberating data this way might horrify many application developers and data modelers out there, those of us trying to get the business closer to the information it needs to accomplish its goals see the beginning of a more naturally integrated way of doing that.

The Way Forward with Data-Centricity

Data-centricity in architecture is going to take a while to get used to. I hope this post has given readers a sense of what the levers to making it work might look like and how they could be put to good use.

Click here to read a free chapter of Dave McComb’s book, “The Data-Centric Revolution”

Article by John O’Gorman

Connect with the Author

My Path Towards Becoming A Data-Centric Revolution Practitioner

In 1986 I started down a path that, in 2019, has made me a fledgling Data-Centric revolution practitioner. My path towards the Data-Centric revolution started in 1986, when my wife and I founded two micro-businesses in the music and micro-manufacturing industries. In 1998 I put the music business, EARTHTUNES, on hold and sold the other; then I started my Information Technology career. For the last 21 years I've covered hardware, software, networking, administration, data architecture and development. I've mastered relational and dimensional design, working in small and large environments. But my EARTHTUNES work in 1994 powerfully steered me toward the Data-Centric revolution.

In early 1994 I was working on my eighth, ninth and tenth nature sound albums for my record label EARTHTUNES. (See album cover photos below.) The year before, I had done 7 months’ camping and recording in the Great Smoky Mountains National Park to capture the raw materials for my three albums. (To hear six minutes of my recording from October 24, 1993 at 11:34am, right-click here and select open link in new tab, to download the MP3 and PDF files—my gift to you for your personal use. You may listen while you finish reading below, or anytime you like.)

In my 1993 field work I generated 268 hours of field recordings with 134 field logs. (See below for my hand-written notes from the field log.)

Now, in 1994, I was trying to organize the audio recordings’ metadata so that I could select the best recordings and sequence them according to a story-line across the three albums. So, I made album part subtake forms for each take, each few-minutes’ recording, that I thought worthy of going on one of the albums. (See the image of my Album Part Subtake Form, below.)

I organized all the album part subtake forms—all my database metadata entries—and, after months of work, had my mix-down plan for the three albums. In early summer I completed the mix and Macaulay Library of Nature Sound prepared to publish the “Great Smoky Mountains National Park” series: “Winter & Spring;” “Summer & Fall;” and “Storms in the Smokies.”

The act of creating those album part subtake forms was a tipping point towards my becoming a Data-Centric revolution practitioner. In 1994 I started to understand many of the principles defined here and in chapter 2 of Dave McComb's "The Data-Centric Revolution: Restoring Sanity to Enterprise Information Systems". Since then I have internalized them and started walking them out. The words below are my understandings of the principles, adapted from the Manifesto and McComb's book.

  • All the many different types of data needed to be included: structured, semi-structured, network-structured and unstructured. Audio recordings and their artifacts; business and reference data; and other associated data, altogether, were my invaluable, curated inter-generational asset. These were the only foundation for future work.
  • I knew that I needed to organize my data in an industry-standard, archival, human-readable and machine-readable format so that I could use it across all my future projects, integrate it with external data, and export it into many different formats. Each new project and whatever applications I made or used would depend completely upon this first class-citizen, this curated data store. In contrast, apps, computing devices and networks would be, relative to the curated data, ephemeral second-class citizens.
  • Any information system I built or acquired had to be evolve-able and specialize-able: it had to have a reasonable cost of change as my business evolved, and the integration of my data needed to be nearly free.
  • My data was an open resource that must be shareable, that needed to far outlive the initial database application I made. (I knew that a hundred or so years in the future, climate change would alter the flora and fauna of the habitats I had recorded in; this would change the way those habitats sounded. I was convicted that my field observation data, with recordings, needed to be perpetually accessible as a benchmark of how the world had changed.) Whatever systems I used, the data must have its integrity and quality preserved.
  • This meant that my data needed to have its meaning precisely defined in the context of long-living semantic disciplines and technologies. This would enable successive generations (using different applications and systems) to understand and use my lifework, enshrined in the data legacy I left behind.
  • I needed to use low-code/no-code as much as possible; to enable this I wanted the semantic model to be the genesis of the data structures, constraints and presentation layer, being used to generate all or most data structures and app components/apps (model-driven everything). I needed to use established, well-fitting-with-my-domain ontologies, adding only what wasn’t available and allowing local variety in the context of standardization (specialize-able and single but federated). (Same with the apps.)

From 1994 to the present I’ve been seeking the discipline and technology stacks that a handful of architects and developers could use to create this legacy. I think that I have finally found them in the Data-Centric revolution. My remaining path is to develop full competence in the appropriate semantic disciplines and technology stacks, build my business and community and complete my information system artifacts: passing my work to my heirs over the next few decades.

Article By Jonathon R. Storm

Jonathon works as a data architect helping to maintain and improve a Data-Centric information system that is used to build enterprise databases and application code in a Data-Centric company. On weekends, Jonathon continues to record the music of the wilderness; in the next year he plans to get his first EARTHTUNES website online to sell his nature sound recordings. You can email him at [email protected] to order now.
