Semantic Messages

Interfaces and Interactions

Too Much Specificity And Not Enough Play

I recently saw this tweet and it reminded me of something I’ve wanted to think and talk about.


Satnam continues

configuration management has not had the attention enjoyed by academic research for languages and networking, as well as language and networking innovations in industry.

I don’t think a “configuration language” is the solution, nor is a domain specific language / library (DSL).

I tend to agree. I think perhaps we should explore more loosey-goosey, declarative approaches. That is, I’d like to explore systems with more play (as in “scope or freedom to act or operate”).

I’d like to see more semantic messages that convey the spirit rather than the letter. When you can’t foresee all the consequences of the letter then that’s when the spirit can help.

That’s what I’d like to think about in this post.

Let’s see an example of such a loosey-goosey semantic message.

Semantic Messages

I’m writing another blog post on what “semantic” means in the semantic web. I’ll put a link here once it is done, but in the meantime think of “semantic” as getting different things (different people, different machines, people and machines, etc.) to see eye to eye. Yes, a tall order, but I’m optimistic about it.

The hypothetical situation is that I have an instance of Apache Jena Fuseki (a database for RDF) running on my local machine. A software agent (semantic web style) that knows how to interact with Apache Jena Fuseki is also running there. Additionally, I run my own software agent (semantic web style) to whom I make requests.

I have a file on my machine that I want to load into a dataset on the Apache Jena Fuseki instance. I type this request to my agent “load /mnt/toys/gifts.ttl into Apache Jena Fuseki listening on port 3030 at dataset ‘gifts’ on 25 Dec early in the morning.”

My agent produces the following RDF (or I do by some other means) in TriG serialization:

@prefix : <https://example.com/> .
@prefix gist: <https://ontologies.semanticarts.com/gist/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix schema: <http://schema.org/> .

:message0 a gist:Message ;
  gist:comesFromAgent [ gist:name "Justin Dowdy" ;
                        gist:hasAddress [ gist:containedText "[email protected]" ] ] ;
  gist:isAbout :message0content .

:message0content a gist:Content, :NamedGraph ;
  rdfs:comment "this named graph is the content of the message" .

:message0content {
  :message0content gist:hasGoal :goal0 .
  :goal0 a gist:Goal ;
    rdfs:comment "this is the goal specified in the content of the message" ;
    gist:isAbout :goal0content .
}

:goal0content a gist:Content , :NamedGraph ;
  rdfs:comment "this named graph is the content of the goal" .

:goal0content {
  [ a gist:Event ;
    gist:produces [ a gist:Content ;
                    gist:isBasedOn [ a gist:FormattedContent ;
                                     gist:hasAddress [ gist:containedText "file:///mnt/toys/gifts.ttl" ] ;
                                     gist:isExpressedIn [ a gist:MediaType ;
                                                          schema:encodingFormat "text/turtle" ] ] ;
                    gist:isPartOf [ a gist:Content ;
                                    gist:name "gifts" ;
                                    rdfs:comment 'the dataset called "gifts"' ;
                                    gist:isPartOf [ a gist:System ;
                                                    gist:hasAddress [ gist:containedText "http://127.0.0.1:3030" ] ;
                                                    gist:name "Apache Jena Fuseki" ] ] ] ;
   gist:plannedStartDateTime "2022-12-25T01:00:00Z"^^xsd:dateTime ]
}

Side Note

You might notice that I’ve used the URI of an RDF named graph in a place where a resource would typically be expected. In this blog post I am also exploring the use of named graphs to represent the content of goals (gist:Goal). Really, a named graph could represent the content of many different types of things.

Back to the semantic message example

My agent then puts that RDF onto the semantic message bus (the bus where agents listen for and send RDF) on my local machine. The agent that governs Apache Jena Fuseki sees the RDF and recognizes that it knows how to handle the request.

The Fuseki agent that interprets that RDF needs to know things like:

  • that it is capable of and allowed to handle requests to load data into the Apache Jena Fuseki running on localhost at port 3030
  • how to use GSP or some other programmatic method to load data into Fuseki
    • how to reference a dataset, or optionally create one if the desired one does not exist
  • how to delay the execution of this (since the gist:plannedStartDateTime is in the future)

My agent needs to know things like:

  • it is allowed to make assumptions
    • e.g. if I reference a date without a year when talking about a goal, I probably mean the next occurrence of that date
  • it can look in my existing graphs (perhaps my “personal knowledge graphs”) to gather information

Fuseki’s agent can’t be too finicky about interpreting the RDF. The RDF isn’t really a request conforming to a contract; it is more of a spirit of a request.

If you are familiar with RDF and gist, the spirit of the RDF is pretty clear: “early in the morning on December 25th find the file /mnt/toys/gifts.ttl and load it into the dataset ‘gifts’ on the Apache Jena Fuseki server running on localhost at port 3030.”

If the agent saw this message (or a similar one) but knew the message content wasn’t sufficient for it to do anything, it would reply, by putting RDF onto the semantic message bus, with the content of another goal, as if to say “did you mean this?” There could be a back and forth between my agent and the agent governing Apache Jena Fuseki as my agent figures out how to schedule the ingestion of that data.
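Such a reply might look something like this (a sketch in the same style as the message above; :goal1content would hold the agent’s proposed interpretation):

:message1 a gist:Message ;
  gist:comesFromAgent [ gist:name "Fuseki agent" ] ;
  gist:isAbout :message1content .

:message1content a gist:Content , :NamedGraph ;
  rdfs:comment "did you mean this?" .

:message1content {
  :message1content gist:hasGoal :goal1 .
  :goal1 a gist:Goal ;
    gist:isAbout :goal1content .
}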

But this time Fuseki’s agent knew what to do. It runs the following command:

at 1am dec 25 <<'EOF'
curl -X POST 'http://127.0.0.1:3030/gifts/data' -H 'Content-type: text/turtle' --data-binary @/mnt/toys/gifts.ttl
EOF

My agent receives some confirmation RDF and the gifts should be available, via SPARQL, before the kids wake up on Christmas morning.

The Article

In this post I’m mostly sketching out some of the consequences of the ideas presented in this 2001 Scientific American article.

Standardization can only go so far, because we can’t anticipate all possible future needs.

Right on.

The Semantic Web, in contrast, is more flexible. The consumer and producer agents can reach a shared understanding by exchanging ontologies, which provide the vocabulary needed for discussion.

I’m less optimistic that we’ll sort out useful ontology exchange anytime soon. In the meantime I think picking a single upper ontology that is squishy in the right ways is a path forward.

Semantics also makes it easier to take advantage of a service that only partially matches a request.

I think for semantics to work in this way we have to accept that our systems will become more adaptive at the cost of occasionally being wrong. That is, they must stop being brittle.

Brittle:

  • by design, shouldn’t ever be wrong
  • when it sees something unexpected it stops or breaks

Adaptive:

  • by design, could be wrong
  • when it sees something unexpected it tries to figure it out

That might be hard for people to accept. Perhaps it is why we haven’t progressed much on this kind of agent since the 2001 article.

Closing

I haven’t sketched everything out. For example, what if the command fails on 25 Dec because the file is missing? I’d expect the Fuseki agent to tell my agent. Also maybe my agent could periodically check that the file is accessible and report back to me if it isn’t.

Anyway, I imagine you get the idea.

I do think a requirement of semantic message buses is that all agents must have the same world view and speak the same language. Ontologies set the world view and language. I used the gist upper ontology for my example.

Maybe make an agent! Or let me know what you think about this stuff.

Software Development process expressed in a Knowledge Graph

More often than not we receive pushback that RDF (the standard model for data exchange on the web) is difficult to use. However, the simplicity of triples (subject, predicate, object) and the ability to compose queries that read like written language make this form ideal for technical business users, capability owners, and data scientists alike. Business users possess deep business knowledge, and their inquisitive nature results in an endless list of questions to answer. Why wait weeks for an engineer to get to the request when greater accessibility can be achieved with semantic web capabilities?

We bring this idea to a git repository: by making some thoughtful RDF we can capture answers to those questions and enable richer interrogation.

Motivation
A while back cURL’s creator, Daniel Stenberg, tweeted some stats on cURL’s git repository.

I have a pretty good idea about how he answered those questions. I bet he used tools like sed, awk, and grep. If I had to answer those questions I too might use those CLI utilities in a throwaway shell pipeline. But I wondered what it would be like to answer them semantic web style.

What
In order to answer questions semantic web style you first have to find or make some thoughtful RDF. I say “thoughtful” because it is possible, though mostly not desirable, to use the semantic web stack (RDF/SPARQL/OWL/SHACL, etc.) without doing much domain modeling.

In our case we can easily get some structured data to start with. Here is a git commit I just made:

Notice how compact that representation is. The meat of that text is the unified output format of the diff tool. If you work with git much you probably recognize what most of that is. But the semantic web isn’t about just allowing you to work with data you already know how to decipher. To participate in the semantic web we need to unpack this compact application-centric representation into a data-centric representation so that others don’t need to do the deciphering. In the semantic web we want data to wear its meaning on its sleeve.
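If the unified diff format is hazy for you, here is a minimal made-up example of that kind of commit text (the hash, file name, and content are all hypothetical):

commit 8f3a2c1
Author: Justin Dowdy <[email protected]>

    add a greeting

diff --git a/hello.txt b/hello.txt
new file mode 100644
--- /dev/null
+++ b/hello.txt
@@ -0,0 +1,2 @@
+hello
+world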

That compact representation is fine for the diff and patch tools but it doesn’t check any of the boxes we care about here: it is application-centric, it needs deciphering, and it doesn’t wear its meaning on its sleeve.

Ok, I ran my conversion tool on that commit and it transformed that representation into a thoughtful RDF graph.

Let’s take a look (using RDFox’s graph viz):

You’ll notice that commits have parts: hunks.

Those hunks, when applied, produce contiguous lines.

Those contiguous lines:

  • occur in a text file with a name
  • are identified by a line number
  • have a magnitude with a unit of measure (line count)
  • and have the literal contained text

Note that I’ve used Wikidata entities because Wikidata is a nice hub in the semantic web. Here is a Wikidata subgraph with labels that are relevant for the RDF I’ve produced:

By the way, don’t let those Q numbers scare you. I don’t memorize them (well I do have wd:Q2 memorized since it is pretty special). I use auto completion in my text editor and Wikidata has it here too. You just type wd: then press control-enter then type what you want. Also, Wikidata has some good reasons for using opaque IRIs.

Here is the RDF graph (the same one as in the image above) in turtle serialization:
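In sketch form, the graph has roughly this shape (wd:Q86920, gist:name, and gist:containedText are the terms used in the queries below; the other class and predicate IRIs are stand-ins for the ones my tool actually emits):

@prefix gist: <https://ontologies.semanticarts.com/gist/> .
@prefix wd:   <http://www.wikidata.org/entity/> .
@prefix :     <https://example.com/> .

:commit0 :hasPart :hunk0 .                # a commit has parts: hunks

:hunk0 :produces :lines0 .                # applying the hunk produces contiguous lines

:lines0 :occursIn :file0 ;                # the lines occur in a text file...
  :lineNumber 1 ;                         # ...are identified by a line number...
  :lineCount 2 ;                          # ...have a magnitude (unit: line count)...
  gist:containedText "hello\nworld\n" .   # ...and carry the literal text

:file0 a wd:Q86920 ;                      # the file class used in the queries below
  gist:name "hello.txt" .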

Here is another commit:

And the RDF:

You’ll notice that this commit does a little more. The hunk produces contiguous lines as before. The hunk also affects contiguous lines. That is because this commit does not add a new file; it changes an existing file by replacing some contiguous lines with some other contiguous lines.

Why
At this point maybe you’re wondering why the data isn’t more “direct.” The RDF seems to spread things out and use generic predicates (produces, occurs in, etc.). That is intentional.

My conversion utility does use some intermediate “direct” data:
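(Roughly: one flat record per hunk, with fields like the file name, the starting line number, the line count, and the text itself. The exact field names aren’t important here.)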


But that data does not snap together with other data like RDF does. It does not have formal semantics. It has not unpacked the meaning of the data. It is more like an ad hoc projection of data. It is not something I would want to pass around between applications.

There are some nice things about using RDF to express the content of a git repository. This is not a comprehensive list but rather just stuff that I thought of while doing this project:

(1)

You can start anywhere with queries.

If you want to find all things with names you just:

PREFIX gist: <https://ontologies.semanticarts.com/gist/>

select * where {
  ?s gist:name ?name .
}

If you want to find all files with names:

PREFIX gist: <https://ontologies.semanticarts.com/gist/>
PREFIX wd: <http://www.wikidata.org/entity/>

select * where {
  ?s a wd:Q86920 .
  ?s gist:name ?name .
}

You don’t need to know structurally where these “fields” live.

(2)

You define things in terms of more primitive things.

For example, if you look on Wikidata you’ll see that commit is defined in terms of changeset and version control. Hunk is defined in terms of diff unified format and line.

Eventually definitions bottom out in really primitive things that aren’t defined in terms of anything else.

One of the reasons this is helpful is that you can query against the more primitive things and get back results containing more composite things (built up from the more primitive things).
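For instance, assuming the relevant Wikidata subclass-of links (wdt:P279) are loaded alongside the commit data, a query against a primitive class can return instances of any class defined in terms of it:

PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

# find instances of the file class from earlier, or of anything
# defined (directly or indirectly) as a kind of it
SELECT ?thing WHERE {
  ?thing a ?class .
  ?class wdt:P279* wd:Q86920 .
}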

(3)

You are encouraged (if you use a thoughtful upper ontology such as Gist) to unpack meaning.

I think of the semantic web as something like the exploded part diagram for the web’s data.

Yes, it takes up more space than a render of the fully assembled thing, but all the components you might want to talk about are addressable and their relationships to other components are evident.

One example of how not unpacking makes question answering harder is how Wikidata packs up postal code ranges with an en dash (–).

If you query Wikidata to see what region has postal code “10498” allocated to it you won’t find any results. You’ll instead have to unpack the values yourself: some of them are really a range of postal codes designated with an en dash, so you need a query that splits out the start and stop numbers and then enumerates the range (or compares against its bounds).
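A sketch of that workaround, assuming Wikidata’s postal code property (wdt:P281) and comparing against the range’s bounds rather than enumerating it:

PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

SELECT ?region WHERE {
  ?region wdt:P281 ?pc .
  # unpack values like "10400–10499" (an en dash separates the bounds)
  BIND(xsd:integer(STRBEFORE(STR(?pc), "–")) AS ?lo)
  BIND(xsd:integer(STRAFTER(STR(?pc), "–")) AS ?hi)
  FILTER( STR(?pc) = "10498" ||
          ( BOUND(?lo) && ?lo <= 10498 && 10498 <= ?hi ) )
}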

If you require users to unpack all your representations before they use them then maybe they’ll lose interest and move on to something else.

A thoughtful ontology will help you carve the world at its joints, putting points of articulation between things, by having a thoughtful set of generic predicates. You might not be using a thoughtful ontology if you can connect any two arbitrary things with a single edge.

The unified output format for diff works well for the git and patch programs but not for humans asking questions.

Sure, unpacked representations mean more data (triples) but the alternatives (application-centric data, LPGs/RDF-Star, etc.) are like bodge wires:

They are acceptable for your final act, maybe, but not something you’d want to build upon.

(4)

RDF allows for incremental enrichment.

As a follow-up to this project I think it would be interesting to transform CWEs (Common Weakness Enumeration) and CVEs (Common Vulnerabilities and Exposures) into RDF and connect them to the git repositories where the vulnerability-producing code lives.

(5)

More people can ask questions of the data.

SPARQL is a declarative query language. The ease of using SPARQL has a bit to do with the thoughtfulness of the domain modeling.

Below I pose several questions to the data and I obtain answers with SPARQL.

Answering Questions About cURL
The cURL git repo has about 29k commits, going back to 1999.

My conversion tool turned it into just under 8 million triples in 70 minutes. I haven’t focused on execution efficiency yet; I wanted to run queries against the data to get a feel for the utility of this approach before I refine the tool.

Let’s answer some questions about the development of cURL.

How many deleted lines per person?
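Sketched, with placeholder predicates standing in for the exact terms my conversion tool emits (gist:name is as before):

PREFIX gist: <https://ontologies.semanticarts.com/gist/>
PREFIX :     <https://example.com/>

# sum the line-count magnitudes of the contiguous lines
# that each person's hunks affect (replace or remove)
SELECT ?name (SUM(?n) AS ?deletedLines) WHERE {
  ?commit :authoredBy ?person .   # placeholder predicate
  ?hunk gist:isPartOf ?commit ;
        :affects ?lines .         # placeholder: the lines a hunk replaces
  ?lines :lineCount ?n .          # placeholder magnitude
  ?person gist:name ?name .
}
GROUP BY ?name
ORDER BY DESC(?deletedLines)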

Result:

How many deleted files per person?

What files did a particular person delete and when?

Which commits affected lib/http.c in 2019 only?

Which persons have authored commits with the same email but different names?
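A sketch (reusing the gist address pattern from the semantic message example in place of whatever my tool actually emits for emails):

PREFIX gist: <https://ontologies.semanticarts.com/gist/>

SELECT ?email (GROUP_CONCAT(DISTINCT ?name; SEPARATOR=", ") AS ?names)
WHERE {
  ?person gist:name ?name ;
          gist:hasAddress/gist:containedText ?email .
}
GROUP BY ?email
HAVING (COUNT(DISTINCT ?name) > 1)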

Result:

Which persons have authored commits with the same name and different email?

Result:

Which persons have authored commits in libcurl’s lib/ directory (this includes deleting something in there)?

Depending on how you count the people, the query finds between 678 and 727 people who authored commits in libcurl’s lib/ directory. That was Daniel’s first question. He got 629 with his method, but that was a few months ago and I don’t know exactly what his method of counting was. He may not have included the act of deleting a file in that directory like I did.

To answer his next three questions I’d need to record each commit’s parent commit (I don’t yet — one of my many TODOs) and simulate the application of hunks in SPARQL or add the output of git blame to the RDF. Daniel likely used the output of git blame. I’ll think about adding it to the RDF.

How
In another blog post I might describe how the conversion utility works. It is written in Clojure and it uses SPARQL Anything (which is built upon Apache Jena). I expect to push it to Github soon.

Closing Thoughts
It is fun to imagine having all the git repos in Github as RDF graphs in a massive triplestore and asking questions with SPARQL.

In my example queries I didn’t make use of the fact that each source code line is in the RDF. Most triplestores have full text search capabilities so I’ll write some queries that make use of that too. In general I haven’t been overly impressed with the search built into GitLab and Bitbucket (I haven’t used GitHub’s search much) so I wonder if keeping an RDF representation with full text search would be a useful approach. I’d love to see SPARQL endpoints for searching hosted git platforms!
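With Apache Jena, for example, assuming a jena-text index configured over gist:containedText, such a query might look like:

PREFIX text: <http://jena.apache.org/text#>
PREFIX gist: <https://ontologies.semanticarts.com/gist/>

# find contiguous lines whose text mentions strcpy
SELECT ?lines WHERE {
  ?lines text:query (gist:containedText "strcpy") .
}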

I think this technique could be applied to other application-centric file formats. SPARQL Anything gets you part of the way there for several file formats but I’d like to hear if you have other ideas.

Join the discussion on twitter!

Resisting the Temptation of Fused Edges

Fused Edges

If you are doing domain modeling and using a graph database you might be tempted to use fused edges. You see them around the semantic web. But you should resist the temptation.

What

In a graph database a fused edge occurs when a domain modeler uses a single edge where a node and two edges would be more thoughtful. To me a fused edge feels like running an interstate through an area of interest and not putting an exit nearby. It also feels like putting a cast on a joint that normally articulates.

Here is an example of a fused edge:

fused edges

And here is what that fused edge looks like in turtle (a popular RDF graph serialization):

:event01 :venueName "Olive Garden" .

You can usually see the fusion of edges in the name of the edge: there is a “venue” and there is a “name.”

Here is a more thoughtful representation:

articulating edges

with an additional point of articulation: the venue.

:event01 :occursIn :venue01 .
:venue01 :name "Olive Garden" .

Here is another common fused edge:

:person02 :mothersMaidenName "Smith" .

vs.

:person02 :hasMother :person01 .
:person01 :maidenName "Smith" .

Why

I can think of three reasons why fused edges might be used (two of which I heard from other people). Let’s use the event and venue example.

  1. Your source data may not have details about the venue other than its name.

  2. “you get better #findability with dedicated properties”

  3. Fewer nodes in a graph likely means fewer hardware resources are required.

Let me attempt to persuade you that you should mostly ignore those reasons to use fused edges.

(1)

One of the ideas of the semantic web is AAA: Anyone can say Anything about Any topic.

It is hard for someone to say something about the venue (perhaps its address, current owner, hours of operation, other events that occur there, etc.) if no node exists in the graph for it. With the fused edge, if someone does come along later and wants to express the venue’s address, it is not a straightforward update. You’d have to make a new venue node, find the event node in the graph, find all the edges expressing facts about the venue and move them to the new venue node, then connect the event to the new venue node. Finding all the edges hanging off of the event that express facts about the venue will likely be a manual effort — there probably won’t be clever data for the machine to use that says :venueName is not a direct attribute of the event but rather a direct attribute of the venue not yet represented in the graph.

Also, fused edges encourage the use of additional fused edges. If you don’t have a node to reference then a modeler might make more fused edges in order to express additional information.

(2)

Giving a shortcut a name can be valuable, yes.

But I think if you use a shortcut the details that the shortcut hides should also be available. If you use fused edges those details are not available; there is only the shortcut.

There are ways to have dedicated properties without sacrificing the details.

In SPARQL you can use shortcuts: property paths. In OWL you can define those shortcuts: property chains.

In a SPARQL query you could just do

?event :occursIn/:name ?venue_name .

Or you could define that in OWL

:venueName  owl:propertyChainAxiom  ( :occursIn  :name ) .

And if you have an OWL 2 reasoner active you can just query using the shortcut you just defined

?event :venueName ?venue_name .

(3)

Ok, using fused edges does reduce the number of triples in your graph. I can put a billion triples in a triplestore on my laptop and query durations will probably be acceptable. If I put 100 billion triples on my laptop, query durations might not be acceptable. Still, I would rather consider partitioning the data and using SPARQL query federation than fusing edges together to reduce resource requirements. I say that because I reach for semantic web technologies when I think radical data interoperability and serendipity would be valuable.
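A sketch of that kind of federation (the endpoint URLs are hypothetical):

PREFIX gist: <https://ontologies.semanticarts.com/gist/>

# ask each partition's endpoint and merge the answers
SELECT ?s ?name WHERE {
  { SERVICE <https://partition-a.example/sparql> { ?s gist:name ?name } }
  UNION
  { SERVICE <https://partition-b.example/sparql> { ?s gist:name ?name } }
}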

Fused edges and radical data interoperability don’t go together. Fused edges are about the use cases you currently know about and the data you currently have. Graphs with thoughtful points of articulation are about the use cases you know about, those you discover tomorrow, and about potential data. Points of articulation in a graph suggest enrichment opportunities and new questions.

Schema.org

Schema.org is a well known ontology that unfortunately has lots of fused edges.

If you run this SPARQL query against schema.ttl you’ll see some examples.

PREFIX schema: <https://schema.org/>
PREFIX rdfs:   <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?s ?com
WHERE {
  GRAPH ?g {
    ?s rdfs:comment ?com .
    ?s schema:rangeIncludes schema:URL .
    MINUS {
      ?s schema:rangeIncludes ?o
      FILTER ( ?o != schema:URL )
    }
  }
}

That query finds properties that are intended to have only instances of schema:URL in the object position.

You get these bindings:

  • https://schema.org/sameAs: URL of a reference Web page that unambiguously indicates the item’s identity. E.g. the URL of the item’s Wikipedia page, Wikidata entry, or official website.
  • https://schema.org/additionalType: An additional type for the item, typically used for adding more specific types from external vocabularies in microdata syntax. This is a relationship between something and a class that the thing is in. In RDFa syntax, it is better to use the native RDFa syntax – the ‘typeof’ attribute – for multiple types. Schema.org tools may have only weaker understanding of extra types, in particular those defined externally.
  • https://schema.org/codeRepository: Link to the repository where the un-compiled, human readable code and related code is located (SVN, github, CodePlex).
  • https://schema.org/contentUrl: Actual bytes of the media object, for example the image file or video file.
  • https://schema.org/discussionUrl: A link to the page containing the comments of the CreativeWork.
  • https://schema.org/downloadUrl: If the file can be downloaded, URL to download the binary.
  • https://schema.org/embedUrl: A URL pointing to a player for a specific video. In general, this is the information in the `src` element of an `embed` tag and should not be the same as the content of the `loc` tag.
  • https://schema.org/installUrl: URL at which the app may be installed, if different from the URL of the item.
  • https://schema.org/map: A URL to a map of the place.
  • https://schema.org/maps: A URL to a map of the place.
  • https://schema.org/paymentUrl: The URL for sending a payment.
  • https://schema.org/relatedLink: A link related to this web page, for example to other related web pages.
  • https://schema.org/replyToUrl: The URL at which a reply may be posted to the specified UserComment.
  • https://schema.org/serviceUrl: The website to access the service.
  • https://schema.org/significantLinks: The most significant URLs on the page. Typically, these are the non-navigation links that are clicked on the most.
  • https://schema.org/significantLink: One of the more significant URLs on the page. Typically, these are the non-navigation links that are clicked on the most.
  • https://schema.org/targetUrl: The URL of a node in an established educational framework.
  • https://schema.org/thumbnailUrl: A thumbnail image relevant to the Thing.
  • https://schema.org/trackingUrl: Tracking url for the parcel delivery.
  • https://schema.org/url: URL of the item.

You can see that most of those object properties are fused edges.

e.g.

schema:paymentUrl fuses together hasPayment and url

schema:trackingUrl fuses together hasTracking and url

schema:codeRepository fuses together hasCodeRepository and url

etc.

I think each of those named shortcuts would be fine if they were built up from primitives like

:codeRepositoryURL  owl:propertyChainAxiom  ( :hasCodeRepository  :url ) .

but I might not put them in core Schema.org because then what stops people from thinking all their favorite named shortcuts belong in core Schema.org?

Also if you run that same query with schema:Place (instead of schema:URL) you can see many more fused properties. Maybe I’ll do another post where I catalog all the fused properties in Schema.org.

Wrap it up

If you find yourself in the position of building an ontology (the T-box) then remember that the object properties you create will shape the way domain modelers think about decomposing their data. An ontology with composable object/data properties, such as Gist, encourages domain modelers to use points of articulation in their graphs. You can always later define object properties that build upon the more primitive and composable object properties but once you start fusing edges it could be hard to reel it in.

Please consider not using fused edges and instead use an ontology that encourages the thoughtful use of points (nodes) of articulation. I don’t see how the semantic web can turn down any stereo’s volume when you get a phone call without thoughtful points of articulation.

Final Appeal

If you believe you must use an edge like :venueName then please put something like this in your T-box: :venueName owl:propertyChainAxiom ( :occursIn :name ) .

Appendix

schema.org way (fused edges)

[ a schema:CreativeWork ;
  a wd:Q1886349 ; # Logo 
  schema:url  "https://i.imgur.com/46JjPLl.jpg" ;
  rdfs:label "Shipwreck Cafe Logo" ;
  schema:discussionUrl  "https://gist.github.com/justin2004/183add3d617105cc9cc7cee013d44198" ]

points of articulation way

[ a schema:UserComments ;
  schema:url "https://gist.github.com/justin2004/183add3d617105cc9cc7cee013d44198" ; 
  schema:discusses [ a schema:CreativeWork ;
                     a wd:Q1886349 ; # Logo 
                     rdfs:label "Shipwreck Cafe Logo" ;
                     schema:url  "https://i.imgur.com/46JjPLl.jpg"
                   ]
]
wd:Q113149564 schema:logo "https://i.imgur.com/46JjPLl.jpg" .

schema:discussionUrl is really a shorthand for the property path: (^schema:discusses)/schema:url. So it is 2 edges fused together in such a way that you can’t reference the node in the middle: the discussion itself. If you can’t reference the node in the middle (the discussion itself) you can’t say when it started, when it ended, who the participants were, etc.
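In query terms, the two patterns below bind ?url the same way, but only the articulated form exposes the discussion node for further statements:

# fused: one edge, no discussion node to annotate
?work schema:discussionUrl ?url .

# articulated: the same information, but the discussion is addressable
?discussion schema:discusses ?work ;
            schema:url ?url .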

Oh, I think the reason Schema.org has so many fused edges is that it is designed as a way to add semantics to webpages. A webpage is a document… which is often a bag of information. So a fused edge leaving a bag of information doesn’t seem like such a sin. But, personally, that makes me want to do more than attempt to hang semantics off of a bag of information.
