Semantic Web
Extending C# to understand the language of the semantic web
Feb 5th
I was inspired by a question on semanticoverflow.com which asked if there was a language in which the concepts of the Semantic Web could be expressed directly, i.e. you could write statements and perform reasoning directly in the code without lots of parentheses, strings and function calls.
Of course the big issue with putting the semantic web into .NET is the lack of multiple inheritance. In the semantic web the class ‘lion’ can inherit from the ‘big cat’ class, from the ‘carnivorous animals’ class, from the ‘furry creatures’ class and so on. In C# you have to pick one base class and implement the rest as interfaces. But since C# 4.0 we have the dynamic type. Could that be used to simulate multiple inheritance and to build objects that behave like their semantic web counterparts?
The DynamicObject in C# allows us to perform late binding and essentially to add methods and properties at runtime. Could I use that so you can write a statement like “canine.subClassOf.mammal();” which would be a complete Semantic Web statement like you might find in a normal triple store but written in C# without any ‘mess’ around it. Could I use that same syntax to query the triple store to ask questions like “if (lion.subClassOf.animal) …” where a statement without a method invocation would be a query against the triple store using a reasoner capable of at least simple transitive closure? Could I also create a syntax for properties so you could say “lion.Color(“yellow”)” to set a property called Color on a lion?
Well, after one evening of experimenting I have found a way to do just that. Without any other declarations you can write code like this:
dynamic g = new Graph("graph");

// this line declares both a mammal and an animal
g.mammal.subClassOf.animal();

// we can add properties to a class
g.mammal.Label("Mammal");

// add a subclass below that
g.carnivore.subClassOf.mammal();

// create the cat family
g.felidae.subClassOf.carnivore();

// define what the wild things are - a separate hierarchy of things
g.wild.subClassOf.domesticity();

// back to the cat family tree
g.pantherinae.subClassOf.felidae();

// these ones are all wild (multiple inheritance at work!)
g.pantherinae.subClassOf.wild();
g.lion.subClassOf.pantherinae();

// experiment with properties
// these are stored directly on the object not in the triple store
g.lion.Color("Yellow");

// complete the family tree for this branch of the cat family
g.tiger.subClassOf.pantherinae();
g.jaguar.subClassOf.pantherinae();
g.leopard.subClassOf.pantherinae();
g.snowLeopard.subClassOf.leopard();
Behind the scenes, dynamic objects are used to construct partial statements and then full statements, and those full statements are added to the graph. Note that I’m not using full Uris here because they wouldn’t work syntactically, but there’s no reason each entity couldn’t be given a Uri property behind the scenes that is local to the graph containing it.
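The published code has the details, but the core trick can be sketched in a few lines. This is a minimal illustration, not the actual library code: each dotted member access returns a new DynamicObject carrying the chain built so far, and a method invocation completes the statement.

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

// Sketch only: a Graph whose dynamic member accesses build partial
// statements, completed by a method invocation.
public class Graph : DynamicObject
{
    public readonly string Name;
    public readonly List<(string S, string P, string O)> Triples = new();

    public Graph(string name) { Name = name; }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        // g.mammal -> a partial statement holding just a subject
        result = new Partial(this, binder.Name);
        return true;
    }

    class Partial : DynamicObject
    {
        readonly Graph graph; readonly string subject; readonly string predicate;
        public Partial(Graph g, string s, string p = null) { graph = g; subject = s; predicate = p; }

        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            // g.mammal.subClassOf -> subject + predicate, awaiting an object
            result = new Partial(graph, subject, binder.Name);
            return true;
        }

        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
        {
            if (predicate == null)
                // g.lion.Color("Yellow") -> a property-style statement
                graph.Triples.Add((subject, binder.Name, args[0].ToString()));
            else
                // g.mammal.subClassOf.animal() -> a complete statement, added to the graph
                graph.Triples.Add((subject, predicate, binder.Name));
            result = null;
            return true;
        }
    }
}
```

Because the runtime prefers real members over dynamic ones, declared members like `Triples` remain accessible while everything else falls through to the triple builder. The query side (returning proofs rather than storing) is omitted here for brevity.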
Querying works as expected: just write the semantic statement you want to test. One slight catch is that I’ve made the query return an enumeration of the proof steps used to prove it rather than just a simple bool value. So use `.Any()` on it to see if there is any proof.
// Note that we never said that cheeta is a mammal directly.
// We need to use inference to get the answer.
// The result is an enumeration of all the ways to prove that
// a cheeta is a mammal
var isCheetaAMammal = g.cheeta.subClassOf.mammal;

// we use .Any() just to see if there's a way to prove it
Console.WriteLine("Cheeta is a mammal : " + isCheetaAMammal.Any());
Behind the scenes the simple statement “g.cheeta.subClassOf.mammal” takes the statements in the graph and expands the subject and object using a logical argument process known as simple entailment. The explanation it might give for this query is:
because [cheeta.subClassOf.felinae], [felinae.subClassOf.felidae], [felidae.subClassOf.mammal]
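The entailment step amounts to a transitive closure over the subClassOf edges, collecting the statements used along the way. Here is a hedged sketch of that idea (the names `Reasoner`, `Prove` and the dictionary representation are illustrative, not the real API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of simple entailment as a transitive-closure walk: follow
// subClassOf edges from the subject and yield each chain of statements
// that reaches the target class. Assumes an acyclic class hierarchy.
public static class Reasoner
{
    // edges: child -> parents (a class may have many parents - multiple inheritance)
    public static IEnumerable<List<string>> Prove(
        Dictionary<string, List<string>> edges, string subject, string target,
        List<string> path = null)
    {
        path ??= new List<string>();
        if (!edges.TryGetValue(subject, out var parents)) yield break;
        foreach (var parent in parents)
        {
            var step = $"{subject}.subClassOf.{parent}";
            if (parent == target)
                yield return path.Append(step).ToList();   // one complete proof
            else
                foreach (var proof in Prove(edges, parent, target, path.Append(step).ToList()))
                    yield return proof;
        }
    }
}
```

With the cheeta → felinae → felidae → mammal chain loaded, `Prove(edges, "cheeta", "mammal").Any()` is true and the first proof is exactly the three-step explanation shown above.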
As you can see, integrating Semantic Web concepts [almost] directly into the programming language is a pretty powerful idea. We are still nowhere close to the syntactic power of Prolog or F#, but I was surprised how far vanilla C# could get with dynamic types and a fluent builder. I hope to explore this further and to publish the code sometime. It may well be “the world’s smallest triple store and reasoner”!
This code will hopefully also allow folks wanting to experiment with core semantic web concepts to do so without the ‘overhead’ of a full-blown triple store, reasoner and lots of RDF and angle brackets! When I first came to the Semantic Web I was amazed how much emphasis there was on serialization formats (which are boring to most software folks) and how little there was on language features and algorithms for manipulating graphs (the interesting stuff). With this experiment I hope to create code that focuses on the interesting bits.
The same concept could be applied to other in-memory graphs, allowing a fluent, dynamic way to represent graph structures in code. There’s also no reason it has to be limited to in-memory graphs: the code could equally well store all statements in some external triple store.
The code for this experiment is available on bitbucket: https://bitbucket.org/ianmercer/semantic-fluent-dynamic-csharp
A Semantic Web ontology / triple Store built on MongoDB
Jan 5th
In a previous blog post I discussed building a Semantic Triple Store using SQL Server. That approach works fine, but I’m struck by how many joins are needed to get any results from the data, and as I look to storing much larger ontologies containing billions of triples there are many potential scalability issues with it. So over the past few evenings I tried a different approach and created a semantic store based on MongoDB. In the MongoDB version of my semantic store I take a different approach to storing the basic building blocks of semantic knowledge representation. For starters, I decided that typical ABox and TBox knowledge has quite different storage requirements, and that smashing all the complex TBox assertions into simple triples, stringing them together with meta fields, only to immediately join them back up whenever needed, just seemed like a bad idea from the NOSQL / document-database perspective.
TBox/ABox: In the ABox you typically find simple triples of the form X-predicate-Y. These store simple assertions about individuals and classes. In the TBox you typically find complex sequents, that’s to say complex logic statements having a head (or consequent) and a body (or antecedents). The head is ‘entailed’ by the body, which means that if you can satisfy all of the body statements then the head is true. In a traditional store all the ABox assertions can be represented as triples and all the complex TBox assertions use quads with a meta field that is used solely to rebuild the sequent with a head and a body. The ABox/TBox distinction is however arbitrary (see http://www.semanticoverflow.com/questions/1107/why-is-it-necessary-to-split-reasoning-into-t-box-and-a-box).
I also decided that I wanted to use ObjectIds as the primary way of referring to any Entity in the store. Using the full Uri for every Entity is of course possible, and MongoDB could have used that as the index, but I wanted to make this efficient and easily shardable across multiple MongoDB servers. The MongoDB ObjectId is ideal for that purpose and makes queries and indexing more efficient.
The first step then was to create a collection that would hold Entities and would permit the mapping from Uri to ObjectId. That was easy: an Entity type inheriting from a Resource type produces a simple document like the one shown below. An index on Uri with a unique condition ensures that it’s easy to look up any Entity by Uri and that there can only ever be one mapping to an Id for any Uri.
RESOURCES COLLECTION - SAMPLE DOCUMENT

{
  "_id": "4d243af69b1f26166cb7606b",
  "_t": "Entity",
  "Uri": "http://www.w3.org/1999/02/22-rdf-syntax-ns#first"
}
Although I should use a proper Uri for every Entity, I also decided to allow arbitrary strings here, so if you are building a simple ontology that never needs to go beyond the bounds of this one system you can forgo namespaces and http:// prefixes and just put a string there, e.g. “SELLS”. Since every Entity reference is immediately mapped to an Id, and that Id is used throughout the rest of the system, it really doesn’t matter much.
The next step was to represent simple ABox assertions. Rather than storing each assertion as its own document I created a document that can hold several assertions all related to the same subject. Of course, if there are too many assertions you’ll still need to split them up into separate documents, but that’s easy to do. This move was mainly a convenience for developing the system, as it makes it easy to look at all the assertions made concerning a single Entity using MongoVue or the Mongo command line interface, but I’m hoping it will also help performance since typical access patterns need to bring in all of the statements concerning a given Entity.
Where a statement requires a literal the literal is stored directly in the document and since literals don’t have Uris there is no entry in the resources collection.
To make searches for statements easy and fast I added an array field “SPO” which stores the set of all Ids mentioned anywhere in any of the statements in the document. This array is indexed in MongoDB using the array indexing feature which makes it very efficient to find and fetch every document that mentions a particular Entity. If the Entity only ever appears in the subject position in statements that search will result in possibly just one document coming back which contains all of the assertions about that Entity. For example:
STATEMENTGROUPS COLLECTION - SAMPLE DOCUMENT

{
  "_id": "4d243af99b1f26166cb760c6",
  "SPO": [
    "4d243af69b1f26166cb7606f",
    "4d243af69b1f26166cb76079",
    "4d243af69b1f26166cb7607c"
  ],
  "Statements": [
    {
      "_id": "4d243af99b1f26166cb760c5",
      "Subject":   { "_t": "Entity", "_id": "4d243af69b1f26166cb7606f", "Uri": "GROCERYSTORE" },
      "Predicate": { "_t": "Entity", "_id": "4d243af69b1f26166cb7607c", "Uri": "SELLS" },
      "Object":    { "_t": "Entity", "_id": "4d243af69b1f26166cb76079", "Uri": "DAIRY" }
    }
    ... more statements here ...
  ]
}
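The SPO array is essentially an inverted index: each document lists every entity id it mentions, and MongoDB’s multikey index over that array answers “every document mentioning entity X” in one indexed lookup. The same idea can be sketched in memory with a plain dictionary (class and member names here are illustrative only):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// In-memory sketch of the SPO-array idea: each statement group carries a
// de-duplicated list of all entity ids it mentions, and looking up an
// entity becomes a single membership test. MongoDB's multikey index plays
// the role of the dictionary below.
public class StatementGroupIndex
{
    // entity id -> ids of the documents whose SPO array contains it
    readonly Dictionary<string, HashSet<string>> index = new();

    public void Add(string documentId, IEnumerable<string> spo)
    {
        foreach (var entityId in spo.Distinct())
        {
            if (!index.TryGetValue(entityId, out var docs))
                index[entityId] = docs = new HashSet<string>();
            docs.Add(documentId);
        }
    }

    // all documents that mention the entity anywhere
    // (subject, predicate or object position)
    public IEnumerable<string> DocumentsMentioning(string entityId) =>
        index.TryGetValue(entityId, out var docs) ? docs : Enumerable.Empty<string>();
}
```

If an entity only ever appears in the subject position, the lookup typically returns a single document holding every assertion about it, which matches the access pattern described above.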
The third and final collection I created is used to store TBox sequents consisting of a head (consequent) and a body (antecedents). Once again I added an array which indexes all of the Entities mentioned anywhere in any of the statements used in the sequent. Below that I have an array of Antecedent statements and then a single Consequent statement. Although the statements don’t really need the full serialized version of an Entity (all they need is the _id), I include the Uri and type for each Entity for now. Variables also have Id values, but unlike Entities, variables are not stored in the Resources collection; they exist only in the Rule collection as part of consequent statements. Variables have no meaning outside a consequent unless they are bound to some other value.
RULE COLLECTION - SAMPLE DOCUMENT

{
  "_id": "4d243af99b1f26166cb76102",
  "References": [
    "4d243af69b1f26166cb7607d",
    "4d243af99b1f26166cb760f8",
    "4d243af99b1f26166cb760fa",
    "4d243af99b1f26166cb760fc",
    "4d243af99b1f26166cb760fe"
  ],
  "Antecedents": [
    {
      "_id": "4d243af99b1f26166cb760ff",
      "Subject":   { "_t": "Variable", "_id": "4d243af99b1f26166cb760f8", "Uri": "V3-Subclass8" },
      "Predicate": { "_t": "Entity",   "_id": "4d243af69b1f26166cb7607d", "Uri": "rdfs:subClassOf" },
      "Object":    { "_t": "Variable", "_id": "4d243af99b1f26166cb760fa", "Uri": "V3-Class9" }
    },
    {
      "_id": "4d243af99b1f26166cb76100",
      "Subject":   { "_t": "Variable", "_id": "4d243af99b1f26166cb760fa", "Uri": "V3-Class9" },
      "Predicate": { "_t": "Variable", "_id": "4d243af99b1f26166cb760fc", "Uri": "V3-Predicate10" },
      "Object":    { "_t": "Variable", "_id": "4d243af99b1f26166cb760fe", "Uri": "V3-Something11" }
    }
  ],
  "Consequent": {
    "_id": "4d243af99b1f26166cb76101",
    "Subject":   { "_t": "Variable", "_id": "4d243af99b1f26166cb760f8", "Uri": "V3-Subclass8" },
    "Predicate": { "_t": "Variable", "_id": "4d243af99b1f26166cb760fc", "Uri": "V3-Predicate10" },
    "Object":    { "_t": "Variable", "_id": "4d243af99b1f26166cb760fe", "Uri": "V3-Something11" }
  }
}
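How do the variables in a rule get their meaning? By pattern matching: unifying an antecedent against a concrete statement produces a set of bindings, and a variable reused within the rule must bind consistently. A minimal sketch of that step (variables written with a leading `?` purely for illustration; in the store they are documents with `_t` of `Variable`):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the matching step that gives a rule's variables their meaning:
// a pattern triple is matched against a fact triple, producing bindings
// (or null if they cannot be made to agree).
public static class RuleMatcher
{
    public static Dictionary<string, string> Match(
        (string S, string P, string O) pattern, (string S, string P, string O) fact)
    {
        var bindings = new Dictionary<string, string>();
        bool Bind(string term, string value)
        {
            if (!term.StartsWith("?")) return term == value;                   // an Entity must match exactly
            if (bindings.TryGetValue(term, out var bound)) return bound == value; // a reused variable must agree
            bindings[term] = value;                                            // first use binds the variable
            return true;
        }
        return Bind(pattern.S, fact.S) && Bind(pattern.P, fact.P) && Bind(pattern.O, fact.O)
            ? bindings
            : null;
    }
}
```

Matching `(?Subclass, rdfs:subClassOf, ?Class)` against `(lion, rdfs:subClassOf, pantherinae)` binds `?Subclass` to lion and `?Class` to pantherinae; those bindings then flow into the other antecedents and, once everything is satisfied, into the consequent.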
That is essentially the whole semantic store. I connected it up to a reasoner and have successfully run a few test cases against it. Next time I get a chance to experiment with this technology I plan to try loading a larger ontology and will rework the reasoner so that it can work directly against the database instead of taking in-memory copies of most queries that it performs.
At this point this is JUST AN EXPERIMENT but hopefully someone will find this blog entry useful. I hope later to connect this up to the home automation system so that it can begin reasoning across an ontology of the house and a set of ABox assertions about its current and past state.
Since I’m still relatively new to the semantic web I’d welcome feedback on this approach to storing ontologies in NOSQL databases from any experienced semanticists.
Hybrid Ontology + Relational Store with SQL Server
May 25th
There are many references in the literature to exposing existing SQL data sources as RDF. This is certainly one way to integrate existing databases with semantic reasoning tools, but it clearly requires a lot more storage and processing than simply keeping the data in SQL and querying over it directly. So recently I began some experiments to create a hybrid store by merging an ontology triple (quad) store with an existing database. By linking each row in other SQL tables to an Entity in the triple store I can take advantage of their existing columns, indexes, relationships etc. whilst also being able to reason over them. The first part of this is now working: Entities can be derived types stored in separate SQL tables linked only by an Id. I am now moving on to putting the metadata in place that will expose to the ontology store all of the implied relationships that can be derived from an existing row-structured database – not as duplicated information but as a service that the reasoner will use to get statements about the SQL content. Clearly this will require changes in both the reasoner and the store, but I think the net effect will be a much more efficient reasoner able to reason over large volumes of structured information quickly without having to first turn everything into a statement triple.
An ontology triple (quad) store for RDF/OWL using Entity Framework 4
May 12th
This week’s side-project was the creation of an ontology store using Entity Framework 4. An ontology store holds axioms consisting of Subject, Predicate and Object, which are usually serialized as RDF, OWL, N3, … While there are lots of details available about these serialization formats, the actual mechanics of how to store and manipulate them were somewhat harder to come by. Nevertheless, after much experimentation I came up with an Entity Model that can store Quads (Subject, Predicate, Object and Meta) or Quins (Subject, Predicate, Object, Meta, Graph). The addition of Meta allows one Axiom to reference another. The addition of Graph allows the store to be segmented, making it easy to import some N3 or RDF into a graph, then flush that graph if it is no longer needed or if a newer version becomes available.
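The shape of the model can be sketched as a pair of plain entity classes (names and types here are illustrative, not the actual EDMX):

```csharp
// Sketch of the quad/quin model: a Resource row per Uri, and an Axiom row
// holding Subject, Predicate and Object references plus the optional Meta
// and Graph fields described above.
public class Resource
{
    public int Id { get; set; }
    public string Uri { get; set; }
}

public class Axiom
{
    public int Id { get; set; }
    public int SubjectId { get; set; }
    public int PredicateId { get; set; }
    public int ObjectId { get; set; }
    public int? MetaId { get; set; }   // optional: lets one Axiom reference another
    public int? GraphId { get; set; }  // optional: segments the store so a graph can be flushed
}
```

A quad is simply an Axiom with GraphId left null; a quin fills in both optional fields.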
The store is currently hooked up to an Euler reasoner that can reason against it, lazily fetching just the necessary records from the SQL database that backs the Entity Model.
Here’s the EDMX showing how I modeled the Ontology Store:
A great video explaining the Semantic Web
May 11th
Posted by Ian Mercer in Commentary
Web 3.0 from Kate Ray on Vimeo.