NLP
C# Natural Language Engine connected to Microsoft Dynamics CRM 2011 Online
Jun 5th
In an earlier post I discussed some ideas around a Semantic CRM.
Recently I've been doing some clean-up work on my C# Natural Language Engine and decided to do a quick test connecting it to a real CRM. As you may know from reading my blog, this natural language engine is already heavily used in my home automation system to control lights, sprinklers, HVAC, music and more, and to query caller ID logs and other information.
I recently refactored it to use the Autofac dependency injection framework and in the process realized just how close my NLP engine is to ASP.NET MVC 3 in its basic structure and philosophy! To use it you create controller classes and put action methods in them. Those controller classes use Autofac to get all of the dependencies they may need (services like an email service, a repository, a user service, an HTML email formatting service, …) and each method in them represents a specific sentence parse using the various token types that the NLP engine supports. Unlike ASP.NET MVC 3 there is no route registration; the method signature itself represents the route (i.e. sentence structure) that is used to decide which method to call. Internally my NLP engine has its own code to match incoming words and phrases to tokens and then on to the action methods. In a sense the engine itself is one big dependency injection framework working against the action methods. I sometimes wish ASP.NET MVC 3 had the same route-registration-free approach to designing web applications (but I also appreciate all the reasons why it doesn't).
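To make that concrete, here is a rough sketch of the kind of controller this describes. The controller name, the service interfaces and the token property names are illustrative assumptions, not the engine's actual API (NLPState, TokenAdd, TokenReminder, TokenPhrase and TokenExactTime are names that appear elsewhere on this blog):

```csharp
// A hypothetical controller: dependencies arrive via constructor injection
// (resolved by Autofac) and each action method's signature *is* the route,
// i.e. the sentence structure that maps to it.
public class ReminderController
{
    private readonly IEmailService emailService;       // assumed service interface
    private readonly IReminderRepository reminders;     // assumed repository

    public ReminderController(IEmailService emailService, IReminderRepository reminders)
    {
        this.emailService = emailService;
        this.reminders = reminders;
    }

    // matches sentences like: add a reminder "call Bruno at 4pm"
    private void AddReminder(NLPState st, TokenAdd add, TokenReminder reminder,
                             TokenPhrase what, TokenExactTime when)
    {
        reminders.Add(what.Text, when.Value);            // Text and Value are assumed properties
        st.Say("OK, I'll remind you to " + what.Text);
    }
}
```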
Another improvement I made recently to the NLP Engine was to develop a connector for the Twilio SMS service. This means that my home automation system can now accept SMS messages as well as all the other communication formats it supports: email, web chat, XMPP chat and direct URL commands. My Twilio connector to NLP supports message splitting and batching, so it will buffer outgoing messages until they reach the limit of a single SMS and then send them. This lowers SMS charges and also allows responses that are longer than a single SMS message.
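The batching idea itself is simple. Here is a minimal sketch of it; the 160-character limit, the class and member names are assumptions for illustration, and the actual Twilio call is abstracted behind a delegate:

```csharp
using System;
using System.Text;

// Buffers outgoing replies so several short messages can share one SMS, and
// splits anything longer than a single SMS into consecutive segments.
public class SmsBatcher
{
    private const int MaxSmsLength = 160;                // assumed single-SMS limit
    private readonly StringBuilder buffer = new StringBuilder();
    private readonly Action<string> send;                // e.g. a call into the Twilio REST API

    public SmsBatcher(Action<string> send) { this.send = send; }

    public void Queue(string message)
    {
        // flush first if adding this message would overflow a single SMS
        if (buffer.Length > 0 && buffer.Length + 1 + message.Length > MaxSmsLength)
            Flush();
        if (buffer.Length > 0) buffer.Append(' ');
        buffer.Append(message);
    }

    public void Flush()
    {
        if (buffer.Length == 0) return;
        string text = buffer.ToString();
        buffer.Clear();
        // a reply longer than one SMS still goes out as several consecutive segments
        for (int i = 0; i < text.Length; i += MaxSmsLength)
            send(text.Substring(i, Math.Min(MaxSmsLength, text.Length - i)));
    }
}
```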
Using this new, improved version of my Natural Language Engine I decided to try connecting it to a CRM. I chose Microsoft Dynamics CRM 2011 and elected to use the strongly-typed, early-bound objects that you can generate for any instance of the CRM service. I added some simple sentences in an NLPRules project that allow you to tell it who you met, and to input some of their details. Unlike a traditional forms-based approach the user can decide what information to enter and what order to enter it in. The Natural Language Engine supports the concept of a conversation and can remember what you were discussing, allowing a much more natural style of conversation than some simple rule-based engines, and even allowing it to ask questions and get answers from the user.
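As a sketch only (the token names and the Remember call are assumptions; Contact and the generated OrganizationServiceContext are the standard CRM 2011 early-bound pieces), a rule for "I met …" might create a contact and keep it around as conversation context:

```csharp
// Hypothetical sentence rule: "I met John Smith"
private void MetPerson(NLPState st, TokenMet met, TokenPersonName who)
{
    var contact = new Contact
    {
        FirstName = who.FirstName,     // assumed properties on the name token
        LastName = who.LastName
    };

    crmContext.AddObject(contact);     // crmContext: the generated OrganizationServiceContext
    crmContext.SaveChanges();

    st.Remember(contact);              // assumed: later sentences ("his email is ...") update it
    st.Say("OK, I've added " + who.FirstName + ". Tell me more about them.");
}
```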
Here's a screenshot showing a sample conversation using Google Talk (XMPP/Jabber) and the resulting CRM record in Microsoft CRM 2011 Online. You could have the same conversation over SMS or email.
Based on my limited testing this looks like another promising area where a truly fluent, conversational-style natural language engine could play a significant role. Note how it understands email addresses, phone numbers and the like, and in code these all become strongly-typed objects. Where it really excels is in temporal expressions: it can understand things like "who called on a Saturday in May last year?" and can construct an efficient SQL query from that.
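To illustrate what that means (this is just the shape of the result, not the engine's code; CallRecord and its Time property are assumed), a temporal expression like "a Saturday in May last year" collapses to a handful of concrete day ranges, which then become a simple indexed range filter, whether in LINQ or in generated SQL:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static IEnumerable<CallRecord> CallsOnASaturdayInMayLastYear(IEnumerable<CallRecord> callLog)
{
    int year = DateTime.Today.Year - 1;

    // "a Saturday in May last year" -> the concrete days it could mean
    var saturdays = Enumerable.Range(1, DateTime.DaysInMonth(year, 5))
        .Select(day => new DateTime(year, 5, day))
        .Where(d => d.DayOfWeek == DayOfWeek.Saturday)
        .ToList();

    // ...and those days become a plain range filter over the call log
    return callLog.Where(c => saturdays.Any(s => c.Time >= s && c.Time < s.AddDays(1)));
}
```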
Extending C# to understand the language of the semantic web
Feb 5th
I was inspired by a question on semanticoverflow.com which asked if there was a language in which the concepts of the Semantic Web could be expressed directly, i.e. you could write statements and perform reasoning directly in the code without lots of parentheses, strings and function calls.
Of course the big issue with putting the semantic web into .NET is the lack of multiple inheritance. In the semantic web the class ‘lion’ can inherit from the ‘big cat’ class, the ‘carnivorous animals’ class, the ‘furry creatures’ class and so on. In C# you have to pick one and implement the rest as interfaces. But since C# 4.0 we have had the dynamic type. Could that be used to simulate multiple inheritance and to build objects that behave like their semantic web counterparts?
The DynamicObject in C# allows us to perform late binding and essentially to add methods and properties at runtime. Could I use that so you can write a statement like “canine.subClassOf.mammal();” which would be a complete Semantic Web statement like you might find in a normal triple store but written in C# without any ‘mess’ around it. Could I use that same syntax to query the triple store to ask questions like “if (lion.subClassOf.animal) …” where a statement without a method invocation would be a query against the triple store using a reasoner capable of at least simple transitive closure? Could I also create a syntax for properties so you could say “lion.Color(“yellow”)” to set a property called Color on a lion?
Well, after one evening of experimenting I have found a way to do just that. Without any other declarations you can write code like this:
```csharp
dynamic g = new Graph("graph");

// this line declares both a mammal and an animal
g.mammal.subClassOf.animal();

// we can add properties to a class
g.mammal.Label("Mammal");

// add a subclass below that
g.carnivore.subClassOf.mammal();

// create the cat family
g.felidae.subClassOf.carnivore();

// define what the wild things are - a separate hierarchy of things
g.wild.subClassOf.domesticity();

// back to the cat family tree
g.pantherinae.subClassOf.felidae();

// these ones are all wild (multiple inheritance at work!)
g.pantherinae.subClassOf.wild();

g.lion.subClassOf.pantherinae();

// experiment with properties
// these are stored directly on the object not in the triple store
g.lion.Color("Yellow");

// complete the family tree for this branch of the cat family
g.tiger.subClassOf.pantherinae();
g.jaguar.subClassOf.pantherinae();
g.leopard.subClassOf.pantherinae();
g.snowLeopard.subClassOf.leopard();
```
Behind the scenes dynamic objects are used to construct partial statements and then full statements and those full statements are added to the graph. Note that I’m not using full Uri’s here because they wouldn’t work syntactically, but there’s no reason each entity couldn’t be given a Uri property behind the scenes that is local to the graph that’s being used to contain it.
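For anyone curious how the fluent syntax can work at all, here is a minimal sketch (not the published code) covering just the assertion form g.subject.predicate.object(): each member access accumulates a path, and the final invocation completes the triple.

```csharp
using System.Collections.Generic;
using System.Dynamic;

public class Triple
{
    public string Subject, Predicate, Object;
    public Triple(string s, string p, string o) { Subject = s; Predicate = p; Object = o; }
}

public class Graph : DynamicObject
{
    public string Name;
    public readonly List<Triple> Triples = new List<Triple>();

    public Graph(string name) { Name = name; }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        // g.mammal -> a partial statement whose subject is "mammal"
        result = new PartialStatement(this, binder.Name);
        return true;
    }
}

public class PartialStatement : DynamicObject
{
    private readonly Graph graph;
    private readonly List<string> path;

    public PartialStatement(Graph graph, string first)
    {
        this.graph = graph;
        this.path = new List<string> { first };
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        // g.mammal.subClassOf -> extend the path with the predicate
        path.Add(binder.Name);
        result = this;
        return true;
    }

    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        result = null;
        if (path.Count == 1)
        {
            // a property call like g.lion.Color("Yellow") - not covered by this sketch
            return true;
        }
        // g.mammal.subClassOf.animal() -> subject and predicate from the path, object from the call
        graph.Triples.Add(new Triple(path[0], path[1], binder.Name));
        return true;
    }
}
```

Querying (no trailing parentheses) and properties need a little more plumbing, but they follow the same pattern: the last member access returns something enumerable instead of another builder.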
Querying works as expected: just write the semantic statement you want to test. One slight catch is that I’ve made the query return an enumeration of the proof steps used to prove it rather than just a simple bool value. So use `.Any()` on it to see if there is any proof.
```csharp
// Note that we never said that cheeta is a mammal directly.
// We need to use inference to get the answer.
// The result is an enumeration of all the ways to prove that
// a cheeta is a mammal
var isCheetaAMammal = g.cheeta.subClassOf.mammal;

// we use .Any() just to see if there's a way to prove it
Console.WriteLine("Cheeta is a wild cat : " + isCheetaAMammal.Any());
```
Behind the scenes the simple statement "g.cheeta.subClassOf.mammal" expands the subject and object across the statements that have been made, using a logical argument process known as simple entailment. The explanation it gives for this query might be:
because [cheeta.subClassOf.felinae], [felinae.subClassOf.felidae], [felidae.subClassOf.mammal]
As you can see, integrating Semantic Web concepts [almost] directly into the programming language is a pretty powerful idea. We are still nowhere close to the syntactic power of Prolog or F#, but I was surprised how far vanilla C# could get with dynamic types and a fluent builder. I hope to explore this further and to publish the code sometime. It may well be "the world's smallest triple store and reasoner"!
This code will hopefully also allow folks wanting to experiment with core semantic web concepts to do so without the ‘overhead’ of a full-blown triple store, reasoner and lots of RDF and angle brackets! When I first came to the Semantic Web I was amazed how much emphasis there was on serialization formats (which are boring to most software folks) and how little there was on language features and algorithms for manipulating graphs (the interesting stuff). With this experiment I hope to create code that focuses on the interesting bits.
The same concept could be applied to other in-memory graphs, allowing a fluent, dynamic way to represent graph structures in code. There's also no reason it has to be limited to in-memory graphs; the code could equally well store all statements in some external triple store.
The code for this experiment is available on bitbucket: https://bitbucket.org/ianmercer/semantic-fluent-dynamic-csharp
A Semantic Web ontology / triple Store built on MongoDB
Jan 5th
In a previous blog post I discussed building a Semantic Triple Store using SQL Server. That approach works fine, but I'm struck by how many joins are needed to get any results from the data, and as I look at storing much larger ontologies containing billions of triples there are many potential scalability issues with it. So over the past few evenings I tried a different approach and created a semantic store based on MongoDB. In the MongoDB version of my semantic store I take a different approach to storing the basic building blocks of semantic knowledge representation. For starters I decided that typical ABox and TBox knowledge has really quite different storage requirements, and that smashing all the complex TBox assertions into simple triples, stringing them together with meta fields, only to immediately join them back up whenever needed, just seemed like a bad idea from the NOSQL / document-database perspective.
TBox/ABox: In the ABox you typically find simple triples of the form X-predicate-Y. These store simple assertions about individuals and classes. In the TBox you typically find complex sequents, that’s to say complex logic statements having a head (or consequent) and a body (or antecedents). The head is ‘entailed’ by the body, which means that if you can satisfy all of the body statements then the head is true. In a traditional store all the ABox assertions can be represented as triples and all the complex TBox assertions use quads with a meta field that is used solely to rebuild the sequent with a head and a body. The ABox/TBox distinction is however arbitrary (see http://www.semanticoverflow.com/questions/1107/why-is-it-necessary-to-split-reasoning-into-t-box-and-a-box).
I also decided that I wanted to use ObjectIds as the primary way of referring to any Entity in the store. Using the full Uri for every Entity is of course possible, and MongoDB could have used that as the index, but I wanted to make this efficient and easily shardable across multiple MongoDB servers. The MongoDB ObjectId is ideal for that purpose and will make queries and indexing more efficient.
The first step then was to create a collection that would hold Entities and would permit the mapping from Uri to ObjectId. That was easy: an Entity type inheriting from a Resource type produces a simple document like the one shown below. An index on Uri with a unique condition ensures that it’s easy to look up any Entity by Uri and that there can only ever be one mapping to an Id for any Uri.
RESOURCES COLLECTION - SAMPLE DOCUMENT

```json
{
  "_id": "4d243af69b1f26166cb7606b",
  "_t": "Entity",
  "Uri": "http://www.w3.org/1999/02/22-rdf-syntax-ns#first"
}
```
Although I should use a proper Uri for every Entity I also decided to allow arbitrary strings to be used here so if you are building a simple ontology that never needs to go beyond the bounds of this one system you can forgo namespaces and http:// prefixes and just put a string there, e.g. “SELLS”. Since every Entity reference is immediately mapped to an Id and that Id is used throughout the rest of the system it really doesn’t matter much.
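As a sketch of what that Uri-to-Id mapping might look like in code (assuming the 1.x C# MongoDB driver that was current at the time; the class and collection names here are mine, not necessarily the actual ones):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

public class Resource
{
    public ObjectId Id { get; set; }
    public string Uri { get; set; }
}

// serialized with a "_t" discriminator, as in the sample document above
public class Entity : Resource { }

public static class ResourceStore
{
    public static Resource FindByUri(MongoDatabase db, string uri)
    {
        var resources = db.GetCollection<Resource>("resources");

        // the unique index guarantees a single Id per Uri
        resources.EnsureIndex(IndexKeys.Ascending("Uri"), IndexOptions.SetUnique(true));

        // look up the Id mapping for a Uri (e.g. "SELLS" or a full http:// Uri)
        return resources.FindOne(Query.EQ("Uri", uri));
    }
}
```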
The next step was to represent simple ABox assertions. Rather than storing each assertion as its own document I created a document that can hold several assertions all related to the same subject. Of course, if there are too many assertions you'll still need to split them up into separate documents, but that's easy to do. This move was mainly a convenience for developing the system, as it makes it easy to look at all the assertions made concerning a single Entity using MongoVue or the Mongo command line interface, but I'm hoping it will also help performance since typical access patterns need to bring in all of the statements concerning a given Entity.
Where a statement requires a literal the literal is stored directly in the document and since literals don’t have Uris there is no entry in the resources collection.
To make searches for statements easy and fast I added an array field "SPO" which stores the set of all Ids mentioned anywhere in any of the statements in the document. This array is indexed in MongoDB using the array indexing feature, which makes it very efficient to find and fetch every document that mentions a particular Entity. If the Entity only ever appears in the subject position, that search may return just one document containing all of the assertions about that Entity. For example:
STATEMENTGROUPS COLLECTION - SAMPLE DOCUMENT

```json
{
  "_id": "4d243af99b1f26166cb760c6",
  "SPO": [
    "4d243af69b1f26166cb7606f",
    "4d243af69b1f26166cb76079",
    "4d243af69b1f26166cb7607c"
  ],
  "Statements": [
    {
      "_id": "4d243af99b1f26166cb760c5",
      "Subject":   { "_t": "Entity", "_id": "4d243af69b1f26166cb7606f", "Uri": "GROCERYSTORE" },
      "Predicate": { "_t": "Entity", "_id": "4d243af69b1f26166cb7607c", "Uri": "SELLS" },
      "Object":    { "_t": "Entity", "_id": "4d243af69b1f26166cb76079", "Uri": "DAIRY" }
    }
    ... more statements here ...
  ]
}
```
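Finding everything known about an Entity is then a single indexed query against that SPO array. Continuing the earlier sketch with the same 1.x driver assumption (MongoDB matches array fields element by element, so equality against the array means "contains"):

```csharp
// db is the MongoDatabase from the earlier sketch; entityId is the ObjectId of some Entity
var groups = db.GetCollection("statementgroups");
groups.EnsureIndex(IndexKeys.Ascending("SPO"));

// every statement group that mentions the Entity anywhere (subject, predicate or object)
var mentioning = groups.Find(Query.EQ("SPO", entityId));
```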
The third and final collection I created is used to store TBox sequents consisting of a head (consequent) and a body (antecedents). Once again I added an array which indexes all of the Entities mentioned anywhere in any of the statements used in the sequent. Below that I have an array of Antecedent statements and then a single Consequent statement. Although the statements don't really need the full serialized version of an Entity (all they need is the _id), I include the Uri and type for each Entity for now. Variables also have Id values, but unlike Entities, variables are not stored in the Resources collection; they exist only in the Rule collection as part of consequent statements. Variables have no meaning outside a consequent unless they are bound to some other value.
RULE COLLECTION - SAMPLE DOCUMENT

```json
{
  "_id": "4d243af99b1f26166cb76102",
  "References": [
    "4d243af69b1f26166cb7607d",
    "4d243af99b1f26166cb760f8",
    "4d243af99b1f26166cb760fa",
    "4d243af99b1f26166cb760fc",
    "4d243af99b1f26166cb760fe"
  ],
  "Antecedents": [
    {
      "_id": "4d243af99b1f26166cb760ff",
      "Subject":   { "_t": "Variable", "_id": "4d243af99b1f26166cb760f8", "Uri": "V3-Subclass8" },
      "Predicate": { "_t": "Entity",   "_id": "4d243af69b1f26166cb7607d", "Uri": "rdfs:subClassOf" },
      "Object":    { "_t": "Variable", "_id": "4d243af99b1f26166cb760fa", "Uri": "V3-Class9" }
    },
    {
      "_id": "4d243af99b1f26166cb76100",
      "Subject":   { "_t": "Variable", "_id": "4d243af99b1f26166cb760fa", "Uri": "V3-Class9" },
      "Predicate": { "_t": "Variable", "_id": "4d243af99b1f26166cb760fc", "Uri": "V3-Predicate10" },
      "Object":    { "_t": "Variable", "_id": "4d243af99b1f26166cb760fe", "Uri": "V3-Something11" }
    }
  ],
  "Consequent": {
    "_id": "4d243af99b1f26166cb76101",
    "Subject":   { "_t": "Variable", "_id": "4d243af99b1f26166cb760f8", "Uri": "V3-Subclass8" },
    "Predicate": { "_t": "Variable", "_id": "4d243af99b1f26166cb760fc", "Uri": "V3-Predicate10" },
    "Object":    { "_t": "Variable", "_id": "4d243af99b1f26166cb760fe", "Uri": "V3-Something11" }
  }
}
```
That is essentially the whole semantic store. I connected it up to a reasoner and have successfully run a few test cases against it. Next time I get a chance to experiment with this technology I plan to try loading a larger ontology and will rework the reasoner so that it can work directly against the database instead of taking in-memory copies of most queries that it performs.
At this point this is JUST AN EXPERIMENT but hopefully someone will find this blog entry useful. I hope later to connect this up to the home automation system so that it can begin reasoning across an ontology of the house and a set of ABox assertions about its current and past state.
Since I’m still relatively new to the semantic web I’d welcome feedback on this approach to storing ontologies in NOSQL databases from any experienced semanticists.
Hybrid Ontology + Relational Store with SQL Server
May 25th
There are many references in the literature to exposing existing SQL data sources as RDF. This is certainly one way to integrate existing databases with semantic reasoning tools, but it clearly requires a lot more storage and processing than simply keeping the data in SQL and querying over it directly. So recently I began some experiments to create a hybrid store by merging an ontology triple (quad) store with an existing database. By linking each row in other SQL tables to an Entity in the triple store I can take advantage of their existing columns, indexes, relationships etc. whilst also being able to reason over them. The first part of this is now working: Entities can be derived types stored in separate SQL tables, linked only by an Id. I am now moving on to getting the metadata in place that will expose all of the implied relationships that can be derived from an existing row-structured database to the ontology store – not as duplicated information but as a service that the reasoner will use to get statements about the SQL content. Clearly this will require changes in both the reasoner and the store, but I think the net effect will be a much more efficient reasoner able to reason over large volumes of structured information quickly without having to first turn everything into a statement triple.
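One way to picture the linking (an illustrative sketch, not the actual schema): the triple store's Entity table holds the Id and Uri, and each domain type is a derived class persisted to its own table sharing that Id, so SQL keeps its normal columns and indexes while the reasoner can treat every row as an Entity.

```csharp
using System;

// Illustrative table-per-type mapping (class and property names assumed)
public class Entity
{
    public Guid Id { get; set; }          // the shared key the triple store uses
    public string Uri { get; set; }
}

public class Person : Entity              // stored in its own SQL table, joined on Id
{
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}
```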
An ontology triple (quad) store for RDF/OWL using Entity Framework 4
May 12th
This week's side-project was the creation of an ontology store using Entity Framework 4. An ontology store stores axioms consisting of Subject, Predicate, Object, which are usually serialized as RDF, OWL, N3, … While there is plenty of detail available about these serialization formats, the actual mechanics of how to store and manipulate them were somewhat harder to come by. Nevertheless, after much experimentation I came up with an Entity Model that can store Quads (Subject, Predicate, Object and Meta) or Quins (Subject, Predicate, Object, Meta, Graph). The addition of Meta allows one Axiom to reference another. The addition of Graph allows the store to be segmented, making it easy to import some N3 or RDF into a graph, then flush that graph if it is no longer needed or if a newer version becomes available.
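For readers who want to picture the model without opening an EDMX, here is roughly what the quin shape amounts to expressed as classes; the names and key types are my guesses for illustration, not the actual model:

```csharp
public class Resource
{
    public int Id { get; set; }
    public string Uri { get; set; }
}

public class Graph
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Axiom
{
    public int Id { get; set; }
    public virtual Resource Subject { get; set; }
    public virtual Resource Predicate { get; set; }
    public virtual Resource Object { get; set; }
    public virtual Axiom Meta { get; set; }     // lets one axiom reference another
    public virtual Graph Graph { get; set; }    // segments the store for import/flush
}
```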
The store is currently hooked up to an Euler reasoner that can reason against it, lazily fetching just the necessary records from the SQL database that backs the Entity Model.
Here’s the EDMX showing how I modeled the Ontology Store:
Applying the Semantic Web to Home Automation
Apr 26th
Recently I’ve been considering how the Semantic Web will impact home automation.
Technologies like the Web Ontology Language (OWL) and RDF allow for the construction of complex ontologies that define what things are, and how they relate. Using these ontologies automated reasoning can be applied to generate new facts or to prove or disprove assertions.
This sounds like the ideal companion to the Natural Language Processing (NLP) Engine that I have already created for my home automation system. With reasoning powers added to the natural language engine and the ability to augment the knowledge base by adding new assertions, the whole system will be much more powerful. One day it might even be possible to create the entire home definition using a natural language text file and to query the system using rich natural language queries.
So, the first step is to find an existing ontology store and reasoning engine. A quick web search reveals that most are built in Java. There were a couple of links I came across later for .NET: http://razor.occams.info/code/semweb/ and http://www.intellidimension.com/products/semantics-server/. There’s also an interesting Q&A site at http://semanticoverflow.com which has lots of useful information on it.
But rather than starting with some existing library I really wanted to understand more deeply how an ontology store works and how a reasoning engine functions, so over the course of a couple of evenings I created my own. I now have a triple store and a simple reasoning engine. Here's an actual conversation so you can see what it's capable of so far, and perhaps get a glimpse of how powerful this concept could be:
house is a class
contains is a property
contain is a property
contains is the same as contain
contain is the same as contains
contains is transitive
contain is transitive
first floor is a class
room is a class
kitchen is a room
first floor contains kitchen
house contains first floor
does house contain kitchen
House: Yes, house contain kitchen because [house contains first floor] -> [first floor contains kitchen]
As you can see, my semantic store can already represent classes, relationships between classes, new relationships (‘contains’), and relationships between relationships (‘same as’). For such a small amount of code it's quite surprising what this system can now handle in terms of knowledge representation and simple reasoning.
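To show the flavour of the reasoning involved (a sketch of the idea only, not the engine's code, and ignoring the ‘same as’ handling), a transitive predicate plus a proof trail is only a few lines: follow the predicate from subject towards object, remembering each statement used so the answer can include the "because …" chain.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class TinyStore
{
    private readonly List<Tuple<string, string, string>> triples =
        new List<Tuple<string, string, string>>();

    public void Assert(string s, string p, string o) { triples.Add(Tuple.Create(s, p, o)); }

    // Returns the chain of statements proving "subject predicate obj" via transitivity, or null.
    public List<string> Prove(string subject, string predicate, string obj)
    {
        return Prove(subject, predicate, obj, new HashSet<string>());
    }

    private List<string> Prove(string subject, string predicate, string obj, HashSet<string> visited)
    {
        if (!visited.Add(subject)) return null;                  // guard against cycles

        foreach (var t in triples.Where(t => t.Item1 == subject && t.Item2 == predicate))
        {
            var step = "[" + t.Item1 + " " + t.Item2 + " " + t.Item3 + "]";
            if (t.Item3 == obj) return new List<string> { step };

            var rest = Prove(t.Item3, predicate, obj, visited);  // follow the chain
            if (rest != null) { rest.Insert(0, step); return rest; }
        }
        return null;
    }
}

// Usage mirroring the conversation above:
//   var store = new TinyStore();
//   store.Assert("house", "contains", "first floor");
//   store.Assert("first floor", "contains", "kitchen");
//   var proof = store.Prove("house", "contains", "kitchen");
//   // -> [house contains first floor], [first floor contains kitchen]
```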
Next time I get some spare time I’ll hook it up to the actual home model so you can start to query that in much more powerful ways than before.
Stay tuned!
A strongly-typed natural language engine (C# NLP)
Feb 28th
Here is an explanation of the natural language engine that powers my home automation system. It’s a strongly-typed natural language engine with tokens and sentences being defined in code. It currently understands sentences to control lights, heating, music, sprinklers, … You can ask it who called, you can tell it to play music in a particular room, … it tells you when a car comes down the drive, when the traffic is bad on I-90, when there’s fresh snow in the mountains, when it finds new podcasts from NPR, … and much more.
The natural language engine itself is a separate component that I hope one day to use in other applications.
Existing Natural Language Engines
- Have a large, STATIC dictionary data file
- Can parse complex sentence structure
- Hand back a tree of tokens (strings)
- Don’t handle conversations
C# NLP Engine
- Defines strongly-typed tokens in code
- Uses type inheritance to model ‘is a’
- Defines sentences in code
- Rules engine executes sentences
- Understands context (conversation history)
Sample conversation
Goals
- Make it easy to define tokens and sentences (not XML)
- Safe, compile-time checked definition of the syntax and grammar (not XML)
- Model real-world inheritance with C# class inheritance:
- ‘a labrador’ is ‘a dog’ is ‘an animal’ is ‘a thing’
- Handle ambiguity, e.g.
- play something in the air tonight in the kitchen
- remind me at 4pm to call john at 5pm
C# NLP Engine Structure
Tokens – Token Definition
- A hierarchy of Token-derived classes
- Uses inheritance, e.g. TokenOn is a TokenOnOff is a TokenState is a Token. This allows a single sentence rule to handle multiple cases, e.g. On and Off
- Derived from base Token class
- Simple tokens are a set of words, e.g. « is | are »
- Complex tokens have a parser, e.g. TokenDouble
A Simple Token Definition
```csharp
public class TokenPersonalPronoun : TokenGenericNoun
{
    internal static string wordz { get { return "he,him,she,her,them"; } }
}
```
- Recognizes any of the words specified
- Can use inheritance (as in this example)
A Complex Token
```csharp
public abstract class TokenNumber : Token
{
    public static IEnumerable<TokenResult> Initialize(string input)
    {
        …
```
- Initialize method parses input and returns one or more possible parses.
TokenNumber is a good example:
- Parses any numeric value and returns one or more of TokenInt, TokenLong, TokenIntOrdinal, TokenDouble, or TokenPercentage results.
The catch-all TokenPhrase
public class TokenPhrase : Token
TokenPhrase matches anything, especially anything in quote marks
e.g. add a reminder "call Bruno at 4pm"
The sentence signature to recognize this could be
(…, TokenAdd, TokenReminder, TokenPhrase, TokenExactTime)
This would match the rule too …
add a reminder discuss 6pm conference call with Bruno at 4pm
TemporalTokens
A complete set of tokens and related classes for representing time
- Point in time, e.g. today at 5pm
- Approximate time, e.g. who called at 5pm today
- Finite sequence, e.g. every Thursday in May 2009
- Infinite sequence, e.g. every Thursday
- Ambiguous time with context, e.g. remind me on Tuesday (context means it is next Tuesday)
- Null time
- Unknowable/incomprehensible time
TemporalTokens (Cont.)
Code to merge any sequence of temporal tokens to the smallest canonical representation,
e.g.
the first thursday in may 2009
->
{TIMETHEFIRST the first} + {THURSDAY thursday} + {MAY in may} + {INT 2009 -> 2009}
->
[TEMPORALSETFINITESINGLEINTERVAL [Thursday 5/7/2009] ]
TemporalTokens (Cont.)
Finite TemporalClasses provide
All TemporalClasses provide
Existing Token Types
- Numbers (double, long, int, percentage, phone, temperature)
- File names, Directories
- URLs, Domain names
- Names, Companies, Addresses
- Rooms, Lights, Sensors, Sprinklers, …
- States (On, Off, Dim, Bright, Loud, Quiet, …)
- Units of Time, Weight, Distance
- Songs, albums, artists, genres, tags
- Temporal expressions
- Commands, verbs, nouns, pronouns, …
Rules – A simple rule
```csharp
/// <summary>
/// Set a light to a given state
/// </summary>
private static void LightState(NLPState st, TokenLight tlight, TokenStateOnOff ts)
{
    if (ts.IsTrueState == true) tlight.ForceOn(st.Actor);
    if (ts.IsTrueState == false) tlight.ForceOff(st.Actor);
    st.Say("I turned it " + ts.LowerCased);
}
```
Any method matching this signature pattern is a sentence rule: (NLPState, Token*)
Rule matching respects inheritance and allows variable repeats, e.g. (NLPState st, TokenThing tt, TokenState tokenState, TokenTimeConstraint[] constraints)
Rules are discovered on startup using Reflection and an efficient parse graph is built allowing rapid detection and rejection of incoming sentences.
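The discovery step can be surprisingly small. A rough sketch (the parse-graph construction is omitted, and this is not the engine's actual code): scan an assembly for methods whose first parameter is NLPState and whose remaining parameters are Token-derived, and treat each such signature as a sentence rule.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

static IEnumerable<MethodInfo> DiscoverRules(Assembly assembly)
{
    return from type in assembly.GetTypes()
           from method in type.GetMethods(BindingFlags.Static | BindingFlags.Instance |
                                          BindingFlags.Public | BindingFlags.NonPublic)
           let ps = method.GetParameters()
           where ps.Length > 1
                 && ps[0].ParameterType == typeof(NLPState)
                 // remaining parameters must be Tokens (arrays handle variable repeats)
                 && ps.Skip(1).All(p => typeof(Token).IsAssignableFrom(
                        p.ParameterType.IsArray ? p.ParameterType.GetElementType() : p.ParameterType))
           select method;
}
```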
State – NLPState
- Every sentence method takes an NLPState first parameter
- State includes RememberedObject(s) allowing sentences to react to anything that happened earlier in a conversation
- Non-interactive uses can pass a dummy state
- State can be per-user or per-conversation for non-realtime conversations like email
User Interface
Works with a variety of user interfaces:
- Chat (e.g. Jabber/GTalk)
- Web chat
- Calendar (do X at time Y)
- Rich client application
Summary
- Strongly-typed natural language engine
- Compile-time checking, inheritance, …
- Define tokens and sentences (rules) in C#
- Strongly-typed tokens: numbers, percentages, times, dates, file names, URLs, people, business objects, …
- Builds an efficient parse graph
- Tracks conversation history
Future plans
- Expanded corpus of knowledge: company names, locations, documents, …
- Generate iCal/Gdata Recurrence from TimeExpressions
A great video explaining the Semantic Web
May 11th
Posted by Ian in Commentary
Web 3.0 from Kate Ray on Vimeo.