RuleML 2017: Days 2 & 3

RuleML has finished, and it provided a wealth of content on a variety of topics to keep researching. I came away with a lot of fresh ideas on how to improve our Process Engine and Decision Services, and a better sense of the overall direction in which these topics are heading.

The highlight of Day 2 was a tutorial on Description Logics, more specifically on Probabilistic Description Logics, where over an hour and a half we covered the available techniques for dealing with probabilistic data on top of the classical semantics.
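I won't try to reproduce the tutorial here, but to give a flavour of the kind of formalism involved, here is a minimal DISPONTE-style example of my own (the concept and individual names are purely illustrative, not taken from the tutorial). Terminological axioms are annotated with probabilities:

$$0.5 :: \exists hasAnimal.Pet \sqsubseteq NatureLover$$
$$0.6 :: Cat \sqsubseteq Pet$$

together with the certain assertions $tom : Cat$ and $(kevin, tom) : hasAnimal$. The query $kevin : NatureLover$ only follows in the worlds where both annotated axioms are kept, so its probability is $0.5 \times 0.6 = 0.3$.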

There was also a great talk about machine learning for logic-based programming, where machine learning was used to generate programs and to train the networks while the user provides additional constructs.

Day 3 was fully packed with industrial tracks, and the main talk, from the IBM team, was about Machine Learning, Optimization and Rules. They covered how most of our decision solutions are based on a single technology stack, which we push our users to adopt for their problems, when in reality their use cases require several tools to do the job. The main topics covered were how machine learning fuels decision making and how the analytics wave boosts decision making. The idea of converging machine learning and optimisation problems into a single workflow was highlighted as their future direction. They also want to eliminate the need for domain experts to write optimisation models by deriving those models from the data.

There was also a session about DMN conformance and the community-driven TCK (Red Hat, Trisotech and Signavio are currently involved), which described the current workflow for testing DMN conformance. If you are an open source vendor providing a DMN implementation, you should contact the TCK group so your implementation can be validated for conformance against the TCK.

Another talk that caught my attention was about reasoning over legal documents, from the Madrid University group. They are researching how to extract semantic information from legal documents such as contracts and rulings, in order to perform temporal evaluations and answer queries about these documents. This made me think a lot about how similar this is to the work being done by Hospital Italiano in Buenos Aires, Argentina, where they were extracting data from patients' clinical histories and temporal reasoning was the next logical step. All this research and implementation work should be connected by a single framework that enables people to easily define their models, extract information and reason about it.
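As a toy sketch of what that temporal-evaluation step could look like (everything here, from the event names to the dates, is hypothetical and only illustrates the idea), once the semantic information has been extracted, a document can be reduced to a set of intervals and queried with Allen-style relations:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    """An event extracted from a document, reduced to a validity interval."""
    label: str
    start: date
    end: date

def before(a: Event, b: Event) -> bool:
    """Allen's 'before' relation: a ends strictly before b starts."""
    return a.end < b.start

def overlaps(a: Event, b: Event) -> bool:
    """True if the two intervals share at least one day."""
    return a.start <= b.end and b.start <= a.end

# Hypothetical events extracted from a contract and a later ruling.
signature = Event("contract signed", date(2016, 3, 1), date(2016, 3, 1))
obligation = Event("payment obligation", date(2016, 4, 1), date(2017, 4, 1))
dispute = Event("dispute period", date(2017, 1, 15), date(2017, 6, 30))

# Simple temporal queries over the extracted events.
print(before(signature, obligation))   # True: signing precedes the obligation
print(overlaps(obligation, dispute))   # True: the dispute starts within the obligation period
```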

I'm definitely looking forward to participating more actively in next year's edition.