Ontology engineering

Session 3.4

Time: Wednesday, September 13, 2017 - 10:15 to 11:15
Place: Room 9

Talks

Industry

Developing a Medicines Catalogue using Linked Data Sources

This presentation describes the issues that led to the creation and setup of a terminology service, the datasets used to provide the information to users, and how the datasets are linked together. The challenges and lessons learnt from the project will also be discussed. We will show what is presented to a user and the benefits they receive, and conclude with planned future developments.
The presentation will begin with a description of Healthdirect, our mission, and the services we provide. We began using PoolParty several years ago to manage a health thesaurus. The thesaurus is used to classify content and improve users' search experience by returning more semantically relevant results. We discovered, through user research, that many users were searching for medicines information that we did not have. Our solution was to identify relevant and authoritative Australian medicine sources, ingest the data in RDF format into a terminology service, map the relevant relationships, and then aggregate the information for display to users. This has extended the use of the health thesaurus we manage in PoolParty, making it more integral as a linked data source rather than just a tool to facilitate search.
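As a rough illustration of this kind of pipeline (not Healthdirect's actual implementation), the Python sketch below uses rdflib to load RDF sources into a graph and aggregate linked medicine information with a SPARQL query. The file names, the use of skos:prefLabel and skos:exactMatch as the labelling and mapping predicates, and the shape of the query are illustrative assumptions.

# A minimal sketch, assuming hypothetical Turtle exports of a medicines
# dataset and a PoolParty thesaurus linked via skos:exactMatch.
from rdflib import Graph

g = Graph()
g.parse("medicines.ttl", format="turtle")         # hypothetical authoritative medicine source
g.parse("health-thesaurus.ttl", format="turtle")  # hypothetical thesaurus export

# Aggregate each medicine's label together with the thesaurus concept
# it is mapped to, so both can be displayed to the user.
query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?medicine ?label ?concept ?conceptLabel WHERE {
    ?medicine skos:prefLabel ?label ;
              skos:exactMatch ?concept .
    ?concept  skos:prefLabel ?conceptLabel .
}
"""
for row in g.query(query):
    print(row.medicine, row.label, row.concept, row.conceptLabel)

In practice the mapping step would use whichever predicates the source vocabularies actually provide; the point of the sketch is that once the sources are in one graph, aggregation reduces to a single federated-style query.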

Research & Innovation

Siamese Network with Soft Attention for Semantic Text Understanding

We propose a task-independent neural network model based on a Siamese twin architecture. The model benefits from two forms of attention, used to extract high-level feature representations of the underlying texts both at the word level (intra-attention) and at the sentence level (inter-attention). The inter-attention scheme uses one of the texts to create a contextual interlock with the other, thus paying attention to mutually important parts. We evaluate our system on three tasks: textual entailment, paraphrase detection, and answer-sentence selection. We achieve a near state-of-the-art result on textual entailment with the SNLI corpus while obtaining strong performance across the other tasks.
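As a rough sketch of the described architecture (not the authors' implementation), the Python/PyTorch snippet below shows a shared Siamese encoder with word-level intra-attention pooling and an inter-attention step in which each text's summary vector attends over the other text's words. All layer sizes, the matching features, and the three-way classification head are illustrative assumptions.

# A minimal sketch of a Siamese encoder with intra- and inter-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseAttentionModel(nn.Module):
    def __init__(self, vocab_size=10000, dim=128, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.intra = nn.Linear(dim, 1)               # scores each word for intra-attention
        self.classify = nn.Linear(4 * dim, n_classes)

    def encode(self, tokens):
        # Shared (Siamese) encoder: embeddings pooled by intra-attention weights.
        e = self.embed(tokens)                                   # (B, S, D)
        w = F.softmax(self.intra(e).squeeze(-1), dim=1)          # (B, S)
        return e, (w.unsqueeze(-1) * e).sum(dim=1)               # word states, sentence vector

    def inter_attend(self, summary, other_words):
        # Inter-attention: one text's summary vector scores the other text's words.
        scores = torch.bmm(other_words, summary.unsqueeze(-1)).squeeze(-1)  # (B, S)
        w = F.softmax(scores, dim=1).unsqueeze(-1)
        return (w * other_words).sum(dim=1)                      # (B, D)

    def forward(self, text_a, text_b):
        words_a, vec_a = self.encode(text_a)
        words_b, vec_b = self.encode(text_b)
        ctx_a = self.inter_attend(vec_b, words_a)   # text B attends over text A's words
        ctx_b = self.inter_attend(vec_a, words_b)   # text A attends over text B's words
        pair = torch.cat([ctx_a, ctx_b, torch.abs(ctx_a - ctx_b), ctx_a * ctx_b], dim=1)
        return self.classify(pair)                  # e.g. entailment / neutral / contradiction

model = SiameseAttentionModel()
premise = torch.randint(0, 10000, (4, 12))          # batch of token-id sequences
hypothesis = torch.randint(0, 10000, (4, 9))
logits = model(premise, hypothesis)                 # (4, 3) class scores

Because the encoder weights are shared between the two inputs, the same model can be applied unchanged to entailment, paraphrase detection, or answer-sentence selection by swapping the classification head.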