What We Use

Semantic Web and Linked Data

Semantic technology uses formal semantics to help AI systems understand language and process information in a way similar to humans. Systems built on it can store, manage and retrieve information based on meaning and logical relationships. Various businesses already use semantic technology and semantic graph databases to manage their content, repurpose and reuse information, cut costs and open new revenue streams.

What is Semantic Web technology?

Semantic Technology uses formal semantics to give meaning to the disparate and raw data that surrounds us. It enables building relationships between data in various formats and sources, from one string to another, helping build context and creating links out of these relationships. By formalizing meaning independently of data and using W3C’s standards, Semantic Technology enables machines to “understand”, share and reason with data in order to create more value for us, humans.

The core difference between Semantic Technology and other data technologies, such as the relational database, is that it deals with the meaning rather than the structure of the data.

The Semantic Web is a set of standards that promote common data formats and exchange protocols on the Web. It allows data to be shared and reused across application, enterprise and community boundaries, and is therefore regarded as an integrator across different content, information applications and systems.
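At the heart of these standards is the idea of expressing facts as subject–predicate–object statements (RDF triples). As a minimal sketch of that model, here is an in-memory version in plain Python; the `ex:` identifiers and facts are invented for illustration:

```python
# A minimal sketch of RDF-style triples: each fact is a
# (subject, predicate, object) statement, held here as Python tuples.
# The "ex:" identifiers are invented for illustration.
triples = {
    ("ex:Alice", "rdf:type", "ex:Person"),
    ("ex:Alice", "ex:worksFor", "ex:AcmeCorp"),
    ("ex:AcmeCorp", "rdf:type", "ex:Organisation"),
}

def objects(subject, predicate):
    """Return everything linked to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("ex:Alice", "ex:worksFor"))  # {'ex:AcmeCorp'}
```

Because every fact is an explicit, uniform statement, new relationships can be added without changing any schema, which is what makes linking data across sources straightforward.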

The Semantic Web standards enable:

  • solving the data-silo problem that many private and public enterprises face today.

  • embedding semantic information into documents, making originally human-understandable content also machine-understandable.

  • establishing common vocabularies (ontologies) and mappings between vocabularies and knowledge organisation systems. Doing so provides a harmonised common understanding across groups, enterprises and communities of practice.

  • building automatic agents to perform tasks using structured semantic data and metadata.

  • establishing data-as-a-service to provide semantic information to agents, so that data can be queried directly rather than only retrieved as a whole dataset.

  • taking advantage of the value that the network effect adds to the data.
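Automatic agents typically consume such data by matching graph patterns, in the spirit of a SPARQL basic graph pattern. A toy sketch in Python, where `None` plays the role of a query variable and the data is invented:

```python
# Toy pattern matching in the spirit of a SPARQL basic graph pattern.
# None acts as a variable (wildcard). The data is invented.
triples = [
    ("ex:report42", "dc:creator", "ex:Alice"),
    ("ex:report42", "dc:subject", "ex:Finance"),
    ("ex:memo7",    "dc:creator", "ex:Bob"),
]

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None matches anything."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which statements say ex:Alice created something?"
print(match(p="dc:creator", o="ex:Alice"))
```

A real SPARQL endpoint generalises this: agents send declarative pattern queries over HTTP and receive exactly the matching statements, which is what "data-as-a-service" means in practice.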

Within an organisation, this set of technologies can be used for knowledge organisation. Among business applications are:

  • facilitating the integration of information from different sources

  • dissolving ambiguities in the corporate terminology

  • facilitating the conceptual interoperability at human and at machine levels

  • improving the information retrieval

  • identifying relevant information with respect to a given domain

  • providing decision making support

  • data storytelling and visualisation

  • addressing data governance and management needs by delivering trustworthy data
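To make one of these applications concrete, dissolving terminology ambiguity usually amounts to mapping each department's wording onto a shared controlled vocabulary. A minimal sketch, with an invented synonym table:

```python
# Sketch: dissolving corporate terminology ambiguity with a
# controlled vocabulary. The synonym table is invented for illustration.
preferred_term = {
    "client": "Customer",
    "account holder": "Customer",
    "customer": "Customer",
    "p.o.": "PurchaseOrder",
    "purchase order": "PurchaseOrder",
}

def normalise(term):
    """Map a department-specific term onto the shared concept."""
    return preferred_term.get(term.strip().lower(), term)

print(normalise("Account Holder"))  # Customer
print(normalise("P.O."))            # PurchaseOrder
```

In a Semantic Web setting the same idea is expressed with vocabularies such as SKOS, where alternative labels point at one preferred concept, so integration and retrieval work on concepts rather than on spellings.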

The Semantic Web has made tremendous leaps in the last ten years, maturing into a solid ecosystem of standards and technologies. More and more organisations publish linked open data, a movement that has grown every year and now amounts to a body of information collectively larger than any other single source. Large organisations such as NASA, GE, Johnson & Johnson, Amazon and the European institutions rely on Semantic Web technologies to run critical daily operations.

Natural Language Processing

One of the most challenging and revolutionary things artificial intelligence (AI) can do is speak, write, listen, and understand human language. Natural language processing (NLP) is a branch of artificial intelligence that helps computers analyse, understand and interpret human language. NLP draws from many disciplines, including computer science and computational linguistics, in its pursuit to fill the gap between human communication and computer understanding.

What is Natural Language Processing technology?

Large volumes of textual data

Natural language processing helps computers communicate with humans in their own language and scales other language-related tasks. For example, NLP makes it possible for computers to read text, hear speech, interpret it, measure sentiment and determine which parts are important. 

Today’s machines can analyse more language-based data than humans, without fatigue and in a consistent, unbiased way. Considering the staggering amount of unstructured data that’s generated every day, from medical records to social media, automation will be critical to fully analyse text and speech data efficiently.
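One of the simplest forms such analysis can take is lexicon-based sentiment scoring: tokenise the text and count words from positive and negative lists. The sketch below uses tiny invented word lists; production lexicons are far larger and usually learned from data:

```python
import re

# Toy lexicon-based sentiment scoring. The word lists are invented
# and far smaller than any production sentiment lexicon.
POSITIVE = {"great", "good", "excellent", "reliable"}
NEGATIVE = {"bad", "slow", "broken", "poor"}

def sentiment(text):
    """Score text as (#positive tokens) - (#negative tokens)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

print(sentiment("The service was great but the app is slow"))  # 0
```

Even this crude approach illustrates the point of the paragraph above: once scoring is automated, millions of documents can be processed consistently, which no human team could do.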

Structuring a highly unstructured data source

Human language is astoundingly complex and diverse. We express ourselves in infinite ways, both verbally and in writing. Not only are there hundreds of languages and dialects, but within each language is a unique set of grammar and syntax rules, terms and slang. When we write, we often misspell or abbreviate words, or omit punctuation. When we speak, we have regional accents, and we mumble, stutter and borrow terms from other languages. 

While supervised and unsupervised learning, and specifically deep learning, are now widely used for modelling human language, there’s also a need for syntactic and semantic understanding and domain expertise that are not necessarily present in these machine learning approaches. NLP is important because it helps resolve ambiguity in language and adds useful numeric structure to the data for many downstream applications, such as speech recognition or text analytics.
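A first step in adding that structure is normalisation: repairing the abbreviations, casing and punctuation noise described above so that downstream components see uniform tokens. A minimal sketch, with an invented abbreviation table:

```python
import re

# Sketch: adding structure to noisy text by normalising it.
# The abbreviation table is invented; real pipelines curate or
# learn far larger resources.
ABBREVIATIONS = {"pls": "please", "appt": "appointment", "thx": "thanks"}

def clean_text(text):
    """Lowercase, expand known abbreviations, drop punctuation."""
    tokens = re.findall(r"\w+", text.lower())
    return " ".join(ABBREVIATIONS.get(t, t) for t in tokens)

print(clean_text("Pls confirm my appt, thx!"))  # please confirm my appointment thanks
```

Normalised text like this is what then feeds speech recognition, text analytics and the other downstream applications mentioned above.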