
2022 Semantic Technology Trends: Humanizing Artificial Intelligence

Not so long ago, semantic technologies were considered an obscure, almost esoteric branch of data management that few people talked about or openly admitted to using.

Today, with the growing popularity of knowledge graphs (ubiquitous in solutions covering everything from data preparation to analysis) and the rise of neuro-symbolic artificial intelligence (which combines AI’s knowledge base with its statistical base), semantic technologies are being actively explored for a diverse range of use cases across industries.

The most relevant of these (and the best suited to semantic capabilities) involve nearly all forms of natural language technologies, in deployments ranging from cognitive process automation workflows to conversational AI applications.

According to expert.ai CTO Marco Varone, “A lot is happening in the semantic language understanding space. A lot more has happened in the last three or four years than in the previous 10 to 15. Over the past few years, things have gone from experiments in semantics and language to real projects.”

What is most important about these projects is that they often involve simplifying several aspects of AI as it relates to natural language processing. Moreover, by using the semantic inference approach that underpins symbolic AI deployments, organizations are creating an effect that is as profound as it is undeniable.

What they’re doing is making AI itself more human, explainable, and reliable in production environments, propelling this essential suite of technologies into the next phase of its evolution and usefulness to business.

“For many, the idea is that the next technology will be so smart that it can learn and somehow manage itself,” Varone explained. “It’s not possible, and companies have finally realized that they need a human in the loop.”

Human in the Loop

The human-in-the-loop precept is one of the ways enterprise AI is becoming more human through semantic approaches. People are instrumental in crafting the business rules that form the basis of the automated reasoning at the core of the symbolic AI method underpinning these semantic technologies.

Moreover, humans are indispensable to AI approaches involving only AI’s knowledge base, to those involving its connectionist base exemplified by machine learning, and to those based on the intertwining of the two in neuro-symbolic AI applications. “Human-in-the-loop will shape a lot of things in the coming year, because that means you have to organize your processes in such a way that humans can always add the last part of the value that only humans can add,” Varone explained.

Human experts, automatic reasoning

There are two primary ways in which humans are directly responsible for the underlying value of symbolic reasoning for natural language technology use cases. The first involves subject matter experts “enriching the knowledge graph, which is a super trend for work,” Varone revealed. Knowledge graphs can be compiled for any number of domains, including regulations, legal issues, or products; human expertise is essential to populate these applications with the most relevant and curated knowledge. To this end, the second way humans enhance deployments of semantic inference is by assembling the vocabularies, taxonomies, thesauri, and rules that these intelligent systems reason about for applications such as text analytics.
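To make that second point concrete, the brief Python sketch below shows how an expert-curated vocabulary and a few simple rules might drive symbolic text analytics. The taxonomy, terms, and function names are hypothetical illustrations for this article, not expert.ai’s actual knowledge model or API.

```python
# Minimal sketch of expert-curated symbolic rules for text analytics.
# The taxonomy, terms, and function names here are illustrative placeholders,
# not expert.ai's actual knowledge model or API.

# A small domain taxonomy a subject matter expert might curate:
# each concept maps to the surface terms (vocabulary) that signal it.
TAXONOMY = {
    "regulation": {"compliance", "gdpr", "regulatory filing"},
    "legal": {"contract", "litigation", "liability", "clause"},
    "product": {"warranty", "sku", "release", "recall"},
}

def tag_document(text: str) -> dict:
    """Return the expert-defined concepts whose terms appear in the text."""
    lowered = text.lower()
    hits = {}
    for concept, terms in TAXONOMY.items():
        matched = sorted(term for term in terms if term in lowered)
        if matched:
            hits[concept] = matched
    return hits

doc = "The new release must satisfy GDPR compliance before the contract is signed."
print(tag_document(doc))
# {'regulation': ['compliance', 'gdpr'], 'legal': ['contract'], 'product': ['release']}
```

The curation work described above lives in the taxonomy itself; the code simply applies whatever vocabulary and rules the experts have assembled.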

“You have to have your expert person who can put in the knowledge, who can use the human capacity for abstraction, who can really decide what are the important things and what are the noise things,” Varone said. Text analytics applications are key to overcoming the unstructured data divide in many areas, including understanding market forces in finance or retail, finding new solutions in pharmaceuticals, and enhancing health and security for various intelligence agencies. “With text analytics knowledge navigation, you have a large amount of information collected internally, externally, and a combination of both, and you want to extract insights to help your knowledge workers,” Varone clarified.

Humanizing Machine Learning

The human expertise that is essential to creating the previously mentioned tools (knowledge graphs, rules, taxonomies, and glossaries) to exploit AI’s semantic knowledge base for natural language technologies also applies to statistical deployments of AI. In particular, humans have come to play a vital role in everything from creating advanced machine learning models to maintaining their ongoing performance. Key ways that subject matter expertise can positively affect these connectionist techniques include:

  • Training data: Data scientists and predictive modelers must frequently consult subject matter experts when refining models with additional training data. “Even to give more data to train your models, you need someone to say it’s a valuable and available source of information, so use it, or it’s not good because of all that noise,” Varone noted. The intimate knowledge experts have of their domains, which may be beyond the view of data scientists, is key to providing the best training data.
  • Bias: Detecting, correcting, and eliminating model biases are fundamental to keeping models up to Responsible AI standards. “Statistical models can learn bias very quickly,” admitted Varone. “They can learn bad things and if you don’t have an expert it can take a long time to spot. If you have an expert in the loop, you can immediately spot when something is wrong or irrelevant.”
  • Precision: Ultimately, employing experts to validate the outputs of advanced machine learning models inherently increases their accuracy by catching events such as model drift (a minimal sketch of this kind of expert review loop follows the list). “Experts need to be part of any language learning process,” Varone said. “Because then you can be sure what you’re getting is top quality and…you’ll get better results in the end and spend fewer resources.”
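As a rough illustration of that expert review loop, the Python sketch below routes low-confidence model predictions to a human reviewer rather than accepting them automatically. The confidence threshold, review queue, and sample predictions are assumptions made for this example, not any vendor’s workflow.

```python
# Minimal human-in-the-loop sketch: route low-confidence model predictions to
# a subject matter expert for review instead of trusting the model blindly.
# The threshold, queue, and sample predictions are assumptions for illustration.

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune per use case

def triage(predictions, review_queue):
    """Accept confident predictions; send uncertain ones to a human expert."""
    accepted = []
    for text, label, confidence in predictions:
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted.append((text, label))
        else:
            # In practice this would feed a labeling or review UI.
            review_queue.append((text, label, confidence))
    return accepted

predictions = [
    ("Quarterly filing mentions a new liability clause", "legal", 0.93),
    ("Ambiguous note about a possible recall", "product", 0.41),
]
review_queue = []
print(triage(predictions, review_queue))  # one confident prediction accepted
print(len(review_queue))                  # 1 item awaiting expert review
```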

Human-led composite AI

The optimal way to conserve resources, increase efficiency, and hone AI output with semantic techniques is to couple machine learning and symbolic reasoning in what Varone called a “hybrid approach.” Such hybridization is part of the notion of composite AI introduced by Gartner, in which organizations invoke a plethora of AI methodologies to generate these ideal outcomes. There are many ways to use the reasoning and learning capabilities of AI to improve automatic language understanding processes. Labeling training data for supervised learning deployments is one of the main inhibitors of this approach. Varone cited an example in which, for this purpose, companies could consult an expert who says, “yes, you must [annotate data], but it will take 30 days to do it.”

However, by employing that subject matter expert to design business rules that annotate the necessary training data instead, “we can do it in three days,” Varone concluded, which speeds up time to value. There are also cases where organizations can use machine learning methods to refine or populate the knowledge base on which symbolic AI rules are built. Supervised learning methods are usually included in these efforts, although Varone hinted at the effectiveness of “creating or enriching a knowledge graph in an unsupervised mode.” Whichever approach is used, human involvement is crucial to succeeding with these AI opportunities for language processing, although the reliance on connectionist and rule-based approaches may not be equal. “Why you’re seeing more and more interest in the hybrid approach is because token mixing is super resource efficient; it’s a thousand times more effective,” Varone observed.
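As a hedged illustration of that rule-based annotation idea, the short Python sketch below uses expert-written trigger phrases to label raw text that can later seed a supervised model. The rules, labels, and sample messages are hypothetical, not expert.ai’s actual tooling.

```python
# Minimal sketch of expert-written rules used to annotate training data for a
# supervised model, in the spirit of the hybrid approach described above.
# The trigger phrases, labels, and sample texts are hypothetical.

RULES = [
    ("refund", "complaint"),
    ("invoice", "billing"),
    ("password reset", "account"),
]

def rule_label(text):
    """Apply the first expert rule whose trigger phrase appears in the text."""
    lowered = text.lower()
    for trigger, label in RULES:
        if trigger in lowered:
            return label
    return None  # leave unlabeled for a human reviewer or a statistical model

unlabeled = [
    "I still have not received my refund for order 2231.",
    "Please resend the invoice from March.",
    "How do I do a password reset on the mobile app?",
]

# Rule-annotated examples can then seed training of a machine learning classifier.
training_data = [(text, rule_label(text)) for text in unlabeled if rule_label(text)]
print(training_data)
```

The appeal of this pattern is exactly the trade-off Varone describes: a few hours of expert rule-writing can replace weeks of manual annotation, while a statistical model still handles the cases the rules miss.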

Managing AI with Semantics

Semantic technologies and principles are among the most effective ways to oversee enterprise AI use cases for natural language technologies. These capabilities are built from organized human knowledge, which is why the notion of the human in the loop is so important today.

By extending this concept to statistical AI approaches, or to those combining AI’s reasoning and learning capabilities, companies can accelerate their deployments, improve them, and bring them more in line with the human standards by which their underlying value is ultimately judged. “Human-in-the-loop is finally understood by everyone: people will be needed in the future,” Varone summed up. “You can’t do without people. You have to give people the best tools, yes. But people need to be aware.”

The underlying systems that people oversee can only benefit from this development, and indeed they are.

About the Author

Jelani Harper is an editorial consultant serving the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.

