According to SAS, “Natural language processing (NLP) is a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, in its pursuit to fill the gap between human communication and computer understanding.”
NLP encompasses both natural language understanding, i.e., the ability to comprehend the intent behind words and phrases, and natural language generation, or the ability to create phrases that humans might say. Google Assistant, Apple’s Siri, Samsung’s Bixby, and Amazon’s Alexa are examples of technology that utilizes NLP. Users can engage Google Assistant, Siri, and Alexa in two-way conversation, using voice to play songs, get directions, place orders, and much more.
NLP is considered one of the most difficult AI problems because human language is incredibly dense and complex. It is far from an exact science. While there may be plenty of rules defining a language’s use, there are usually countless caveats, provisions, and quirks that defy easy definition. Differences in dialect, along with tone, such as sarcasm, which can completely invert the meaning of a sentence, make NLP a highly tricky proposition. English has 26 letters, but these form roughly a million recognized words, which can in turn be used to form billions of sentences.
Throw human dialects into the mix, and a conversation in a language like English between, say, an Australian outback rancher and a Cajun from Louisiana can be a frustrating linguistic experience for both. Even though they are both, technically, speaking the same language, each may understand only a few words of the other’s sentences.
So how does complicated NLP technology like Siri, Alexa, Google Assistant, Samsung’s Bixby, or even a basic Facebook or WhatsApp chatbot work? In both simple and highly complex ways, as it turns out. Although English has some extremely simple rules, it also has many highly complex and even nonsensical ones.
For example, while adding an ‘s’ to a standard noun makes it plural, English is also filled with contranyms, i.e., words that carry two opposite meanings, and homonyms, words that are spelled or pronounced alike but have completely different meanings. NLP must look at both the arrangement of words in a sentence, the syntax, and the meaning of the words, the semantics. The latter is, of course, the more difficult kind of information to extract, but also the more important.
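Even the simple “add an ‘s’” rule needs a table of exceptions, which is part of why rule-based language handling gets messy fast. A minimal sketch (the exception table and suffix rules here are illustrative, not exhaustive):

```python
# Sketch of English pluralization: a base rule plus exceptions.
# The irregular table and suffix rules are illustrative only.
IRREGULAR = {"child": "children", "mouse": "mice", "sheep": "sheep"}

def pluralize(noun: str) -> str:
    if noun in IRREGULAR:
        return IRREGULAR[noun]
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"          # bus -> buses, church -> churches
    if noun.endswith("y") and noun[-2] not in "aeiou":
        return noun[:-1] + "ies"    # city -> cities
    return noun + "s"               # the "simple" default rule

print(pluralize("cat"))    # cats
print(pluralize("child"))  # children
print(pluralize("city"))   # cities
```

Every new rule added here still leaves counterexamples, which is the point: real systems learn such patterns from data rather than enumerating them by hand.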
NLP starts by defining the functions of individual words, which can be done with the aid of text corpora: libraries consisting of millions of words from prose texts that show every attested meaning of a word through many different examples. Modern classification programs use self-learning AI algorithms to derive rules from the corpora automatically, with little to no human intervention.
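A toy version of this first step might count how often each word serves each grammatical function in a hand-tagged corpus, then pick the most frequent one. The tiny corpus and tag names below are invented for illustration:

```python
from collections import Counter, defaultdict

# A tiny hand-tagged corpus standing in for a real text corpus;
# the (word, part-of-speech) pairs are illustrative.
tagged = [
    ("the", "DET"), ("dog", "NOUN"), ("runs", "VERB"),
    ("the", "DET"), ("run", "NOUN"), ("was", "VERB"),
    ("long", "ADJ"), ("dogs", "NOUN"), ("run", "VERB"),
    ("run", "VERB"),
]

# Tally how often each word appears under each tag.
counts = defaultdict(Counter)
for word, tag in tagged:
    counts[word][tag] += 1

def most_likely_tag(word: str) -> str:
    """Return the function a word carries most often in the corpus."""
    return counts[word].most_common(1)[0][0]

print(most_likely_tag("run"))  # VERB (2 of its 3 occurrences)
```

Real taggers go further, also conditioning on the surrounding words, but the underlying idea of learning word functions from corpus frequencies is the same.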
In step two, knowledge of syntax is used to understand the structure of a sentence by breaking it down, or parsing it, into its phrases in a tree diagram known as a ‘parse tree’. These parse trees are fed into a machine learning model, which builds a word-dependency guide from them. In a language like English, which is extremely difficult to parse reliably, the resulting tree can be so flawed as to be useless, but NLP systems learn continuously and are getting better over time.
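A parse tree like the ones described can be sketched as a nested structure. The sentence and phrase labels below follow common phrase-structure conventions (S = sentence, NP = noun phrase, VP = verb phrase) and are illustrative:

```python
# A parse tree for "the dog chased the cat", as nested tuples of
# (label, children...). Leaf nodes pair a tag with a word.
tree = ("S",
        ("NP", ("DET", "the"), ("NOUN", "dog")),
        ("VP", ("VERB", "chased"),
               ("NP", ("DET", "the"), ("NOUN", "cat"))))

def leaves(node):
    """Recover the sentence's words by walking the tree left to right."""
    label, *children = node
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]  # leaf: a tag wrapping a single word
    return [w for child in children for w in leaves(child)]

print(" ".join(leaves(tree)))  # the dog chased the cat
```

The value of the tree is that phrases become addressable units: a model can ask which noun phrase a verb attaches to, rather than treating the sentence as a flat word list.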
Step three moves into the complex realm of semantics. While humans can easily make sense of a sentence with little concern for where a particular word is placed, computers can’t. They must try to define a word by understanding both the words that appear with it and its placement. Utilizing the text corpora, an NLP system must process huge amounts of data to understand how language works, including the unique cases within it.
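One common way to let a system define a word by the words that appear with it is to count its co-occurring context words across a corpus. A minimal sketch, with an invented corpus and window size:

```python
from collections import Counter

# Sketch of distributional semantics: characterize a word by the words
# that appear around it. The corpus and window size are illustrative.
corpus = ("the bank approved the loan . "
          "the river bank was muddy . "
          "the bank raised interest rates").split()

def context_counts(target: str, window: int = 2) -> Counter:
    """Count words within `window` positions of each occurrence of target."""
    counts = Counter()
    for i, word in enumerate(corpus):
        if word == target:
            lo, hi = max(0, i - window), i + window + 1
            counts.update(w for w in corpus[lo:hi] if w != target)
    return counts

# Words near "bank" (river, raised, interest) hint at which sense is in play.
print(context_counts("bank").most_common(5))
```

Modern systems replace raw counts with learned word embeddings, but the principle is the same: meaning is inferred from company kept.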
Sometimes the words are treated as separate entities, while sometimes the system groups them together to represent a single idea. Here, the work done on parse trees can be extremely helpful. Named Entity Recognition, or NER, helps connect the nouns within a sentence to their real-world concepts. When a sentence contains words referencing a city, person, company, or product, NER can turn that text into structured, meaningful data using little more than context cues.
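A very rough sketch of entity tagging is a gazetteer lookup: matching tokens against lists of known names. The entity lists here are illustrative, and real NER models also weigh context, which a plain lookup cannot:

```python
# Gazetteer-style Named Entity Recognition sketch. The name lists are
# illustrative only; real NER models disambiguate using context.
GAZETTEER = {
    "Paris": "CITY",
    "London": "CITY",
    "Amazon": "COMPANY",
    "Alexa": "PRODUCT",
}

def tag_entities(sentence: str):
    """Return (token, entity type) pairs for tokens found in the gazetteer."""
    return [(tok, GAZETTEER[tok]) for tok in sentence.split()
            if tok in GAZETTEER]

print(tag_entities("Amazon launched Alexa in Paris"))
# [('Amazon', 'COMPANY'), ('Alexa', 'PRODUCT'), ('Paris', 'CITY')]
```

The lookup breaks on ambiguous names (“Amazon” the company versus the river), which is exactly where the context cues mentioned above become essential.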
NLP has been around since Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory rolled out ELIZA, an early natural language processing computer program, in the mid-1960s. Today’s NLP systems are far more complicated, but they still follow the same human-machine interaction procedure: a human speaks to or types into a computer; the machine captures that information via auditory or other input channels; the audio or text input is converted into a form the system can process; the system’s response is converted back into audio or text; and the machine responds to the human by playing an audio file or exporting a text file.
Today, the business use cases for NLP are almost endless. An NLP-powered sales assistant can handle complex marketing tasks, like sending out emails and then interpreting the responses that come back. Hot leads can be forwarded to a salesperson, cold leads go into the trash, and questionable ones get routed to a manager. NLP can also be used to translate marketing messages from one language into a whole host of others.
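The lead-routing logic described above might be sketched with simple keyword matching. The keyword lists and routing rules below are invented for illustration; a production assistant would use a trained sentiment or intent model rather than fixed word lists:

```python
# Hedged sketch of routing email replies: hot leads to a salesperson,
# cold leads to the trash, ambiguous ones to a manager for review.
HOT = {"interested", "demo", "pricing", "buy"}
COLD = {"unsubscribe", "stop", "never"}

def route_lead(reply: str) -> str:
    words = set(reply.lower().replace(",", " ").replace(".", " ").split())
    if words & COLD:
        return "trash"          # clear negative signal
    if words & HOT:
        return "salesperson"    # clear buying signal
    return "manager"            # ambiguous replies get human review

print(route_lead("Yes, interested in a demo"))  # salesperson
print(route_lead("Please unsubscribe me"))      # trash
print(route_lead("Who is this"))                # manager
```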
Other use cases include social media monitoring, sentiment analysis, text analysis, survey analysis, spam filtering, email classification, spelling and grammar checks, smart search, duplicate deletion, translation tools, chatbots, and smart home devices. In the not-too-distant future, real-time translation of human conversations into multiple languages could become routine: NLP systems will take in a person’s speech in English and immediately render it in a listener’s chosen language.