The concept of communication between humans and machines is a classic of science fiction, albeit one that now seems almost within reach thanks to Siri, Alexa and other voice assistants. But how did we get here?
There is a branch of Artificial Intelligence that has long studied how to enable machines to extract information from human language. This area is called Natural Language Processing, or NLP.
In this post we will draw on Paradigma's experience in Sentiment Analysis to illustrate the evolution of these technologies.
Within NLP there are many subareas, each addressing a specific task of extracting information from natural language, since we are still far from achieving true cognitive intelligence.
Examples of these areas include Natural Language Understanding (NLU), which, together with speech transcription, is the technology behind the conversational bots that have recently become popular.
Other examples of NLP are topic detection in a text, named entity recognition (NER), and lower-level tasks such as the semantic analysis of the words in a text or the morphosyntactic analysis of sentences.
Sentiment analysis, for its part, tries to discern whether the connotation of a phrase is positive, negative or neutral. Normally the objective is to find and classify opinions about a brand or product, but it can also be applied more generally, simply to study the evolution of sentiment within a group.
Semantic analysis, on the other hand, is a wider area that tries to extract information from language based on the meaning of the words used, for example to classify texts by topic.
Finally, emotion extraction seeks to find the expressed or implicit emotions in communication, whether written or audiovisual. It is, in a way, an evolution of sentiment analysis in which more dimensions than polarity are taken into account.
This can be done by assigning emotions to discrete categories (joy, sadness, anger...) or by representing them in a multidimensional space.
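As a concrete illustration, the two representations can be sketched in a few lines, using the well-known PAD (Pleasure, Arousal, Dominance) model as the multidimensional space. The coordinates below are invented for the example, not values from any reference dataset:

```python
# Two ways to represent an extracted emotion: a discrete category,
# or a point in the three-dimensional PAD (Pleasure, Arousal, Dominance) space.
# The coordinates below are illustrative guesses, not reference values.

CATEGORIES = {"joy", "sadness", "anger", "disgust", "fear"}

# Rough PAD coordinates per category; each axis runs from -1 to 1.
PAD = {
    "joy":     ( 0.8,  0.5,  0.4),
    "sadness": (-0.6, -0.4, -0.3),
    "anger":   (-0.5,  0.7,  0.3),
    "disgust": (-0.6,  0.2,  0.1),
    "fear":    (-0.6,  0.6, -0.5),
}

def categorical_to_pad(emotion: str):
    """Map a discrete emotion label onto the continuous PAD space."""
    return PAD[emotion]

print(categorical_to_pad("joy"))  # → (0.8, 0.5, 0.4)
```

The advantage of the continuous representation is that emotions that share no category can still be compared by their distance in the space.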
The first Paradigma projects for sentiment analysis began around 2010 and were based on the morphosyntactic analysis of words. Using the Freeling library, the morphological function of each word and its lemma, or root form, was obtained. From this, rules were constructed and the words were classified into different types to create dictionaries.
With the combination of words and rules, text related to a specific entity was analyzed. This analysis was oriented to medium-length texts, such as blog posts and forum threads.
Part of the effort went into detecting when the entity was actually being talked about and into filtering out spam messages. In addition, a dictionary was created for each project, so behind each project there was a great deal of human work.
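A minimal sketch of this dictionary-and-rules approach might look like the following. The word lists and the single negation rule are invented for illustration; the real projects relied on per-project dictionaries built by hand:

```python
# Minimal sketch of the dictionary-and-rules approach described above.
# The word lists and the negation rule are invented for illustration.

POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "awful", "hate"}
NEGATORS = {"not", "never", "no"}

def score(text: str) -> int:
    """Return a polarity score: >0 positive, <0 negative, 0 neutral."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    total = 0
    for i, word in enumerate(words):
        polarity = 1 if word in POSITIVE else -1 if word in NEGATIVE else 0
        # Rule: a negator right before a sentiment word flips its polarity.
        if polarity and i > 0 and words[i - 1] in NEGATORS:
            polarity = -polarity
        total += polarity
    return total

print(score("the service was not bad, actually great"))  # → 2
```

Even this toy version hints at why each project needed so much manual work: every new domain brings new vocabulary and new rules to maintain.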
But let's see how Sentiment Analysis has evolved from that time to the present, and what possibilities the future may hold.
The projects based on that model worked for a while, but soon new factors appeared that would forever alter the landscape of reputation projects: on the one hand, the adoption of Twitter as a communication platform and, on the other, the emergence of Big Data processing.
Twitter became the main place to air opinions on the Internet, and also a user support channel. This led many companies to start worrying about their reputation on this social network.
The limited length of tweets and their sheer volume forced a rethink of the data analysis model. Because tweets are short, self-contained texts, it was no longer necessary to verify that every opinion expressed in a text referred to the brand in question, nor to worry about textual cohesion, anaphora or cataphora. In return, the volume of information to be analyzed grew steadily.
Because tweets could be processed individually, Hadoop MapReduce was soon adopted as the technology for processing the opinions, which also made it possible to cope with the growing volume of tweets collected.
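The pattern can be sketched locally. In a real Hadoop job, map() would run over shards of tweets across the cluster and the framework would group the emitted pairs by key before handing them to reduce(); here both phases run in-process. The brand names and the precomputed polarity field are invented example data (in practice the polarity would come from the sentiment classifier):

```python
# Local sketch of the MapReduce pattern applied to tweet-level sentiment.
from collections import defaultdict

def map_phase(tweet):
    # Emit a (brand, polarity) pair per tweet; in a real job the polarity
    # would be computed here by the sentiment classifier.
    yield (tweet["brand"], tweet["polarity"])

def reduce_phase(pairs):
    # Aggregate polarity counts per brand, as a reducer would per key.
    totals = defaultdict(lambda: {"positive": 0, "negative": 0})
    for brand, polarity in pairs:
        totals[brand][polarity] += 1
    return dict(totals)

tweets = [
    {"brand": "acme", "polarity": "positive"},
    {"brand": "acme", "polarity": "negative"},
    {"brand": "acme", "polarity": "positive"},
]

pairs = (pair for tweet in tweets for pair in map_phase(tweet))
print(reduce_phase(pairs))  # → {'acme': {'positive': 2, 'negative': 1}}
```

Because each tweet is processed independently in the map phase, the workload parallelizes trivially, which is exactly what made MapReduce a good fit for the growing volume of opinions.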
Among the projects based on this technology we can mention Magnet and Market-as-a-service (both co-financed by the CDTI) and the Eurosentiment Project, funded within the framework of the EU's FP7, as well as ad hoc solutions for specific projects.
Sentiment analysis has many applications, but sometimes it is not enough to draw good conclusions from huge amounts of data. Sometimes something more detailed is needed.
The aim of the MixedEmotions project was very ambitious: to build a platform for extracting emotions from audio, text and video. The consortium consisted of nine companies with different profiles.
The platform also had other NLP capabilities, such as sentiment analysis or entity extraction, but its main focus was the characterization and extraction of emotions.
The extracted emotions could be represented as categories (joy, sadness, anger, disgust and fear) or as three-dimensional points along the lines of the PAD (Pleasure, Arousal, Dominance) model. The sentiment analysis modules were no longer based on lists and manual rules, but on Machine Learning techniques, mainly neural networks.
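To illustrate the shift from hand-written rules to models learned from data, here is a deliberately simple stand-in: a tiny naive Bayes classifier trained on an invented toy corpus. The platform's actual modules used neural networks; the point here is only that polarity is estimated from labelled examples instead of manual dictionaries:

```python
# Sketch of learning polarity from labelled data instead of manual rules.
# Naive Bayes with add-one smoothing; the training sentences are invented.
import math
from collections import Counter

train = [
    ("i love this product it is great", "positive"),
    ("excellent service very happy", "positive"),
    ("terrible experience i hate it", "negative"),
    ("awful quality very bad", "negative"),
]

# Count word frequencies per class.
counts = {"positive": Counter(), "negative": Counter()}
for text, label in train:
    counts[label].update(text.split())

vocab = {w for c in counts.values() for w in c}

def predict(text: str) -> str:
    """Pick the class with the highest log-likelihood under add-one smoothing."""
    best_label, best_score = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values())
        score = 0.0
        for word in text.split():
            score += math.log((c[word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("i love it"))  # → positive
```

The appeal of this family of approaches is that adapting to a new domain means collecting labelled examples rather than rewriting rules by hand.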
For the distributed analysis we used Spark on Mesos, seeing improvements in many areas compared with development on the old Hadoop 1.x. Other technologies such as HDFS, Docker and Marathon were also used to manage microservices. The MixedEmotions platform is mostly open source and available to any user.
Since we began to use sentiment analysis in Paradigma, it has evolved a lot. It has gone from being based on more or less complex rules to being based on Machine Learning techniques.
This evolution also goes from slow processes that were costly to deploy to distributed processing environments capable of analyzing large volumes of data. At the same time, there are more and more tools for extracting information from texts, responding to the need for finer analysis than a simple good-or-bad verdict.
All this makes it increasingly less profitable to build a language analysis solution from scratch out of low-level libraries. In recent times, the Internet giants have released cloud services for Natural Language Analysis, for example on Google Cloud Platform, Amazon Web Services, Microsoft Azure and IBM.
Google Cloud Platform offers the Cloud Natural Language service, with sentiment analysis and entity extraction capabilities in eight languages and detection of topics in English. It also has speech transcription technology with Speech API and conversational bots with Dialogflow.
In November 2017 Amazon launched Amazon Comprehend, with sentiment analysis, entity extraction and topic detection features, among others. Amazon also has Amazon Transcribe for voice-to-text conversion, not to mention Amazon Lex, the conversational bot technology on which Amazon Alexa is based.
IBM's natural language processing offering is called Watson and includes a Natural Language Understanding service capable of analyzing sentiment and extracting emotions, along with voice transcription and conversational bots, among others.
Natural Language Analysis has evolved a great deal since its origins. It seems clear that Machine Learning techniques are what deliver the most reliable results in these processes, which makes using cloud services from giants like Amazon and Google all the more attractive.
But, of course, they are not the solution to every problem. There will be many cases where these tools are not appropriate and something more specific is needed.
In any case, it seems certain that the ways in which machines understand us will continue to evolve, and that we will see new tools come to market, probably in the cloud, thanks to the processing power that Big Data techniques bring to Machine Learning.