Call me sentimental, but perhaps some of you, too, are starting to miss the human side of sentiment analysis.
Machine-based statistical analysis is not the whole of sentiment analysis – or at least it shouldn't be. Humans still have an important role to play in the process, and there are opportunities for other tools to assist them, working in complementary ways with statistics-based sentiment analysis tools.
The problem with purely machine-based analyses
These days it seems only machines are able to consume and digest the ever-increasing streams of social media comments, reviews, blogs, tweets and what have you. And just as they are programmed to do, those machines spit out the cold hard stats about how positively or negatively some demographic group or another views your website, brand or product.
One thing is clear in that case: you have a problem that you need to understand better and find ways to resolve. But what do you do now?
You can use whatever statistical analysis tool you have at your disposal to try to drill down deeper into the statistics to get to the heart of the problem. That’s assuming your tool provides such a capability and that you’ve figured out how to use it (or maybe you’re fortunate enough to have a full-time data scientist on staff!).
Suppose, for example, that much of the negative sentiment centers on the word “confusing”. Does your sentiment analysis tool know which of the terms appearing near “confusing” – but not necessarily all of them – relate to a common set of semantic concepts around user experience (UX), user interface or website navigation? Does it give you insight into what specifically users are finding confusing about the UX and, perhaps more importantly, why?
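To make the idea concrete, here is a minimal sketch of that kind of semantic grouping. The concept names and term lists are hypothetical; a real tool would derive them from a taxonomy or learned embeddings rather than a hand-written dictionary.

```python
# Hypothetical mapping from sentiment-bearing terms to higher-level
# semantic concepts. In practice this mapping would come from a
# taxonomy or embedding model, not a hard-coded dictionary.
CONCEPT_TERMS = {
    "navigation": {"confusing", "lost", "menu", "can't find"},
    "visual design": {"cluttered", "busy", "tiny text"},
}

def concepts_for(comment: str) -> set:
    """Return the semantic concepts whose terms appear in a comment."""
    text = comment.lower()
    return {concept for concept, terms in CONCEPT_TERMS.items()
            if any(term in text for term in terms)}

print(concepts_for("The menu is confusing and I got lost."))
# → {'navigation'}
```

A tool that surfaces concepts like these, rather than raw term counts, gives a human analyst something to reason about – the "what" and a starting point for the "why".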
Don’t get me wrong, the “cold hard statistics” that machines deliver from sentiment analysis are quite valuable. Without big data and machine learning algorithms, humans could not begin to keep up with the masses of signals being generated by social media sources these days. This statistical data is an important part of the story – but it only represents one part of the story.
Getting beyond the cold hard stats
The word ‘analysis’ comes from ancient Greek and literally means breaking something into its elemental parts in order to understand it. Humans often toggle back and forth between analysis and its complementary operation, synthesis (i.e. building things back up), in order to understand what they are analyzing in context.
Until machines have true deep learning capabilities, we can’t count on them to play the critical role that humans play in subtle, complex analyses – including analyses of sentiment. That’s why companies like Facebook are now starting to invest in deep learning, but it will likely be years before machines have such a capability and can apply it to sentiment analysis.
Machines can’t easily draw conclusions, produce insights or suggest novel ideas for responding to perceptions and issues arising from consumers in the marketplace. They’re also not good at forming and testing hypotheses. Humans do that.
Machines shouldn’t be thought of as performing sentiment analysis themselves. Instead their role is to augment marketing and branding professionals by providing one element of input into what is a rich and multi-faceted analytical process.
Gaps in current approaches to sentiment analysis
What’s needed is a rich, complementary palette of tools that marketing professionals can use for the various aspects of sentiment analysis. There are multiple jobs to be done, often involving iteration through all or at least parts of the overall process, and there should be multiple, interoperable tools associated with it, working together across the entire canvas of sentiment analysis.
Here are some examples of current gaps in sentiment analysis when only a purely machine-driven, statistical-based approach is employed:
- Specific exemplary content – consider the value of exemplary content as a way to give human professionals a richer contextual understanding of consumer sentiment. Summary statistics are good, but there is also value in humans seeing specific instances of the actual comments that stand behind those statistics.
- Semantic networks – using taxonomic structures as “connective tissue” to organize and understand statistical results is one way to provide more accurate knowledge about them. The classification systems used for this purpose should not be fixed, rigid schemes; rather, they should be created dynamically by human professionals interacting with the statistical results in a ‘post-processing’ phase – one involving human-aided analysis and synthesis rather than purely machine-based analytics.
- Targeted tools to formulate and test hypotheses – particularly where semiotics and other perceptual issues come into play (as is often the case, for example, with brand identity), marketing and branding professionals may want to formulate and test new marketing messages against the content that was the subject of the previous sentiment analysis, as well as current streams of new content from the same sources. For example, if your product was a sleep aid and there was negative sentiment around “trouble falling asleep”, you might look for evidence in the same content streams, or from the same sources, to support more positive concepts such as “relaxation” or “stress-relief” as ways your product could help someone fall asleep.
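The hypothesis-testing idea in the last bullet can be sketched in a few lines. Everything here – the comments, the candidate concepts, the term lists – is hypothetical; a real workflow would pull comments from the same social media sources as the original analysis.

```python
# Hypothetical stream of comments previously flagged as negative
# around "trouble falling asleep".
comments = [
    "Still having trouble falling asleep, even with this stuff.",
    "It really helped me relax before bed.",
    "Great for stress relief after a long day.",
    "No effect on my sleep at all.",
]

# Candidate positive concepts to test as messaging angles,
# each with a (hypothetical) list of indicator terms.
positive_concepts = {
    "relaxation": ["relax", "calming"],
    "stress-relief": ["stress relief", "unwind"],
}

def evidence_for(concept_terms, stream):
    """Count comments that mention any indicator term for a concept."""
    return sum(any(t in c.lower() for t in concept_terms) for c in stream)

for concept, terms in positive_concepts.items():
    print(concept, evidence_for(terms, comments))
```

The point is not the trivial string matching, but the workflow: a human forms a hypothesis (“relaxation messaging would resonate”) and uses tooling to check it against the same evidence the original statistics were drawn from.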
So let’s close these tool gaps and put humans back into the valuable role of guiding sentiment analysis.
Check out our developers site to begin exploring your own custom sentiment analysis solution!