What is Your Company’s Background?

Lexalytics was founded in 2003 by Jeff Catlin and Mike Marshall when the company they were working for, LightSpeed Software, was about to be consolidated on the west coast. Jeff convinced the venture funders to give the company to him and Marshall to avoid shutdown costs, move operations to Amherst, and rename the company Lexalytics. After some mad coding, Lexalytics shipped the world’s first commercially available sentiment analysis engine in 2004.

What Problem Does Lexalytics Solve?

There’s simply too much human communication out there to understand, analyze, and act on without AI assistance.

What Does Lexalytics do, and why is it Important to Your Customers?

The Lexalytics Intelligence Platform processes, analyzes and provides insights around a company’s text data – e.g., surveys, call logs, social media posts, message boards, comments, etc. There are five modules that can be used alone or in various combinations:

  • Salience is an on-premise, text and sentiment analysis engine for enterprise-level volume.
  • Semantria is Lexalytics’ cloud-based API for scalable text analytics that integrate into third-party platforms, dashboards and apps.
  • Semantria for Excel is a text analytics add-in for Microsoft Excel to quickly and easily analyze both structured and unstructured survey data, generate insights and create reports and data visualizations in one place.
  • AI Assembler is a toolkit for building targeted, machine learning and AI solutions to unique natural language problems. AI Assembler helps accelerate and automate the process of data management, analysis and prediction, with a particular bent towards messy text data.
  • Semantria Storage & Visualization is a content storage, aggregation, search and reporting framework that provides business analysts and marketers a single access point to interact with their data.

Lexalytics differs from other analytics providers in several key ways. Lexalytics has been a leader in ML-driven text analytics and NLP for nearly 15 years, and processes billions of documents per day for Fortune 500 companies and SMBs. While many startups are riding the wave of AI hype and reaping loads of venture capital, Lexalytics has continued to be privately held with no outside funding since its inception. Some of Lexalytics’ “firsts” include:

  • the first commercial sentiment analysis engine in 2004
  • the first Twitter/microblog-specific text analytics in 2010, including support for emoticons and hashtags
  • the first semantic understanding based on Wikipedia in 2011
  • the first unsupervised ML model for syntax analysis in 2014, leading to world-beating speed and scalability of deep syntax analysis

In addition, Lexalytics has always valued its strong ties to the research community and academia, and in January 2017, the company launched its Magic Machines AI Labs in partnership with the University of Massachusetts Amherst’s Center for Data Science and Northwestern University’s Medill School of Journalism, Media and Integrated Marketing Communications to drive innovation in AI.

Lexalytics’ technological approach to text analytics also differs from competitors, in that it employs a hybrid model using machine learning in combination with rules and code; the resulting system is both trainable and tunable. Systems that are solely machine-learning-based require historical data and markup time in order to train the system, whereas Lexalytics’ method allows for direct tuning, requiring only a few hours of time to accurately process data. Systems that are solely rules-based are brittle and often can’t be configured to capture all of the possible instances of something as complex as, say, “food.”
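
To make the hybrid idea concrete, here is a minimal sketch of a rules layer sitting on top of a statistical classifier. This is illustrative only, not Lexalytics’ actual implementation; the phrase list and toy training data are made up.

```python
# A minimal sketch of a hybrid (ML + rules) sentiment pipeline.
# Illustrative only -- not Lexalytics' actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for a pre-trained statistical model.
train_texts = ["great food, loved it", "terrible service", "amazing stay", "awful room"]
train_labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

# Hand-written rules are what make the system directly tunable: a domain
# expert can add a phrase in minutes, with no retraining. (Hypothetical phrases.)
RULE_OVERRIDES = {"to die for": 1, "never coming back": 0}

def classify(text: str) -> int:
    for phrase, label in RULE_OVERRIDES.items():
        if phrase in text.lower():
            return label  # the rule wins over the statistical model
    return int(model.predict(vectorizer.transform([text]))[0])

print(classify("the dessert was to die for"))   # rule fires -> 1 (positive)
print(classify("awful room, terrible service")) # falls through to the ML model -> 0
```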

Also important to Lexalytics’ global customers is the fact that it offers some of the broadest language support in the industry, with more than 20 languages supported.

How Does Lexalytics use AI?

The Lexalytics Intelligence Platform helps businesses work better with text.  This could be as part of a decision support system, or an analytics offering, or predictions.  It could be looking backwards, or peering forwards. 

Text is language.  To understand text is to understand meaning, in a way, it is to understand the nature of being human, by being able to untangle how we communicate.  We focus on conversational text – as examples, social media or customer feedback. That sort of communication is particularly messy and fraught with layers of meaning.  And it’s changing all the time…consider those teenagers who don’t want their parents following what they’re saying!

To understand the meaning of a piece of text, you need to understand three separate things simultaneously: the syntax (what words are related to what other words, and how they act on each other), the semantics (what the words mean, and any modifications that are made), and the context (what this syntax and semantics mean coming from this person, in this place, at this time, talking about this topic).

Syntax follows certain rules, more or less.  The basic structure of a sentence is relatively fixed.  The order of a paragraph is well understood.  Do you remember doing sentence diagrams?  Yeah, those things.  There is a rough equivalent of those in the understanding of natural language, it’s called a “parse tree” or “dependency parse” – basically, a diagram that shows how each word is related to each other word. 
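
Here is what that looks like in practice, using the open-source spaCy library as a stand-in (this is not Lexalytics’ engine; `en_core_web_sm` is spaCy’s small English model and must be downloaded separately):

```python
# Print the dependency parse of a sentence with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model, installed separately
doc = nlp("Bob hit the ball")

for token in doc:
    # token.dep_ is the grammatical relation; token.head is the word it attaches to
    print(f"{token.text:>5} --{token.dep_}--> {token.head.text}")

# Typical output:
#   Bob --nsubj--> hit    (Bob is the subject of "hit")
#   hit --ROOT--> hit
#   the --det--> ball
#  ball --dobj--> hit    (the ball is the direct object of "hit")
```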

A dependency parse gives you a complete understanding of the syntax of the sentence.  But dependency parsing is computationally expensive, and isn’t how we, as people, understand sentences.  You don’t draw a sentence diagram in your head, you just know that “Bob hit the ball” means that Bob acted on the ball by hitting it.

This turned out to be a tricky problem to solve – getting the sophistication of a dependency parse with the speed of thought.  It turns out that we can teach machines to understand how sentences work just like you learned as a kid – by showing them billions of phrases.  (You probably required fewer phrases.  Computers aren’t as bright as you.)  

The computer learned, for example, that if it were to see a verb like “ate” it is probably looking for a noun like “apple” and not a noun like “house.”  Certain things have a property of “eatability,” or “throwability” (think “ball,” not “mountain”). So, the Syntax Matrix, as we call it, is the first of 3 bits of AI that I will explain.  (There’s lots more, but these are 3 examples.)
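
The details of the Syntax Matrix aren’t spelled out here, but the intuition can be sketched in a few lines: count which nouns appear as the object of which verbs across many phrases, so that “ate” comes to expect “apple” rather than “house.”  (Three toy phrases below stand in for billions.)

```python
# Toy sketch of learning verb-object expectations from (verb, noun) pairs.
from collections import Counter, defaultdict

phrases = [("ate", "apple"), ("ate", "sandwich"), ("threw", "ball")]

cooccurrence = defaultdict(Counter)
for verb, noun in phrases:
    cooccurrence[verb][noun] += 1

def expected_objects(verb):
    # Nouns ranked by how often they appeared as this verb's object --
    # a crude proxy for properties like "eatability" or "throwability".
    return cooccurrence[verb].most_common()

print(expected_objects("ate"))    # [('apple', 1), ('sandwich', 1)]
print(expected_objects("threw"))  # [('ball', 1)]
```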

Semantics next.  To grasp the meaning of a statement, you need to know things like “an apple grows on a tree” and “an apple is a fruit” and “most apples are edible” and “Honeycrisp is the most superior apple ever.”  (It is.) You need to understand that a jaguar is a kind of cat, and is sort of close to a lion, but further away is a dog, and still further a tomato, and even further away is the concept of linear algebra, and the idea of love. 

Semantics involve relationships of concepts, and those relationships are, much like syntax, learned over a lifetime of reading, speaking and listening.  What’s a poor computer to do? 

It turns out that we’ve done a neat job of producing a resource that’s almost perfect for teaching an AI to understand semantics.  It is open-source, freely licensed, and multi-lingual. It is also encyclopedic, which is a useful property when using the content for training purposes – each page is about one topic, and introduces other, related topics.  In case you hadn’t guessed yet, that resource is Wikipedia. 

One of the most difficult resources to gather is multi-lingual semantic information.  Wikipedia has completely changed the game by providing well-organized information in many different languages.  There are certainly limitations, as some branches of Wikipedia are far better maintained than others.  It was not written for use by machines, and doesn’t have a top-down information architecture. 

In other words, it’s messy, just like us.  But it is well organized enough to extract useful semantic information and train a machine on the semantics of human languages.
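
As a rough illustration of training on text like this, here is how one might learn semantic relatedness with the open-source gensim library (a stand-in, not Lexalytics’ method).  Trained on a real Wikipedia dump instead of these four toy sentences, “jaguar” would land near “cat” and far from “tomato”:

```python
# Learn word vectors from sentences, then compare semantic relatedness.
from gensim.models import Word2Vec

# A handful of toy sentences standing in for millions of Wikipedia sentences.
sentences = [
    ["jaguar", "is", "a", "large", "cat"],
    ["the", "lion", "is", "a", "big", "cat"],
    ["a", "tomato", "is", "a", "red", "fruit"],
    ["an", "apple", "is", "a", "fruit"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=200)

# Cosine similarity between learned vectors: higher means "closer in meaning".
print(model.wv.similarity("jaguar", "cat"))
print(model.wv.similarity("jaguar", "tomato"))
```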

Last is context.  Context is really the final frontier of natural language understanding. There are several angles to context. The topic of conversation will change the meaning of the language being used (“I will definitely return to this hotel.”  vs. “I will definitely return these shoes.”), the type and medium of the communication will change word usage (“lolz that’s 😎” vs. “Cryo-EM structure of Escherichia coli σ70 RNAP and promoter DNA complex revealed a role of σ non-conserved region during the open complex formation”), the age of the author, the location of the author, socio-economic status, all of these factor into the context of the communication.

Many of these can be inferred, or if you know things about the author, you can use what you know to help with understanding the context.  Lexalytics uses AI understanding to help with a number of aspects of context, and we will be the first to admit that we have a long, long way to go here.  One of the most interesting aspects is what we call “intentions.”  An intention is something like “buy,” “sell,” “quit,” “recommend.”   They are a direct signal of a planned future action.  Back to the word “return” – in the case of the hotel, that’s both a “buy” signal and a “recommend” signal.  In the case of shoes, it’s a “quit” intention. 
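
One way to picture intention detection is as plain text classification.  The sketch below is a toy, not Lexalytics’ actual model; the labeled examples are invented, but they show how the same word “return” can map to different intentions depending on context:

```python
# Toy intention classifier: the same verb maps to different intentions by context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I will definitely return to this hotel",
    "I can't wait to come back next year",
    "I will definitely return these shoes",
    "I'm sending this back for a refund",
]
intentions = ["buy", "buy", "quit", "quit"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, intentions)

print(clf.predict(["I want to return these boots"]))  # -> ['quit'] on this toy data
```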

Language is full of ambiguities.  Unlike engineered computer languages, human language has been messily evolved and added to and morphed over time.  It is only by mimicking the process by which we acquire language that we can teach computers to understand.  And therein is the scope of Artificial Intelligence.

In Your Mind, What is the Future of AI in Business and Marketing?

AI will eventually be used in all areas of the sales funnel to make the jobs of people throughout the organization easier and more effective. We can already use AI to drive content generation/curation, speech and text recognition, personalized offers, and sales optimization. All of these processes will only improve over time.

Let’s take a bit of the funnel to explain this; we’ll use the standard “awareness,” “preference,” “purchase,” “advocate” stages.

For awareness, you need to define a target market that needs to be made aware of the problem, your offering, and how your offering addresses the problem.  If you start to see success, then it would be nice to find more people/organizations like the ones where you’re having success.  That sounds like an AI problem to me – “find me more like this.”  One of the first AI implementations is lead scoring: if you have a bunch of leads coming in, AI can help you figure out which ones are the best to serve first.
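
As a back-of-the-envelope illustration of “find me more like this” (made-up data, not a real lead-scoring product), you could rank prospects by textual similarity to accounts you’ve already won:

```python
# Rank prospects by similarity to a profile of won deals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

won_deals = ["mid-size retailer analyzing customer survey feedback"]
prospects = [
    "large bank monitoring social media sentiment",
    "regional retailer with customer survey data",
    "startup building a mobile game",
]

matrix = TfidfVectorizer().fit_transform(won_deals + prospects)

# Similarity of each prospect to the won-deal profile; higher = serve first.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for prospect, score in sorted(zip(prospects, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {prospect}")
```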

Preference – you need to show that your solution is better than everyone else’s. So, how about using AI to understand who you’re talking to, and assemble them into personas.  We’ve done that at Lexalytics for our inbound leads – we know the personas and what they ask about, which helps us tune the content to accelerate the sales process.  You could also understand where they are in the buying process by understanding what content they’re looking at – using AI to distinguish the likely path of someone who is implementing from that of someone who is just kicking tires.
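
A minimal sketch of the persona idea (the inquiries below are invented, and real systems use far richer features): cluster what inbound leads ask about, and each cluster becomes a rough persona.

```python
# Cluster inbound inquiries into rough personas.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

inquiries = [
    "what is your API rate limit and pricing per document",
    "how do I integrate the REST API with my dashboard",
    "can your tool analyze our customer survey comments",
    "we need sentiment reports on survey feedback",
]

X = TfidfVectorizer().fit_transform(inquiries)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for inquiry, persona in zip(inquiries, labels):
    # One cluster skews developer/API, the other survey-analyst.
    print(persona, inquiry)
```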

Purchase – do you know how much to charge?  Can you optimize your pricing any better?

Advocate – Who are the influencers?  What sort of content are they most interested in?  Can you get your customers to speak in terms that the influencers will repeat, thus multiplying your effectiveness?

Every stage will have AI in it.  It’s like asking “do you think computers are going to affect marketing?”

Do You Think Marketers Will be Replaced by AI Robots?

Lol, no. 

AI will provide powerful frameworks for decision support systems, and will eventually go so far as to prescribe a course of action.  However, at least for my lifetime, there will be marketers in the mix. 

Do You Have Any Other Thoughts on AI in Business and Marketing?

The power of AI is undeniable.  It is truly world changing, and will affect everything we do, from microwaving our food to deciding on the best launch plan.  But…

AI is so incredibly over-hyped right now.  The only way for perception to go is down, as people understand that these are not trivial tasks, that you cannot simply say “hey there, neural net, go off and tell me which companies I should go after next.” 

In addition to being disappointed, some end users are going to suffer spectacular AI fails. It behooves everyone to get knowledgeable, because there is a lot of possibility.  There’s also a lot of snake oil and a lot of risk in going “all-in.” Don’t go all-in.  Go “somewhat-in.”  Experiment.  Try. Pick a point project with clearly defined goals, and invest a small to moderate amount of money.  That way you’ll see clear benefit and learn at the same time.

Don’t ignore AI.  It is happening. It’s just a question of how fast and how messy. 

In this interview, Seth Redmore provided fascinating insights into how artificial intelligence can be used for text and sentiment analysis at scale. I think marketing departments will increasingly use tools like this to make sense of all the text data that is available in most companies.

I am an author, speaker and consultant in marketing automation and artificial intelligence.

Do you need help with marketing automation or AI-based solutions? Contact me and let’s discuss how I can help you!