Topic Modeling Company Reviews with LDA


Surveys and open-ended feedback are among the many data types and datasets we come into contact with as I/Os. Whether it's the open-ended section of an annual engagement survey, feedback from annual reviews, or customer feedback, the text that is provided is often difficult to do much with at scale. However, there are unsupervised machine learning methods that give us a glimpse into how to make sense of this data. In the previous article I worked through how we might use LSA to accomplish the task of topic modeling, along with a brief look at the data we'll be using today and a background on turning words into vectors. If you would like more detail on word vectorization or on processing the data to be used by LDA, I'd refer you to the article linked above.

For this article I'll walk through another topic modeling technique known as Latent Dirichlet Allocation (LDA). As a reminder, the three techniques covered in this series are:

  1. Singular Value Decomposition (SVD), which Latent Semantic Analysis (LSA) is based on
  2. Latent Dirichlet Allocation (LDA)
  3. Cluster Analysis (K-Means)

We will again examine the Cons from the Glassdoor reviews of retailers that we extracted in an earlier article, so we can compare the results to what we found with LSA.

In [2]:
import numpy as np
from sklearn import decomposition
from scipy import linalg
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter("ignore", category=PendingDeprecationWarning)
import seaborn as sns
import pandas as pd
from sklearn.feature_extraction import stop_words
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
In [3]:
# load the data
df = pd.read_csv('data/glassdoor_data.csv')
df['cons'] = df['cons'].astype(str)

LDA

Latent Dirichlet Allocation is another method for topic modeling. It is a "generative probabilistic model" in which the topic probabilities provide an explicit representation of each response. The first publication on the use of LDA in machine learning came from a few of the biggest names in the field: David Blei, Andrew Ng, and the Michael Jordan... oh, you know a different famous Michael Jordan besides the computer scientist from UC Berkeley? Here is the original article.

Let's discuss the background of LDA in simple terms.

I think the original article does a good job of outlining the basic premise of LDA, but I'll attempt to go a bit deeper. This text is from the original article.

[Excerpt from the original LDA article]

So, we have:

  1. Each document (here, each response) is represented as a random mixture over latent topics.
  2. Each latent topic is characterized by a distribution over words.

By identifying/assigning a set number of topics, we derive a latent layer. In this way the latent layer acts as an intermediary: the words connect to the latent topics, and the topics connect to the documents, or responses in this case.
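To make that generative story concrete, here is a minimal sketch using made-up numbers (a hypothetical six-word vocabulary and three topics, not anything fit to our data): each response draws a topic mixture from a Dirichlet prior, and each word is drawn from the word distribution of a sampled topic.

import numpy as np

np.random.seed(0)

# Hypothetical vocabulary and topic-word distributions (each row sums to 1)
toy_vocab = ["pay", "hours", "management", "customers", "schedule", "benefits"]
topic_word = np.array([
    [0.40, 0.05, 0.05, 0.05, 0.05, 0.40],   # topic 0: pay/benefits
    [0.05, 0.40, 0.05, 0.05, 0.40, 0.05],   # topic 1: hours/scheduling
    [0.05, 0.05, 0.45, 0.40, 0.03, 0.02],   # topic 2: management/customers
])
alpha = [0.5, 0.5, 0.5]                      # Dirichlet prior over topics

def generate_response(n_words=8):
    """Generate one fake 'response' following LDA's generative story."""
    theta = np.random.dirichlet(alpha)       # per-response mixture over topics
    words = []
    for _ in range(n_words):
        z = np.random.choice(len(alpha), p=theta)               # pick a topic for this word
        w = np.random.choice(len(toy_vocab), p=topic_word[z])   # pick a word from that topic
        words.append(toy_vocab[w])
    return theta, " ".join(words)

theta, fake_response = generate_response()
print(np.round(theta, 2), "->", fake_response)

LDA's job is the inverse of this sketch: given only the responses, estimate the topic-word distributions and each response's topic mixture.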

For those interested in a deeper conceptual understanding of how LDA works, I'd recommend this article, from which I borrowed the image below; I feel it does a nice job of visualizing the document-to-latent-topic and word-to-latent-topic relationships.

[Image: document-to-topic and word-to-topic relationship diagram]

For the purposes of this article we will again leverage a scikit-learn implementation of the algorithm.

Word Vectorization

First we'll need to vectorize the responses. An outline of vectorization was discussed in the previous article, so I'd point readers there if a review is needed.

In an effort to first replicate the SVD/LSA model from the first article, we will use the tf-idf methodology for vectorizing our responses. However, Blei et al. (2003) explicitly mention in their paper that tf-idf may not be necessary for LDA given the probabilistic nature of the model, so we will also compare the results from the tf-idf LDA to those extracted with a count vectorizer.

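To see the difference between the two weighting schemes on something small, here is a quick illustrative sketch on a made-up three-sentence corpus (not our Glassdoor data): raw counts treat every occurrence equally, while tf-idf down-weights words that appear in many of the sentences.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
import pandas as pd

# Hypothetical mini-corpus, for illustration only
toy_corpus = [
    "low pay and long hours",
    "management ignores feedback",
    "long hours and poor management",
]

for Vec in (CountVectorizer, TfidfVectorizer):
    vec = Vec(stop_words='english')
    X = vec.fit_transform(toy_corpus)
    # raw counts vs. tf-idf weights for the same sentences
    print(Vec.__name__)
    print(pd.DataFrame(X.toarray(), columns=vec.get_feature_names()).round(2), "\n")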

Gridsearch

Many machine learning models have parameters that can be adjusted beforehand, often referred to as hyperparameters. You can use the default values or you can test different combinations. One method for testing these combinations is a grid search. GridSearchCV stands for Grid Search Cross-Validation and is an exhaustive search, commonly referred to in computer science as a brute-force approach: it tests all combinations of the values provided. If the number of parameters is limited and the number of values per parameter is small, this can be accomplished relatively quickly. However, as the number of parameters and the number of values tested increase, the number of combinations can quickly become extremely large. In instances like this a randomized search or a Bayesian search may be preferred (see the sketch below). For this example we are only testing two hyperparameters (learning decay and number of topics), so we will leverage the brute-force method.
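As an aside, if the grid were much larger, a randomized search is a near drop-in replacement for the exhaustive search below; a minimal sketch (not run here, using the same parameter grid) might look like this:

from sklearn.decomposition import LatentDirichletAllocation as LDA
from sklearn.model_selection import RandomizedSearchCV

# Same parameter grid as the grid search below, but only a random
# subset of the combinations (n_iter of them) is actually evaluated
search_params = {'n_components': [6, 8, 10, 15, 20], 'learning_decay': [.5, .7, .9]}
random_search = RandomizedSearchCV(LDA(), param_distributions=search_params,
                                   n_iter=6, random_state=42)
# random_search.fit(vectors)  # used exactly like GridSearchCV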

In [3]:
# tf-idf vectorize the cons responses
vectorizer = TfidfVectorizer(stop_words='english')
vectors = vectorizer.fit_transform(df['cons']).todense()
vectors.shape
Out[3]:
(21453, 11986)
In [4]:
vocab = np.array(vectorizer.get_feature_names())
In [5]:
# Load the LDA model from sk-learn
from sklearn.decomposition import LatentDirichletAllocation as LDA
from sklearn.model_selection import GridSearchCV
In [8]:
# Define Search Param
search_params = {'n_components': [6, 8, 10, 15, 20], 'learning_decay': [.5, .7, .9]}

# Init the Model
lda = LDA()

# Init Grid Search Class
model = GridSearchCV(lda, param_grid=search_params)

# Do the Grid Search
model.fit(vectors)
Out[8]:
GridSearchCV(cv=None, error_score='raise',
       estimator=LatentDirichletAllocation(batch_size=128, doc_topic_prior=None,
             evaluate_every=-1, learning_decay=0.7, learning_method=None,
             learning_offset=10.0, max_doc_update_iter=100, max_iter=10,
             mean_change_tol=0.001, n_components=10, n_jobs=1,
             n_topics=None, perp_tol=0.1, random_state=None,
             topic_word_prior=None, total_samples=1000000.0, verbose=0),
       fit_params=None, iid=True, n_jobs=1,
       param_grid={'n_components': [6, 8, 10, 15, 20], 'learning_decay': [0.5, 0.7, 0.9]},
       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',
       scoring=None, verbose=0)
In [9]:
# Best Model
best_lda_model = model.best_estimator_

# Model Parameters
print("Best Model's Params: ", model.best_params_)

# Log Likelihood Score
print("Best Log Likelihood Score: ", model.best_score_)

# Perplexity
print("Model Perplexity: ", best_lda_model.perplexity(vectors))
Best Model's Params:  {'learning_decay': 0.9, 'n_components': 6}
Best Log Likelihood Score:  -180627.34542042803
Model Perplexity:  5906.962760409925
In [24]:
# Extract the mean log likelihood scores from the grid search results
n_topics = [6, 8, 10, 15, 20]
results = pd.DataFrame(model.cv_results_)
log_likelihoods_5 = results.loc[results['param_learning_decay'] == 0.5, 'mean_test_score']
log_likelihoods_7 = results.loc[results['param_learning_decay'] == 0.7, 'mean_test_score']
log_likelihoods_9 = results.loc[results['param_learning_decay'] == 0.9, 'mean_test_score']

# Plot Topics by Log Likelihood
plt.figure(figsize=(10, 6))
plt.plot(n_topics, log_likelihoods_5, label='0.5')
plt.plot(n_topics, log_likelihoods_7, label='0.7')
plt.plot(n_topics, log_likelihoods_9, label='0.9')
plt.title("Choosing Optimal LDA Model")
plt.xlabel("Num Topics")
plt.ylabel("Log Likelihood Scores")
plt.legend(title='Learning decay', loc='best');

We can see above that the best learning decay was 0.9 and the ideal number of topics was 6, so we'll go with those (unlike the 10 topics we used with SVD), but we'll stick with the top 8 words per topic like we did with SVD.

In [6]:
# Tweak the two parameters below
number_topics = 6
number_words = 8
In [7]:
# Create and fit the LDA model
lda = LDA(n_components=number_topics, n_jobs=-1, learning_decay=0.9)
%time lda.fit(vectors)
CPU times: user 50.5 s, sys: 3.7 s, total: 54.2 s
Wall time: 2min 3s
Out[7]:
LatentDirichletAllocation(batch_size=128, doc_topic_prior=None,
             evaluate_every=-1, learning_decay=0.9, learning_method=None,
             learning_offset=10.0, max_doc_update_iter=100, max_iter=10,
             mean_change_tol=0.001, n_components=6, n_jobs=-1,
             n_topics=None, perp_tol=0.1, random_state=None,
             topic_word_prior=None, total_samples=1000000.0, verbose=0)
In [13]:
# Helper function: print the top n words for each topic
def print_topics(model, n_top_words):
    words = vocab
    for topic_idx, topic in enumerate(model.components_):
        print("\nTopic #%d:" % topic_idx)
        # argsort the topic-word weights and take the n_top_words largest
        print(" ".join([words[i]
                        for i in topic.argsort()[:-n_top_words - 1:-1]]))
In [16]:
# Print the topics found by the LDA model
print("Topics found via LDA:")
print_topics(lda, number_words)
Topics found via LDA:

Topic #0:
cons dirty job ok immature experienced tired inflexible

Topic #1:
bosses occasional hit fries uniforms miss floors layoffs

Topic #2:
think really flexibility challenging downsides guests cons loved

Topic #3:
cold weather technology grease body hot outside gross

Topic #4:
work hours management pay time customers hard bad

Topic #5:
drama smell room grow advantage advancement organized costumers

As you can see, we get a lot of the same general topics that SVD gave us, including a topic that focuses on management, pay, advancement, etc. In my experience the topics from LDA tend to be a bit easier to interpret, but one of the downsides of all topic modeling is that, while you've been able to group the words together, it's still relatively difficult to do a lot with them.

Predicting a Specific Response

Let's look at which topics LDA assigns the final two responses to.

In [21]:
print(df['cons'][21451])
print("-----------")
print(df['cons'][21452])
The pay isn't great and the work is extra
-----------
Too much of focus on wage/dollars % and not guest focused, labor tight at times and added stress. Also, corporate too focused on controlling stores with micromanaging every simple task.
In [29]:
lda.transform(vectors[-2:])
Out[29]:
array([[0.05315866, 0.05373963, 0.05315866, 0.73362024, 0.05316418,
        0.05315863],
       [0.09641838, 0.76467776, 0.03470389, 0.03479211, 0.03470403,
        0.03470382]])

So, according to the LDA model, the first comment above falls mostly into Topic #3 and the second comment falls mostly into Topic #1. It could be an interesting exploration to see whether human labelers would agree with these assignments.
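If we wanted to take this beyond two responses, one simple extension (sketched below; 'dominant_topic' is just an illustrative column name) is to label every response with its highest-probability topic:

# Label every response with its highest-probability topic
# ('dominant_topic' is just an illustrative column name)
doc_topics = lda.transform(vectors)
df['dominant_topic'] = doc_topics.argmax(axis=1)

# How many responses fall predominantly into each topic
print(df['dominant_topic'].value_counts().sort_index())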

Data Visualization

One thing we did not focus on with LSA is visualizing the topics. One interesting way to visualize unsupervised learning results is to use another dimensionality reduction technique known as t-distributed stochastic neighbor embedding (t-SNE). There is a fun package I recently discovered, pyLDAvis, that leverages t-SNE to make interactive visualizations. It takes as input the model, your vectors, the vectorizer you used, and the multidimensional scaling technique you want to use; in our case we will be using t-SNE.

You can hover over each of the topic bubbles and the top-30 most relevant words will change to reflect the topic. Feel free to give it a try :)

In [11]:
import pyLDAvis
import pyLDAvis.sklearn
import matplotlib.pyplot as plt
%matplotlib inline
In [19]:
pyLDAvis.enable_notebook()
panel = pyLDAvis.sklearn.prepare(lda, vectors, vectorizer, mds='tsne')
panel
Out[19]:

As you can see these topics line up with the topics identified above; for example, topic 1 in the visualization is essentially identical to Topic #4 shown above. However, what immediately stands out is that, while we have 6 distinct topics, about 70% of the tokens in the responses fall into topic 1, with the remainder distributed fairly evenly (roughly 5-6% each) among the other 5 topics. In my experience this is fairly typical: there is generally one large topic that accounts for 50-60% of the responses, and the remaining topics are actually pretty distinct but account for much less of the overall responses. We'll likely see this again when we do K-means clustering on this same dataset in the next article.
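If you want a rough sense of those proportions outside the visualization, one approximation (pyLDAvis weights by estimated token counts, so the numbers won't match exactly) is to average the document-topic distributions:

# Rough estimate of each topic's overall share, averaged across responses
topic_share = lda.transform(vectors).mean(axis=0)
for idx, share in enumerate(topic_share):
    print("Topic #%d: %.1f%%" % (idx, share * 100))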

But... pyLDAvis is a great interactive way to visualize your topics. As you hover over each of the topic bubbles you can see the top-30 most relevant terms, as well as the estimated term frequency within each topic (which gives you an idea of the actual vs. expected frequency).
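Since the panel is just HTML and JavaScript under the hood, you can also export it to a standalone file to share with others (the filename here is just an example):

# Save the interactive visualization as a standalone HTML file
pyLDAvis.save_html(panel, 'lda_cons_topics.html')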

CountVectorizer

There is conflicting research on whether one should use tf-idf or raw counts with LDA. As mentioned above, the authors of LDA hint that tf-idf may not be necessary due to the probabilistic nature of the model, but others have found that tf-idf enhances the interpretability of topics, so I figured we'd just try both and compare :)

In [4]:
# count-vectorize the cons responses
vectorizer = CountVectorizer(stop_words='english')
vectors = vectorizer.fit_transform(df['cons']).todense()
vectors.shape
Out[4]:
(21453, 11986)
In [13]:
# Tweak the two parameters below
number_topics = 6
number_words = 8
In [14]:
vocab = np.array(vectorizer.get_feature_names())
# Load the LDA model from sk-learn
from sklearn.decomposition import LatentDirichletAllocation as LDA
In [15]:
# Create and fit the LDA model
lda = LDA(n_components=number_topics, n_jobs=-1, learning_decay=0.9)
lda.fit(vectors)
/home/nick/miniconda3/envs/tensorflow_cpu/lib/python3.6/site-packages/sklearn/decomposition/online_lda.py:536: DeprecationWarning: The default value for 'learning_method' will be changed from 'online' to 'batch' in the release 0.20. This warning was introduced in 0.18.
  DeprecationWarning)
Out[15]:
LatentDirichletAllocation(batch_size=128, doc_topic_prior=None,
             evaluate_every=-1, learning_decay=0.9, learning_method=None,
             learning_offset=10.0, max_doc_update_iter=100, max_iter=10,
             mean_change_tol=0.001, n_components=6, n_jobs=-1,
             n_topics=None, perp_tol=0.1, random_state=None,
             topic_word_prior=None, total_samples=1000000.0, verbose=0)
In [16]:
# Helper function
def print_topics(model, n_top_words):
    words = vocab
    for topic_idx, topic in enumerate(model.components_):
        print("\nTopic #%d:" % topic_idx)
        print(" ".join([words[i]
                        for i in topic.argsort()[:-n_top_words - 1:-1]]))
        
# Print the topics found by the LDA model
print("Topics found via LDA:")
print_topics(lda, number_words)
Topics found via LDA:

Topic #0:
customers rude great food management people work fast

Topic #1:
work life company employees balance cons management think

Topic #2:
shifts experience scheduling late little coworkers work opportunities

Topic #3:
time work hours management don hard job schedule

Topic #4:
management pay low hours poor bad work lack

Topic #5:
hours team work store members managers help manager
In [17]:
pyLDAvis.enable_notebook()
panel = pyLDAvis.sklearn.prepare(lda, vectors, vectorizer, mds='tsne')
panel
Out[17]:
In [19]:
# predictions

print(df['cons'][21451])
print("-----------")
print(df['cons'][21452])


lda.transform(vectors[-2:])
The pay isn't great and the work is extra
-----------
Too much of focus on wage/dollars % and not guest focused, labor tight at times and added stress. Also, corporate too focused on controlling stores with micromanaging every simple task.
Out[19]:
array([[0.22598168, 0.02796768, 0.02780984, 0.48077479, 0.02829221,
        0.20917379],
       [0.00942719, 0.31045117, 0.06957802, 0.06857515, 0.53262556,
        0.00934291]])

In my opinion, counts do seem to create more interpretable topics than tf-idf here. The predicted categories for the two responses seem to align better, and the probabilities are spread more meaningfully across topics than with the tf-idf predictions.

But, this may be specific to the problem, so it may make sense to try both when doing LDA.

Recap:


LDA is another option for topic modeling, and in general I'd consider it the most popular choice for topic modeling in the data science community. The next and final article on topic modeling will focus on K-means clustering as a third option for grouping unlabeled data into distinct clusters.
