A person involved in sports-related activities might have an online buying pattern similar to this:

[Image: an example sequence of sports-related products purchased online]

If we can represent each of these products by a vector, then we can easily find similar products. So, if a user is checking out a product online, we can recommend similar products based on the vector similarity score between products.
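A common choice of similarity score is the cosine of the angle between two product vectors. The sketch below only illustrates the idea, using made-up 3-dimensional vectors for hypothetical products rather than the 100-dimensional embeddings we train later:

import numpy as np

def cosine_similarity(a, b):
    # cosine of the angle between two vectors: close to 1 means very similar
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# hypothetical product vectors (illustrative only)
football   = np.array([0.9, 0.1, 0.0])
jersey     = np.array([0.8, 0.3, 0.1])
light_bulb = np.array([0.0, 0.2, 0.9])

print(cosine_similarity(football, jersey))      # high score -> recommend
print(cosine_similarity(football, light_bulb))  # low score -> don't recommend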

Data Gathering and Understanding

!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00352/Online%20Retail.xlsx
import pandas as pd

df = pd.read_excel('Online Retail.xlsx')
df.head()
InvoiceNo StockCode Description Quantity InvoiceDate UnitPrice CustomerID Country
0 536365 85123A WHITE HANGING HEART T-LIGHT HOLDER 6 2010-12-01 08:26:00 2.55 17850.0 United Kingdom
1 536365 71053 WHITE METAL LANTERN 6 2010-12-01 08:26:00 3.39 17850.0 United Kingdom
2 536365 84406B CREAM CUPID HEARTS COAT HANGER 8 2010-12-01 08:26:00 2.75 17850.0 United Kingdom
3 536365 84029G KNITTED UNION FLAG HOT WATER BOTTLE 6 2010-12-01 08:26:00 3.39 17850.0 United Kingdom
4 536365 84029E RED WOOLLY HOTTIE WHITE HEART. 6 2010-12-01 08:26:00 3.39 17850.0 United Kingdom

Given below is the description of the fields in this dataset:

  1. InvoiceNo: Invoice number, a unique number assigned to each transaction.

  2. StockCode: Product/item code, a unique code assigned to each distinct product.

  3. Description: Product description.

  4. Quantity: The quantity of each product per transaction.

  5. InvoiceDate: Invoice date and time, the day and time when each transaction was generated.

  6. UnitPrice: Unit price, the price per unit of the product.

  7. CustomerID: Customer number, a unique number assigned to each customer.

  8. Country: Country name, the name of the country where the customer resides.

Data Preprocessing

df.isnull().sum()
InvoiceNo           0
StockCode           0
Description      1454
Quantity            0
InvoiceDate         0
UnitPrice           0
CustomerID     135080
Country             0
dtype: int64

Since we have sufficient data, we will drop all the rows with missing values.

df.dropna(inplace=True)

# again check missing values
df.isnull().sum()
InvoiceNo      0
StockCode      0
Description    0
Quantity       0
InvoiceDate    0
UnitPrice      0
CustomerID     0
Country        0
dtype: int64
# StockCodes are a mix of numeric and alphanumeric values, so convert them all to strings
df['StockCode'] = df['StockCode'].astype(str)

customers = df["CustomerID"].unique().tolist()
len(customers)
4372

There are 4,372 customers in our dataset. For each of these customers, we will extract their buying history. In other words, we will have 4,372 sequences of purchases.
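Concretely, each sequence is just the ordered list of StockCodes from one customer's transactions, treated the same way word2vec treats a sentence of words. A hypothetical illustration, using the codes from the sample rows shown earlier:

# one customer's "sentence": the StockCodes of their purchases, in order
# (illustrative only -- the real sequences are built in the Data Preparation step below)
example_sequence = ['85123A', '71053', '84406B', '84029G', '84029E']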

Data Preparation

It is a good practice to set aside a small part of the dataset for validation purposes. Therefore, we will use the data of 90% of the customers to create the word2vec embeddings. Let's split the data.

import random

random.shuffle(customers)

# extract 90% of customer ID's
customers_train = [customers[i] for i in range(round(0.9*len(customers)))]

# split data into train and validation set
train_df = df[df['CustomerID'].isin(customers_train)]
validation_df = df[~df['CustomerID'].isin(customers_train)]

Let's create sequences of purchases made by the customers in the dataset for both the train and validation set.

from tqdm import tqdm

purchases_train = []

# populate the list with the product codes purchased by each training customer
for i in tqdm(customers_train):
    temp = train_df[train_df["CustomerID"] == i]["StockCode"].tolist()
    purchases_train.append(temp)
100%|██████████| 3935/3935 [00:05<00:00, 664.97it/s]
purchases_val = []

# populate the list with the product codes
for i in tqdm(validation_df['CustomerID'].unique()):
    temp = validation_df[validation_df["CustomerID"] == i]["StockCode"].tolist()
    purchases_val.append(temp)
100%|██████████| 437/437 [00:00<00:00, 1006.50it/s]

Build word2vec Embeddings for Products

from gensim.models import Word2Vec  # note: the pre-4.0 gensim API (init_sims, model[...] indexing) is used below

# skip-gram (sg = 1) with negative sampling (hs = 0, negative = 10)
model = Word2Vec(window = 10, sg = 1, hs = 0,
                 negative = 10, # for negative sampling
                 alpha=0.03, min_alpha=0.0007,
                 seed = 14)

model.build_vocab(purchases_train, progress_per=200)

model.train(purchases_train, total_examples = model.corpus_count, 
            epochs=10, report_delay=1)
(3657318, 3696290)
model.save("word2vec_2.model")

As we do not plan to train the model any further, we call init_sims(), which makes the model much more memory-efficient: with replace=True, the original vectors are replaced by their L2-normalised versions.

model.init_sims(replace=True)
print(model)
Word2Vec(vocab=3153, size=100, alpha=0.03)

Now we will extract the vectors of all the words in our vocabulary and store them in one place for easy access.

X = model[model.wv.vocab]

X.shape
(3153, 100)

Visualize word2vec Embeddings

It is always quite helpful to visualize the embeddings that you have created. Here we have 100-dimensional embeddings; we can't even visualize 4 dimensions, let alone 100. Therefore, we will reduce the product embeddings from 100 dimensions to 2 using UMAP, a dimensionality-reduction algorithm.

import umap
import matplotlib.pyplot as plt

cluster_embedding = umap.UMAP(n_neighbors=30, min_dist=0.0,
                              n_components=2, random_state=42).fit_transform(X)

plt.figure(figsize=(10,9))
plt.scatter(cluster_embedding[:, 0], cluster_embedding[:, 1], s=3, cmap='Spectral');

Every dot in this plot is a product. As you can see, there are several tiny clusters of these datapoints. These are groups of similar products.

Generate and Validate Recommendations

We finally have word2vec embeddings for every product in our online retail dataset. The next step is to suggest similar products for a given product or product vector.

Let's first create a product-ID and product-description dictionary to easily map a product's description to its ID and vice versa.

products = train_df[["StockCode", "Description"]].copy()

# remove duplicates, keeping the last description seen for each StockCode
products.drop_duplicates(inplace=True, subset='StockCode', keep="last")

# create product-ID and product-description dictionary
products_dict = products.groupby('StockCode')['Description'].apply(list).to_dict()
products_dict['84029E']
['RED WOOLLY HOTTIE WHITE HEART.']

We will use the function below. It takes a product's vector as input and returns the top n (here, 6) most similar products.
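A minimal sketch of such a function, assuming gensim's similar_by_vector() and the products_dict built above, could look like this:

def similar_products(v, n=6):
    # find the n most similar StockCodes for the input vector
    # (ask for n+1 and drop the first hit, which is usually the query product itself)
    ms = model.similar_by_vector(v, topn=n+1)[1:]

    # replace each StockCode with its description, keeping the similarity score
    new_ms = []
    for j in ms:
        pair = (products_dict[j[0]][0], j[1])
        new_ms.append(pair)

    return new_ms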

Let's try out our function by passing the vector of the product '90019A' ('SILVER M.O.P ORBIT BRACELET').

similar_products(model['90019A'])
[('SILVER M.O.P ORBIT DROP EARRINGS', 0.7879312634468079),
 ('AMBER DROP EARRINGS W LONG BEADS', 0.7682332992553711),
 ('JADE DROP EARRINGS W FILIGREE', 0.761816143989563),
 ('DROP DIAMANTE EARRINGS PURPLE', 0.7489826679229736),
 ('SILVER LARIAT BLACK STONE EARRINGS', 0.7389366626739502),
 ('WHITE VINT ART DECO CRYSTAL NECKLAC', 0.7352254390716553)]

Cool! The results are pretty relevant and match well with the input product. However, this output is based on the vector of a single product only. What if we want to recommend products to a user based on the multiple purchases he or she has made in the past?

One simple solution is to take the average of the vectors of all the products the user has bought so far and use this resulting vector to find similar products. For that, we will use the function below, which takes in a list of product IDs and returns a 100-dimensional vector that is the mean of the vectors of the products in the input list.

import numpy as np

def aggregate_vectors(products):
    # collect the vectors of all products that are in the word2vec vocabulary
    product_vec = []
    for i in products:
        try:
            product_vec.append(model[i])
        except KeyError:
            # skip products that are not in the vocabulary
            continue

    # return the element-wise mean of the collected vectors
    return np.mean(product_vec, axis=0)

If you recall, we have already created a separate list of purchase sequences for validation purposes. Now let's make use of that.

The length of the first purchase sequence in the validation set (i.e., the list of products bought by one user) is 314. We will pass this sequence to the aggregate_vectors function.
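The check would look roughly like this, with outputs matching the values discussed in the text:

len(purchases_val[0])
314

aggregate_vectors(purchases_val[0]).shape
(100,)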

Well, the function has returned a 100-dimensional array. It means the function is working fine. Now we can use this result to get the most similar products. Let's do it.

similar_products(aggregate_vectors(purchases_val[0]))
[('WHITE SPOT BLUE CERAMIC DRAWER KNOB', 0.6860978603363037),
 ('RED SPOT CERAMIC DRAWER KNOB', 0.6785424947738647),
 ('BLUE STRIPE CERAMIC DRAWER KNOB', 0.6783121824264526),
 ('BLUE SPOT CERAMIC DRAWER KNOB', 0.6738985776901245),
 ('CLEAR DRAWER KNOB ACRYLIC EDWARDIAN', 0.6731897592544556),
 ('RED STRIPE CERAMIC DRAWER KNOB', 0.6667704582214355)]

As it turns out, our system has recommended 6 products based on a user's entire purchase history. Moreover, if you want product suggestions based on only the last few purchases, you can use the same set of functions.

Below we are giving only the last 10 products purchased as input.

similar_products(aggregate_vectors(purchases_val[0][-10:]))
[('BLUE SPOT CERAMIC DRAWER KNOB', 0.7394766807556152),
 ('RED SPOT CERAMIC DRAWER KNOB', 0.7364704012870789),
 ('WHITE SPOT BLUE CERAMIC DRAWER KNOB', 0.7347637414932251),
 ('ASSORTED COLOUR BIRD ORNAMENT', 0.7345550060272217),
 ('RED STRIPE CERAMIC DRAWER KNOB', 0.7305896878242493),
 ('WHITE SPOT RED CERAMIC DRAWER KNOB', 0.6979628801345825)]