Spam detection

This dataset can be downloaded here

Summary

Manually identifying spam messages is a hard and time-consuming task. Can we build a model to detect whether a message is ham or spam? This data set is a compendium of SMS messages from different sources, each classified as spam or ham. We will build a model that can detect whether an SMS is relevant or not, similar to what spam filters do nowadays; NLP tools and techniques will help us do it.

  • Observations: 5,574
  • One label + one string as feature

Challenges: Natural Language Processing with Python

Data

5,574 instances

Source

Tiago A. Almeida (talmeida ufscar.br) Department of Computer Science Federal University of Sao Carlos (UFSCar) Sorocaba, Sao Paulo - Brazil

José María Gómez Hidalgo (jmgomezh yahoo.es) R&D Department Optenet Las Rozas, Madrid - Spain

Data Set Information

This corpus has been collected from free or free-for-research sources on the Internet:

  1. A collection of 425 SMS spam messages manually extracted from the Grumbletext Web site, a UK forum in which cell phone users make public claims about SMS spam, most of them without reporting the actual spam message received. Identifying the text of the spam messages in these claims is a hard and time-consuming task, and it involved carefully scanning hundreds of web pages. The Grumbletext Web site is: Web Link.
  2. A subset of 3,375 randomly chosen ham messages from the NUS SMS Corpus (NSC), a dataset of about 10,000 legitimate messages collected for research at the Department of Computer Science at the National University of Singapore. The messages largely originate from Singaporeans, mostly students attending the university. They were collected from volunteers who were made aware that their contributions were going to be made publicly available. The NUS SMS Corpus is available at Web Link.
  3. A list of 450 SMS ham messages collected from Caroline Tagg’s PhD thesis, available at Web Link.
  4. Finally, we have incorporated the SMS Spam Corpus v.0.1 Big. It has 1,002 SMS ham messages and 322 spam messages, and it is publicly available at: Web Link.

Format

The files contain one message per line. Each line is composed of two columns: one with the label (ham or spam) and the other with the raw text. Here are some examples:

  • ham What you doing?how are you?
  • ham Ok lar… Joking wif u oni…
  • ham dun say so early hor… U c already then say…
  • ham MY NO. IN LUTON 0125698789 RING ME IF UR AROUND! H*
  • ham Siva is in hostel aha:-.
  • ham Cos i was out shopping wif darren jus now n i called him 2 ask wat present he wan lor. Then he started guessing who i was wif n he finally guessed darren lor.
  • spam FreeMsg: Txt: CALL to No: 86888 & claim your reward of 3 hours talk time to use from your phone now! ubscribe6GBP/ mnth inc 3hrs 16 stop?txtStop
  • spam Sunshine Quiz! Win a super Sony DVD recorder if you canname the capital of Australia? Text MQUIZ to 82277. B
  • spam URGENT! Your Mobile No 07808726822 was awarded a L2,000 Bonus Caller Prize on 02/09/03! This is our 2nd attempt to contact YOU! Call 0871-872-9758 BOX95QU

Note: messages are not chronologically sorted.

NLP for filtering Spam SMS

import pandas as pd
messages = pd.read_csv("smsspamcollection/SMSSpamCollection",
                       sep="\t", names=["label", "message"])
messages.head()
label message
0 ham Go until jurong point, crazy.. Available only ...
1 ham Ok lar... Joking wif u oni...
2 spam Free entry in 2 a wkly comp to win FA Cup fina...
3 ham U dun say so early hor... U c already then say...
4 ham Nah I don't think he goes to usf, he lives aro...

Exploratory Data Analysis

messages.groupby("label").describe()
message
count unique top freq
label
ham 4825 4516 Sorry, I'll call later 30
spam 747 653 Please call our customer service representativ... 4

Let’s do some feature engineering and add a new column with the length of each text message:

messages["length"] = messages["message"].apply(len)
messages.head()
label message length
0 ham Go until jurong point, crazy.. Available only ... 111
1 ham Ok lar... Joking wif u oni... 29
2 spam Free entry in 2 a wkly comp to win FA Cup fina... 155
3 ham U dun say so early hor... U c already then say... 49
4 ham Nah I don't think he goes to usf, he lives aro... 61

Data visualization

import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
messages["length"].plot(bins = 40, kind = "hist")

messages.length.describe()
count    5572.000000
mean       80.489950
std        59.942907
min         2.000000
25%        36.000000
50%        62.000000
75%       122.000000
max       910.000000
Name: length, dtype: float64

910 characters!!! I need to see this message.

messages[messages["length"]==910]["message"].iloc[0]
"For me the love should start with attraction.i should feel that I need her every time around me.she should be the first thing which comes in my thoughts.I would start the day and end it with her.she should be there every time I dream.love will be then when my every breath has her name.my life should happen around her.my life will be named to her.I would cry for her.will give all my happiness and take all her sorrows.I will be ready to fight with anyone for her.I will be in love when I will be doing the craziest things for her.love will be when I don't have to proove anyone that my girl is the most beautiful lady on the whole planet.I will always be singing praises for her.love will be when I start up making chicken curry and end up makiing sambar.life will be the most beautiful then.will get every morning and thank god for the day because she is with me.I would like to say a lot..will tell later.."
messages.hist(column="length", by="label", bins=50, figsize=(12, 4))

It seems that a message is more likely to be spam when it is longer.
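A quick way to quantify this claim, as a minimal sketch using the length column created above:

# Sketch: summary statistics of message length per label
messages.groupby("label")["length"].describe()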

Text pre-processing

The first step will be writing a function that splits a message into its individual words and returns a list. I will also remove very common words ("the", "a", etc.). For this, I will use the NLTK library.

import string
from nltk.corpus import stopwords
stopwords.words("english")[0:10]#explore some stopwords
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your']
string.punctuation
'!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
def text_process(mess):
    # Remove punctuation characters
    nopunc = [char for char in mess if char not in string.punctuation]

    # Join the characters back into a single string
    nopunc = "".join(nopunc)

    # Remove stopwords and return the remaining words as a list
    return [word for word in nopunc.split() if word.lower() not in stopwords.words("english")]

Now, I need to tokenize the messages. Note that text_process produces plain tokens, not true lemmas, since no lemmatization is applied (a sketch of that optional step follows the output below).

messages["message"].head(5).apply(text_process)
0    [Go, jurong, point, crazy, Available, bugis, n...
1                       [Ok, lar, Joking, wif, u, oni]
2    [Free, entry, 2, wkly, comp, win, FA, Cup, fin...
3        [U, dun, say, early, hor, U, c, already, say]
4    [Nah, dont, think, goes, usf, lives, around, t...
Name: message, dtype: object
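If true lemmas were wanted, NLTK’s WordNetLemmatizer could be layered on top of text_process. Here is a minimal sketch of that optional extension; it assumes the WordNet corpus has been downloaded with nltk.download("wordnet") and is not used in the rest of this notebook.

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def lemmatize_tokens(tokens):
    # Reduce each token to its base dictionary form (noun by default),
    # e.g. "lives" -> "life"
    return [lemmatizer.lemmatize(word.lower()) for word in tokens]

messages["message"].head(5).apply(lambda m: lemmatize_tokens(text_process(m)))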

Vectorization

Now that we have our tokens, we need to convert each message into a vector that a model can work with. This is done in three steps (a small hand-rolled sketch follows this list):

  1. Term frequency: count how many times each word occurs in each message.
  2. Inverse document frequency: weight the counts so that frequent tokens get a lower weight. Contrary to what intuition might suggest, the most frequent words ("I", "a", …) are the least important for giving meaning to a string.
  3. Normalize the vectors to unit length, to abstract from the original text length.
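To make these steps concrete, here is a minimal hand-rolled sketch on a toy corpus. The corpus is made up for illustration; the IDF formula matches scikit-learn’s smoothed default, which we will use next.

import math

# Toy corpus (made-up example, not the SMS data)
docs = [["free", "win", "prize"], ["win", "match"], ["call", "me"]]
vocab = sorted({term for doc in docs for term in doc})

def idf(term):
    df = sum(term in doc for doc in docs)  # number of documents containing the term
    # Smoothed IDF, the same formula TfidfTransformer uses by default
    return math.log((1 + len(docs)) / (1 + df)) + 1

def tfidf_vector(doc):
    # Step 1 (term frequency) weighted by step 2 (inverse document frequency)
    vec = [doc.count(term) * idf(term) for term in vocab]
    # Step 3: normalize to unit length
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

print(tfidf_vector(docs[0]))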

Let’s begin!

from sklearn.feature_extraction.text import CountVectorizer
bow_transformer = CountVectorizer(analyzer=text_process).fit(messages["message"])

#Print total number of vocab words
print(len(bow_transformer.vocabulary_))
11425

Let’s take one text message and get its bag-of-words counts as a vector, putting our new bow_transformer to use.

message4 = messages["message"][3]
print(message4)
U dun say so early hor... U c already then say...

Now, its vector representation

bow4 = bow_transformer.transform([message4])
print(bow4)
print(bow4.shape)
  (0, 4068)	2
  (0, 4629)	1
  (0, 5261)	1
  (0, 6204)	1
  (0, 6222)	1
  (0, 7186)	1
  (0, 9554)	2
(1, 11425)

This means there are seven unique words in this message after removing stop words. Two of them appear twice and the rest appear just once.
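As a quick sanity check, the sparse indices can be mapped back to their vocabulary terms. A minimal sketch (note: on scikit-learn 1.2+ the method is get_feature_names_out() instead):

print(bow_transformer.get_feature_names()[4068])
print(bow_transformer.get_feature_names()[9554])

Let’s generalize!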

messages_bow = bow_transformer.transform(messages["message"])
print("Shape of Sparse matrix: ", messages_bow.shape)
print("Amount of Non-Zero ocurrences ", messages_bow.nnz)
Shape of Sparse matrix:  (5572, 11425)
Amount of Non-Zero ocurrences  50548
sparsity = (100.0 * messages_bow.nnz / (messages_bow.shape[0] * messages_bow.shape[1]))
print("sparsity {}".format(round(sparsity)))
sparsity 0

After counting, normalization can be done with TF-IDF (Term Frequency - Inverse Document Frequency):

from sklearn.feature_extraction.text import TfidfTransformer
tfid_transformer = TfidfTransformer().fit(messages_bow)
print(tfid_transformer.idf_[bow_transformer.vocabulary_["u"]])  # IDF weight of the common word "u"
print(tfid_transformer.idf_[bow_transformer.vocabulary_["university"]])  # IDF weight of the rare word "university"
3.28005242674
8.5270764989

As expected, the very common "u" gets a much lower IDF weight than the rare "university". Now, let’s transform the entire bag of words…

messages_tfidf = tfid_transformer.transform(messages_bow)
print(messages_tfidf.shape)
(5572, 11425)

Training a model

I am going to use the Naive Bayes classifier algorithm. It seems a good choice, since in the end we need a way to compute the probability that a message is spam or ham.

from sklearn.naive_bayes import MultinomialNB
spam_detect_model = MultinomialNB().fit(messages_tfidf, messages["label"])
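Since the motivation was probabilities, note that MultinomialNB exposes them directly. A quick sketch, reusing bow4 and tfid_transformer from above:

tfidf4 = tfid_transformer.transform(bow4)
print(spam_detect_model.predict(tfidf4)[0])         # predicted label
print(spam_detect_model.predict_proba(tfidf4)[0])   # probabilities, ordered as classes_
print(spam_detect_model.classes_)                   # class ordering, e.g. ['ham' 'spam']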

Model Evaluation

all_predictions = spam_detect_model.predict(messages_tfidf)
print(all_predictions)
['ham' 'ham' 'spam' ..., 'ham' 'ham' 'ham']
from sklearn.metrics import classification_report
print (classification_report(messages["label"], all_predictions))
             precision    recall  f1-score   support

        ham       0.98      1.00      0.99      4825
       spam       1.00      0.85      0.92       747

avg / total       0.98      0.98      0.98      5572

Not bad! But since the model was evaluated on the same data it was trained on, it is not clear whether it is overfitted and has simply memorized the messages, or whether we really have a good ham/spam predictor. Because of that, I am going to repeat the process with a proper train/test split.

Train Test split

from sklearn.model_selection import train_test_split
msg_train, msg_test, label_train, label_test = train_test_split(messages["message"],
                                                                messages["label"], test_size=0.3)
print(len(msg_train), len(msg_test), len(msg_train) + len(msg_test) )
3900 1672 5572

Create a pipeline

from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ('bow', CountVectorizer(analyzer=text_process)),  # strings to token integer counts
    ('tfidf', TfidfTransformer()),  # integer counts to weighted TF-IDF scores
    ('classifier', MultinomialNB()),  # train on TF-IDF vectors w/ Naive Bayes classifier
])
pipeline.fit(msg_train, label_train)
predictions = pipeline.predict(msg_test)
# Note: classification_report expects (y_true, y_pred); passing predictions first,
# as here, swaps the precision and recall columns and computes support over the
# predicted labels rather than the true ones.
print(classification_report(predictions, label_test))
             precision    recall  f1-score   support

        ham       1.00      0.96      0.98      1527
       spam       0.71      1.00      0.83       145

avg / total       0.98      0.97      0.97      1672
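As a final usage example, the fitted pipeline accepts raw strings directly, so a brand-new message (made up here for illustration) can be classified in one call:

# Sketch: classify a new raw message end-to-end with the fitted pipeline
print(pipeline.predict(["WINNER!! You have been selected for a free prize, text CLAIM to 80082"])[0])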

Done!