CS 662 Assignment 6: Naive Bayes Spam Classification
Due: 4/22/2013
Naive Bayes Spam Classification
In this problem, you will implement a Naive Bayes classifier in Python
that can distinguish between spam and non-spam, or "ham".
Your program should be able to train on a set of spam and a set of
"ham." This training should include counting the frequency of each
token in both spam and ham corpora.
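The training step above can be sketched roughly as follows. This is a minimal illustration, not a required design: the directory layout, the `tokenize` helper, and the choice of lowercase alphanumeric tokens are all assumptions you are free to change.

```python
# Sketch of training: count token frequencies in a directory of emails.
# The tokenization rule here (lowercase alphanumeric runs) is a simple
# illustrative choice, not a requirement of the assignment.
import os
import re
from collections import Counter

def tokenize(text):
    # Split into lowercase tokens of letters, digits, and apostrophes.
    return re.findall(r"[a-z0-9']+", text.lower())

def count_tokens(directory):
    # Accumulate token counts over every file in the directory.
    counts = Counter()
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        with open(path, errors="ignore") as f:
            counts.update(tokenize(f.read()))
    return counts
```

You would run this once on the spam training directory and once on the ham training directory to get the two frequency tables.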
Your program should then be able to classify an unseen email as either
spam or ham by computing the MAP hypothesis:
P(spam | t1, t2, ..., tn) = alpha * P(t1, t2, ..., tn | spam) P(spam)
which we'll estimate as:
alpha * P(t1 | spam) P(t2 | spam) ... P(tn | spam) P(spam)
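In code, the per-token estimate is usually computed in log space so that the product of many small probabilities does not underflow. The sketch below also uses Laplace (add-one) smoothing so unseen tokens do not zero out the whole product; both the function names and the smoothing choice are assumptions, not requirements.

```python
# Hedged sketch of Naive Bayes classification using the per-token estimate:
# log P(class) + sum_i log P(t_i | class), with add-one smoothing.
import math
from collections import Counter

def log_posterior(tokens, counts, total, vocab_size, prior):
    # Unnormalized log posterior for one class; alpha cancels when comparing.
    score = math.log(prior)
    for t in tokens:
        # Add-one smoothing: unseen tokens get a small nonzero probability.
        score += math.log((counts[t] + 1) / (total + vocab_size))
    return score

def classify(tokens, spam_counts, ham_counts, p_spam=0.5):
    # Compare the two log posteriors and pick the larger (the MAP choice).
    vocab = set(spam_counts) | set(ham_counts)
    s = log_posterior(tokens, spam_counts, sum(spam_counts.values()),
                      len(vocab), p_spam)
    h = log_posterior(tokens, ham_counts, sum(ham_counts.values()),
                      len(vocab), 1 - p_spam)
    return "spam" if s > h else "ham"
```

Because only the comparison matters, the normalizing constant alpha never needs to be computed.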
You may build your program however you like, although we will still
expect you to use the good programming and design practices you have
learned throughout the semester. The only requirement is that we be
able to run your program exactly like this:
python ./nb.py --hamtrain=dir1 --spamtrain=dir2 --hamtest=dir3 --spamtest=dir4
Where dir1-4 are directories containing ham and spam emails used for
training and testing.
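One straightforward way to accept exactly these flags is the standard-library argparse module; the function name below is illustrative, not mandated.

```python
# Sketch of command-line handling for the required invocation:
# python ./nb.py --hamtrain=dir1 --spamtrain=dir2 --hamtest=dir3 --spamtest=dir4
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Naive Bayes spam classifier")
    parser.add_argument("--hamtrain", required=True,
                        help="directory of ham training emails")
    parser.add_argument("--spamtrain", required=True,
                        help="directory of spam training emails")
    parser.add_argument("--hamtest", required=True,
                        help="directory of ham test emails")
    parser.add_argument("--spamtest", required=True,
                        help="directory of spam test emails")
    return parser.parse_args(argv)
```

argparse accepts both `--hamtrain=dir1` and `--hamtrain dir1`, so either spelling of the command line will work.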
Your program must print out its results in the
following format:
Size of ham training set: 500 emails
Size of spam training set: 700 emails
Percentage of ham classified correctly: 98.2
Percentage of spam classified correctly: 97.0
Total accuracy: 97.5
False Positives: 1.8
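A sketch of producing this report is below. It assumes a "false positive" means a ham email classified as spam, so the false-positive percentage is 100 minus the ham accuracy; check that this reading matches your understanding before relying on it.

```python
# Sketch of the required output format. Training-set sizes come from the
# training directories; the percentages come from the test sets. The
# false-positive definition (ham labeled spam) is an assumption.
def report(ham_train_n, spam_train_n,
           ham_test_total, ham_test_correct,
           spam_test_total, spam_test_correct):
    ham_pct = 100.0 * ham_test_correct / ham_test_total
    spam_pct = 100.0 * spam_test_correct / spam_test_total
    total_pct = (100.0 * (ham_test_correct + spam_test_correct)
                 / (ham_test_total + spam_test_total))
    false_pos = 100.0 - ham_pct  # ham wrongly labeled as spam
    print(f"Size of ham training set: {ham_train_n} emails")
    print(f"Size of spam training set: {spam_train_n} emails")
    print(f"Percentage of ham classified correctly: {ham_pct:.1f}")
    print(f"Percentage of spam classified correctly: {spam_pct:.1f}")
    print(f"Total accuracy: {total_pct:.1f}")
    print(f"False Positives: {false_pos:.1f}")
```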
There are several public repositories of email that can be used for
training and testing. We will be using the SpamAssassin
public corpus to evaluate your classifier.
The SpamAssassin corpus contains spam, easy ham, and hard ham. We'll
use the hard ham to test your classifier.
Details
You will find that there are a lot of decisions and tweaks you can
make to influence the performance of your classifier. For example,
should you look at all terms, or just English words? Should you treat
headers differently? When classifying an email, should you use all of
its words, or just the most significant? What are reasonable priors
for spam and non-spam? What about trying to parse the email and only
use some chunks? These decisions are up to you: you are encouraged to
experiment as much as possible.
Naive Bayes is a popular approach to spam filtering; you will find
many resources on the Web. You are welcome to use the
ideas (but NOT the code) from any outside source, with the following
caveat:
You MUST give appropriate credit to any ideas you discover
elsewhere. For example, if you read Graham's article and notice that
he only uses the 15 most significant words in classifying an unseen
email and decide to take this approach, you should indicate in your
report (see below) that this idea is from Graham's
article. Include author and URL whenever possible. Students who use
other people's ideas without proper attribution will receive an
automatic zero.
Grading
This part of your assignment will be graded as follows:
- 60 points: correctness, completeness, and style. The usual sorts of
things.
- 30 points: performance. To compute your score on this, we will run
each student's classifier on a dataset of our choosing and compute
the following metric: accuracy - percentage of false positives. We will
then score your performance as follows:
  - More than 1 standard deviation above class average: 30
    points. (or above average, if >1 stdev is not possible)
- Within one standard deviation of class average: 28 points.
- More than 1 standard deviation below class average: 20
points.
- 10 points: Evaluation. You should check into Subversion a short (approx. 1 page)
  document that describes the details of your approach and the
  accuracy of your classifier.