Simple Text Classification using Keras Deep Learning Python Library – Step By Step Guide

Deep Learning is everywhere. Organizations big and small are trying to leverage the technology and invent some cool solutions. In this article, we will do text classification using Keras, a Deep Learning Python library.

Why Keras?

There are many deep learning frameworks available, such as TensorFlow and Theano. So why do I prefer Keras? The most important reason is its simplicity. Keras is a high-level API library that runs on top of one of these frameworks as its backend.

By default, it uses TensorFlow. So, in short, you get the power of your favorite deep learning framework while keeping the learning curve minimal. Keras is easy to learn and easy to use.

Text Classification Using Keras:

Let’s go through it step by step:

Software Used

This tutorial uses Python with Keras running on the TensorFlow backend. Along with this, I have also installed a few other needed Python packages such as numpy, scipy, scikit-learn, and pandas.
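If you are setting up a fresh environment, an install along these lines should cover everything used below (versions are not pinned here and may differ from what the original post used):

    pip install keras tensorflow numpy scipy scikit-learn pandas matplotlib h5py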

Preparing Dataset

For our demonstration purpose, we will use the 20 Newsgroups dataset, which is freely available on the internet. The data is categorized into 20 newsgroups, and our job will be to predict the category of each document. A few of the categories are very closely related, for example comp.sys.ibm.pc.hardware and comp.sys.mac.hardware, or talk.religion.misc and soc.religion.christian.

Generally, for deep learning, we split the data into training and test sets. We will do that in our code. Apart from that, we will also keep a couple of files aside so that we can feed this unseen data to our model for actual prediction. The data comes separated into two directories, 20news-bydate-train and 20news-bydate-test.

Importing Required Packages
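The original import block is not reproduced here; a minimal set that covers the steps in this article might look like the following (with newer TensorFlow releases, the same Keras classes live under tensorflow.keras):

    import os
    import pickle

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import LabelBinarizer

    from keras.models import Sequential, load_model
    from keras.layers import Dense, Dropout, Activation
    from keras.preprocessing.text import Tokenizer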

Loading data from files to Python variables

In our case, the data is not available as a CSV. We have one text file per document, and the directory in which the file is kept is its label or category. So we will first iterate through the directory structure and create a dataset that can be further utilized in training our model.

scikit-learn has a load_files method that can give us the raw data as well as the labels and label indices. For our example, however, we will not load the data in one go; we will iterate over the files and prepare a pandas DataFrame, as sketched below.
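A sketch of that loop, assuming the training data sits in a local 20news-bydate-train directory (the path, encoding, and column names are illustrative):

    def load_newsgroups_dir(root_dir):
        # Each sub-directory name under root_dir is the category label
        rows = []
        for category in sorted(os.listdir(root_dir)):
            category_dir = os.path.join(root_dir, category)
            if not os.path.isdir(category_dir):
                continue
            for filename in os.listdir(category_dir):
                path = os.path.join(category_dir, filename)
                # The 20 Newsgroups files are not valid UTF-8, so read them as latin-1
                with open(path, encoding='latin-1') as f:
                    text = f.read()
                rows.append({'filename': filename, 'category': category, 'text': text})
        return pd.DataFrame(rows)

    data = load_newsgroups_dir('20news-bydate-train')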

At the end of the above code, we will have a DataFrame that holds the filename, category, and actual text of each file.

Note: The above approach of loading everything into memory works here because the data volume is not huge. If you need to train on a huge dataset, you should consider a batch-generator approach instead, where the data is fed to your model in small batches, as sketched below.
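That approach is not needed for this article, but a minimal sketch of such a generator, built on keras.utils.Sequence (the class name and batch size are illustrative), could look like this:

    from keras.utils import Sequence

    class TextBatchGenerator(Sequence):
        # Feeds the model one small batch of already-vectorized samples at a time
        def __init__(self, x, y, batch_size=32):
            self.x, self.y, self.batch_size = x, y, batch_size

        def __len__(self):
            return int(np.ceil(len(self.x) / self.batch_size))

        def __getitem__(self, idx):
            batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
            return self.x[batch], self.y[batch]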

Split Data for Train and Test

We will keep 80% of our data for training and the remaining 20% for testing and validation.
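One way to do this is scikit-learn's train_test_split; the 0.2 test fraction matches the 80/20 split above, and the random seed is arbitrary:

    train_texts, test_texts, train_tags, test_tags = train_test_split(
        data['text'], data['category'], test_size=0.2, random_state=42)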

Tokenize and Prepare Vocabulary

When we classify texts, we first pre-process the text using the Bag of Words method. Keras comes with an inbuilt Tokenizer that can be used to convert your text into numeric vectors; its texts_to_matrix method does exactly that, as sketched below.
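A sketch of the tokenization step; the vocabulary size and the 'tfidf' matrix mode are assumptions (the 'binary' and 'count' modes work as well):

    vocab_size = 15000  # assumed value; tune to your data

    tokenizer = Tokenizer(num_words=vocab_size)
    tokenizer.fit_on_texts(train_texts)  # build the vocabulary from the training text only

    x_train = tokenizer.texts_to_matrix(train_texts, mode='tfidf')
    x_test = tokenizer.texts_to_matrix(test_texts, mode='tfidf')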

Pre-processing Output Labels / Classes

As we have converted our text to numeric vectors, we also need to make sure our labels are represented in the numeric format accepted by the neural network model. The prediction is all about assigning a probability to each label, so we need to convert our labels to one-hot vectors.

scikit-learn has a LabelBinarizer class which makes it easy to build these one-hot vectors.
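For example:

    encoder = LabelBinarizer()
    encoder.fit(train_tags)
    y_train = encoder.transform(train_tags)  # each label becomes a one-hot vector
    y_test = encoder.transform(test_tags)
    num_classes = len(encoder.classes_)      # 20 for the full 20 Newsgroups data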

Build Keras Model and Fit

The Keras Sequential model API lets us easily define our model. It provides easy configuration for the shape of our input data and the type of layers that make up our model. I came up with the model below after some trials with the vocab size, epochs, and Dropout layers.
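The exact architecture from the original post is not reproduced here; a comparable sketch (the layer sizes and dropout rate are assumptions) looks like this:

    model = Sequential()
    model.add(Dense(512, input_shape=(vocab_size,)))
    model.add(Activation('relu'))
    model.add(Dropout(0.3))
    model.add(Dense(num_classes))
    model.add(Activation('softmax'))

    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])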

Here is a snippet of the fit call and the test-accuracy evaluation.
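A sketch of those calls; the batch size, epoch count, and validation split are assumptions, and the accuracy you see will depend on your run:

    batch_size = 32   # assumed
    epochs = 10       # assumed

    history = model.fit(x_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        verbose=1,
                        validation_split=0.1)

    score = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=1)
    print('Test accuracy:', score[1])

    # Predict a few files from the test set and map probabilities back to label names
    text_labels = encoder.classes_
    for i in range(5):
        prediction = model.predict(np.array([x_test[i]]))
        predicted_label = text_labels[np.argmax(prediction)]
        print('Actual:', test_tags.iloc[i], ' Predicted:', predicted_label)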

Evaluate model

After the fit method trains on our dataset, we evaluate our model as shown above. Above, we also tried to predict a few files from the test set. The text_labels are generated by our LabelBinarizer.

Confusion Matrix

The confusion matrix is one of the best ways to visualize the accuracy of your model. Check the matrix from our training below.
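The original post shows the matrix as an image; a sketch that computes and plots an equivalent one with scikit-learn and matplotlib is below:

    from sklearn.metrics import confusion_matrix
    import matplotlib.pyplot as plt

    y_pred = model.predict(x_test)
    cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_pred, axis=1))

    plt.figure(figsize=(10, 10))
    plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
    plt.title('Confusion matrix')
    plt.colorbar()
    ticks = np.arange(num_classes)
    plt.xticks(ticks, text_labels, rotation=90)
    plt.yticks(ticks, text_labels)
    plt.xlabel('Predicted label')
    plt.ylabel('True label')
    plt.show()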

Saving the Model

Usually, the use case for deep learning is that training happens in one session and prediction happens later using the trained model. The code below saves the model as well as the tokenizer. We have to save our tokenizer because it holds our vocabulary; the same tokenizer and vocabulary have to be used for accurate prediction.

Keras doesn’t have any utility method to save the Tokenizer along with the model, so we have to serialize it separately.
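A sketch of both saves (the file names are illustrative):

    # Save the trained model; the HDF5 file keeps architecture, weights and optimizer state together
    model.save('my_model.h5')

    # Save the tokenizer (our vocabulary) separately with pickle
    with open('tokenizer.pickle', 'wb') as handle:
        pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)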

Loading the Keras model

The prediction environment also needs to be aware of the labels, in the exact order in which they were encoded. You can get them from the LabelBinarizer's classes_ attribute and store them for future reference, as sketched below.
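A sketch of the loading step; if prediction runs in a separate session, the label array should be persisted the same way as the tokenizer:

    # Reload the trained model and the tokenizer in the prediction environment
    model = load_model('my_model.h5')

    with open('tokenizer.pickle', 'rb') as handle:
        tokenizer = pickle.load(handle)

    # The label names, in the exact order they were encoded
    text_labels = encoder.classes_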

Prediction

As mentioned earlier, we have set aside a couple of files for actual testing. We will read them, tokenize them using our loaded tokenizer, and predict the probable category, as sketched below.
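A sketch of that prediction step; the file paths are placeholders for whichever files you kept aside:

    # Files we kept aside earlier (placeholder paths)
    unseen_files = ['unseen/comp.graphics/sample1.txt', 'unseen/sci.space/sample2.txt']

    texts = []
    for path in unseen_files:
        with open(path, encoding='latin-1') as f:
            texts.append(f.read())

    # Vectorize with the SAME tokenizer and vocabulary used during training
    x_unseen = tokenizer.texts_to_matrix(texts, mode='tfidf')

    predictions = model.predict(x_unseen)
    for path, probs in zip(unseen_files, predictions):
        print(path, '->', text_labels[np.argmax(probs)])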

Output

Since the directory name is the true label for each file, we can confirm that the predictions above are accurate.

Conclusion

In this article, we’ve built a simple yet powerful neural network using the Keras Python library. We have also seen how easy it is to load the saved model and do prediction on completely unseen data.

The complete source code is available to download from our GitHub repo.
