A Quick and Easy Guide to Deep Learning with Java – Building a Neural Network

In this article, we will take a brief look at deep learning with Java. We are going to build our first simple neural network using Deeplearning4j (DL4J), an open-source, distributed deep learning library for the JVM.

Introduction to Deep Learning with Java

Machine learning is taking over the web and penetrating almost every aspect of daily life. Deep learning is one of the branches of machine learning. It deals with algorithms and processing inspired by the structure and function of the human brain and its neural networks.

When your network model has more than 2 layers (that is, one or more hidden layers in addition to the input and output layers), it is considered a deep neural network.

Neural networks have multiple layers, and each layer has many interconnected nodes. A node is where the actual computation happens, and its output is fed to the nodes in the next layer. Each node is associated with an activation function that decides the output of that node.

When you build your neural network, you have to train it with a large dataset. Training enables your model to make accurate predictions when you supply new input data.

For demonstration purposes, we will use the classic problem of classifying Iris flowers.

Software Used

  • Spring Boot 1.5.9.RELEASE
  • Deeplearning4j
  • Java 8
  • Maven

Maven Dependencies
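
The original dependency listing is not reproduced here; the fragment below is a sketch of what a DL4J setup of that era typically needs. The version number 0.9.1 is an assumption based on the Spring Boot 1.5.x timeframe:

```xml
<!-- Core Deeplearning4j library (version is an assumption) -->
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-core</artifactId>
    <version>0.9.1</version>
</dependency>
<!-- ND4J CPU backend that DL4J runs on -->
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native-platform</artifactId>
    <version>0.9.1</version>
</dependency>
```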

Preparing the Dataset

As mentioned earlier, every neural network needs data to train on. For our model, we will use the dataset available at https://archive.ics.uci.edu/ml/datasets/iris

The data is in the form of a CSV file with a total of 150 records. Of these, we will choose 3 random records for testing and use the rest to train our model. So I have created 2 files for this data: iris.csv and iris-test.csv. The data in the files looks like this:
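
For reference, here are a few representative rows from the Iris dataset, one per class. The UCI file uses string labels in the last column; below they are shown encoded as integers, which is the form a CSV reader for DL4J handles most easily:

```
5.1,3.5,1.4,0.2,0
7.0,3.2,4.7,1.4,1
6.3,3.3,6.0,2.5,2
```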

The last column is a classifier and the classification is:
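
The three class labels in the dataset, with the integer encoding commonly used when feeding them to DL4J:

```
0 -> Iris-setosa
1 -> Iris-versicolor
2 -> Iris-virginica
```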

Reading the Data

Deeplearning4j uses the DataVec library to read data from different sources and convert it into a machine-readable format, i.e., numbers. These numbers are called vectors, and the process is called vectorization.

In our example we are dealing with CSV, so we will use CSVRecordReader. We will put together a simple utility function that accepts a file path, batch size, label index, and number of classes.
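
A sketch of such a utility, using DataVec's CSVRecordReader and DL4J's RecordReaderDataSetIterator (the class and method names here are my own choices; the API calls are the real DL4J 0.9.x ones):

```java
import java.io.File;
import java.io.IOException;

import org.datavec.api.records.reader.RecordReader;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.split.FileSplit;
import org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class IrisDataReader {

    // Reads a CSV file and returns an iterator over vectorized DataSet batches
    public static DataSetIterator readCsvDataset(String csvPath, int batchSize,
                                                 int labelIndex, int numClasses)
            throws IOException, InterruptedException {
        // Default CSVRecordReader: comma-delimited, no header lines skipped
        RecordReader recordReader = new CSVRecordReader();
        recordReader.initialize(new FileSplit(new File(csvPath)));
        // Column `labelIndex` is treated as the class label with `numClasses` possible values
        return new RecordReaderDataSetIterator(recordReader, batchSize, labelIndex, numClasses);
    }
}
```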

The above function will be used to read both the training and the test data.

The training data we have for Iris flowers is labeled, which means someone took the pains to classify the training data into 3 different classes. So we need to tell our CSV reader which column holds the label. We also need to specify the total number of possible classes, along with the batch size to read.

Note above that I have shuffled the training data so that our model is not affected by the ordering of the records. We will not shuffle the test data, as we need to refer to those records later in the example.
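
Putting this together for the Iris files (the utility name readCsvDataset and the variable names are my own; DataSet.shuffle() is the real ND4J call):

```java
int labelIndex = 4;   // the 5th column holds the class label
int numClasses = 3;   // setosa, versicolor, virginica
int batchSize = 150;  // small dataset, so read it all in one batch

DataSet trainingData = readCsvDataset("iris.csv", batchSize, labelIndex, numClasses).next();
trainingData.shuffle();  // remove any effect of record ordering

// The test data is NOT shuffled, so predictions can be mapped back by position
DataSet testData = readCsvDataset("iris-test.csv", 3, labelIndex, numClasses).next();
```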

For our demonstration and testing purposes, we need to convert the test CSV data to objects. We will define a simple class to map the columns of the test data:

Now let’s write a simple utility method to get our objects.

This will read the dataset and create an object for each record:
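
A minimal sketch of both pieces, the mapping class and the reader utility. The class name Iris, the method name readIrisRecords, and the getter names are my own choices:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class Iris {
    private final double sepalLength;
    private final double sepalWidth;
    private final double petalLength;
    private final double petalWidth;
    private String species;  // filled in later with the predicted class

    public Iris(double sepalLength, double sepalWidth,
                double petalLength, double petalWidth) {
        this.sepalLength = sepalLength;
        this.sepalWidth = sepalWidth;
        this.petalLength = petalLength;
        this.petalWidth = petalWidth;
    }

    public double getSepalLength() { return sepalLength; }
    public double getSepalWidth()  { return sepalWidth; }
    public double getPetalLength() { return petalLength; }
    public double getPetalWidth()  { return petalWidth; }
    public String getSpecies()     { return species; }
    public void setSpecies(String species) { this.species = species; }

    // Reads the test CSV and maps the first four columns of each row to an Iris object
    public static List<Iris> readIrisRecords(String csvPath) throws IOException {
        List<Iris> records = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get(csvPath))) {
            if (line.trim().isEmpty()) continue;
            String[] cols = line.split(",");
            records.add(new Iris(Double.parseDouble(cols[0]), Double.parseDouble(cols[1]),
                                 Double.parseDouble(cols[2]), Double.parseDouble(cols[3])));
        }
        return records;
    }
}
```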


Normalizing the Data

In deep learning, all input data has to be scaled to a specific range, typically [0, 1] or [-1, 1]. This process is called normalization.
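
With DL4J this can be done with one of the built-in normalizers. A sketch (NormalizerStandardize standardizes to zero mean and unit variance, while NormalizerMinMaxScaler would scale to [0, 1]; the variable names are my own):

```java
DataNormalization normalizer = new NormalizerStandardize();
normalizer.fit(trainingData);        // collect statistics (mean/std) from the training data
normalizer.transform(trainingData);  // normalize the training data in place
normalizer.transform(testData);      // apply the same statistics to the test data
```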

Building the Network Model

Deeplearning4j provides an elegant way to build your neural networks with NeuralNetConfiguration. Though it looks simple, a lot happens in the background, and there are many parameters you can experiment with while building your model.
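
A sketch of such a configuration for the Iris problem (4 input features, 3 output classes), in the builder style of the DL4J 0.9.x API. The layer sizes and hyperparameter values below are illustrative, not necessarily the article's exact ones:

```java
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .iterations(1000)                        // optimization passes over the training set
        .activation(Activation.TANH)             // default activation for the layers
        .weightInit(WeightInit.XAVIER)           // initial weight distribution
        .learningRate(0.1)
        .regularization(true).l2(1e-4)           // L2 penalty against overfitting
        .list()
        .layer(0, new DenseLayer.Builder().nIn(4).nOut(3).build())
        .layer(1, new DenseLayer.Builder().nIn(3).nOut(3).build())
        .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .activation(Activation.SOFTMAX)  // outputs a probability per class
                .nIn(3).nOut(3).build())
        .backprop(true).pretrain(false)
        .build();

MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();
```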

iterations() – This specifies the number of optimization iterations, i.e., multiple passes over the training set.

activation() – This is the function that runs inside a node to determine its output. DL4J supports many such activation functions.

weightInit() – This method specifies one of the many ways to set up the initial weights for the network.

learningRate() – This is one of the crucial parameters to set. It decides how quickly your model converges toward the desired results. You typically need to try many values before arriving at one that gives nearly correct results.

regularization() – Sometimes during training the model overfits and produces bad results on actual data. Regularization penalizes the model for overly complex weights to reduce overfitting.

Train the Network Model

Training the model is as easy as calling the fit method on it. You can also set listeners to log the score as training progresses.
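
A sketch of this step (ScoreIterationListener is DL4J's built-in logging listener; the interval of 100 is arbitrary):

```java
model.setListeners(new ScoreIterationListener(100)); // log the loss score every 100 iterations
model.fit(trainingData);                             // train on the normalized training set
```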

Evaluating the Model

For evaluation, you need to provide the number of possible output classes. You take the features (the data excluding the labels) from your test data and pass them through your model. When you print the stats, you will see something like:
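
A sketch of the evaluation step, assuming the model and test DataSet from earlier (Evaluation and model.output() are the real DL4J calls):

```java
Evaluation eval = new Evaluation(3);                          // 3 possible output classes
INDArray output = model.output(testData.getFeatureMatrix());  // features only, no labels
eval.eval(testData.getLabels(), output);                      // compare predictions to true labels
System.out.println(eval.stats());  // accuracy, precision, recall, F1 and the confusion matrix
```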

The model does not predict the actual class for you. It only assigns a higher value to the class it considers most likely. If we print the raw output, it looks like this:

So what does this tell us? We mentioned that there are 3 possible classes for each result. Our model returns an INDArray with a value at each of the indices (0, 1, 2). These indices correspond to the classes we defined earlier.

Classify the Results

We will write a simple utility to get the index of the maximum value in a particular row. With this index, we can fetch the actual class name. Remember, we did not shuffle the test data; that is because we need to map each test record to its output prediction.
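
A minimal sketch of such a utility (the class, method, and array names are my own choices):

```java
public class ResultClassifier {

    // Class names in the same order as the label indices 0, 1, 2
    private static final String[] CLASSES =
            {"Iris-setosa", "Iris-versicolor", "Iris-virginica"};

    // Returns the index of the largest value in a row of prediction scores
    public static int maxIndex(double[] scores) {
        int best = 0;
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) {
                best = i;
            }
        }
        return best;
    }

    // Maps one row of model output to its predicted class name
    public static String classify(double[] scores) {
        return CLASSES[maxIndex(scores)];
    }
}
```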

The above utility methods populate each Iris object with its predicted class. Below is the output:


In this article, we built a simple yet powerful neural network using the Deeplearning4j library.

The complete source code is available to download from our GitHub repo.

Download Code



