simple-neural-network

2023-02-15

Simple neural network

Upstream URL: github.com/glv2/simple-neural-network
Author: Guillaume Le Vaillant
License: GPL-3

README
Simple Neural Network

1. Description

simple-neural-network is a Common Lisp library for creating, training and using basic neural networks. The networks created by this library are feedforward neural networks trained using backpropagation. The activation function used by the neurons is A(x) = 1.7159 * tanh(0.66667 * x).
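
This activation can be written directly in Lisp. A minimal sketch (the function name is illustrative, not part of the API):

;; The scaled tanh activation described above (illustrative only).
;; The constants are chosen so that A(1) and A(-1) are approximately
;; 1 and -1 respectively.
(defun activation (x)
  (* 1.7159d0 (tanh (* 0.66667d0 x))))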

simple-neural-network depends on the cl-store and lparallel libraries.

2. License

simple-neural-network is released under the GPL-3 license. See the LICENSE file for details.

3. API

The functions are in the simple-neural-network package. You can use the shorter snn nickname if you prefer.

The library works with double floats. Your inputs and targets must therefore be vectors of double-float numbers. For better results, they should also be normalized to contain values between -1 and 1. The find-normalization helper function can be used to generate normalization and denormalization functions from sample inputs, but it might not be suited to every use case.
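
For example, raw integer data can be coerced to the representation the library expects with a small helper (the helper name is hypothetical):

;; Hypothetical helper: coerce a sequence of numbers to a vector
;; of double-floats.
(defun to-double-floats (sequence)
  (map 'vector (lambda (x) (coerce x 'double-float)) sequence))

(to-double-floats #(0 1 2))
-> #(0.0d0 1.0d0 2.0d0)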

If lparallel:*kernel* is set or bound, some computations will be done in parallel. This is only useful if the network is big enough, because the overhead of task management can instead slow things down for small networks.
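
For example, to enable parallelism with four worker threads:

;; Bind a kernel before using the network; computations on large
;; layers will then be split between the workers.
(setf lparallel:*kernel* (lparallel:make-kernel 4))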

(create-neural-network input-size output-size &rest hidden-layers-sizes)

Create a neural network having input-size inputs, output-size outputs, and optionally some intermediary layers whose sizes are specified by hidden-layers-sizes. The neural network is initialized with random weights and biases.
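
For example, a network with 3 inputs, 2 outputs and two hidden layers of 16 and 8 neurons (the sizes are arbitrary):

(defvar net (snn:create-neural-network 3 2 16 8))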

(train neural-network inputs targets learning-rate
       &key batch-size momentum-coefficient)

Train the neural-network with the given learning-rate and momentum-coefficient using some inputs and targets. The weights are updated every batch-size inputs.
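
A sketch of a training call on the network created above, assuming inputs and targets are lists of double-float vectors (the keyword values are arbitrary):

;; One pass over the data: weights are updated every 10 inputs,
;; with a momentum coefficient of 0.9.
(snn:train net inputs targets 0.01d0
           :batch-size 10
           :momentum-coefficient 0.9d0)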

(predict neural-network input &optional output)

Return the output computed by the neural-network for a given input. If output is not nil, the result is written into it; otherwise a new vector is allocated.
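
For example, the output vector can be preallocated and reused across calls to avoid repeated allocation (a sketch; input is a hypothetical double-float vector and the network from the sketch above has 2 outputs):

;; Reuse the same output vector for every prediction.
(let ((output (make-array 2 :element-type 'double-float
                            :initial-element 0.0d0)))
  (snn:predict net input output))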

(store neural-network place)

Store the neural-network to place, which must be a stream or a pathname-designator.

(restore place)

Restore the neural network stored in place, which must be a stream or a pathname-designator.
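
For example (the file name is arbitrary):

;; Save the trained network to a file, then restore it later,
;; possibly in another session.
(snn:store net #p"network.store")
(defvar restored-net (snn:restore #p"network.store"))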

(copy neural-network)

Return a copy of the neural-network.

(index-of-max-value values)

Return the index of the greatest value in values.
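
For example:

(snn:index-of-max-value #(-0.3d0 0.9d0 -0.7d0))
-> 1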

(same-category-p output target)

Return t if calls to index-of-max-value on output and target return the same value, and nil otherwise. This function is only useful when the neural network was trained to classify the inputs into several categories (when targets contain a 1 for the correct category and a -1 for all the other categories).
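
For example, an output and a target agreeing on the first category:

(snn:same-category-p #(0.8d0 -0.5d0) #(1.0d0 -1.0d0))
-> t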

(accuracy neural-network inputs targets &key test)

Return the rate of good guesses computed by the neural-network when testing it with some inputs and targets. test must be a function taking an output and a target, and returning t if the output is considered close enough to the target, and nil otherwise. same-category-p is used by default.

(mean-absolute-error neural-network inputs targets)

Return the mean absolute error on the outputs computed by the neural-network when testing it with some inputs and targets.
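
For example, evaluating the network on a held-out set (the variable names are hypothetical):

;; Average absolute difference between computed outputs and targets.
(snn:mean-absolute-error net test-inputs test-targets)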

(find-normalization inputs)

Return four values. The first is a normalization function taking an input and returning a normalized input. Applying this normalization function to the inputs gives a data set in which each variable has mean 0 and standard deviation 1. The second is a denormalization function that can compute the original input from the normalized one. The third is the code of the normalization function. The fourth is the code of the denormalization function.
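
A sketch of its use, assuming raw-inputs is a list of double-float vectors (the last two return values are ignored here):

(multiple-value-bind (normalize denormalize)
    (snn:find-normalization raw-inputs)
  ;; Normalize every raw input before training or prediction.
  (let ((inputs (mapcar normalize raw-inputs)))
    ;; The denormalization function recovers the original values.
    (funcall denormalize (first inputs))))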

(find-learning-rate neural-network inputs targets
                    &key batch-size momentum-coefficient epochs
                         iterations minimum maximum)

Return the best learning rate found in iterations steps of dichotomic search (between minimum and maximum). In each step, the neural-network is trained epochs times using the given inputs, targets, batch-size and momentum-coefficient.
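
A sketch of a search over an arbitrary range (the keyword values are illustrative):

;; Dichotomic search for a learning rate between 0.0001 and 0.1,
;; training the network for 2 epochs at each of 10 steps.
(snn:find-learning-rate net inputs targets
                        :epochs 2
                        :iterations 10
                        :minimum 0.0001d0
                        :maximum 0.1d0)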

(neural-network-layers neural-network)
(neural-network-weights neural-network)
(neural-network-biases neural-network)

These functions are SETFable. They can be used to get or set the neuron values, the weights and the biases of the neural-network, which are represented as a list of vectors where each vector contains the values (double-float) for a layer.
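
For example, the parameters of one network can be inspected, or copied into another network of the same shape (a sketch; other-net is hypothetical):

;; Look at the bias vector of the first layer.
(first (snn:neural-network-biases net))

;; Replace the weights and biases of net with copies of those
;; of another network having the same layer sizes.
(let ((source (snn:copy other-net)))
  (setf (snn:neural-network-weights net) (snn:neural-network-weights source)
        (snn:neural-network-biases net) (snn:neural-network-biases source)))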

4. Examples

Neural network for the XOR function:

(asdf:load-system "simple-neural-network")

(defun normalize (input)
  (map 'vector (lambda (x) (if (= x 1) 1.0d0 -1.0d0)) input))

(defun denormalize (output)
  (if (plusp (aref output 0)) 1 0))

(defvar inputs (mapcar #'normalize '(#(0 0) #(0 1) #(1 0) #(1 1))))
(defvar targets (mapcar #'normalize '(#(0) #(1) #(1) #(0))))
(defvar nn (snn:create-neural-network 2 1 4))
(dotimes (i 1000)
  (snn:train nn inputs targets 0.1d0))

(denormalize (snn:predict nn (normalize #(0 0))))
-> 0

(denormalize (snn:predict nn (normalize #(1 0))))
-> 1

(denormalize (snn:predict nn (normalize #(0 1))))
-> 1

(denormalize (snn:predict nn (normalize #(1 1))))
-> 0

Neural network for the MNIST dataset, using parallelism (2 threads):

;; Note: the mnist-load function used below is defined in "tests/tests.lisp".

(setf lparallel:*kernel* (lparallel:make-kernel 2))
(defparameter nn (snn:create-neural-network 784 10 128))
(multiple-value-bind (inputs targets) (mnist-load :train)
  (dotimes (i 3)
    (snn:train nn inputs targets 0.003d0)))

(multiple-value-bind (inputs targets) (mnist-load :test)
  (snn:accuracy nn inputs targets))
-> 1911/2000

5. Tests

The tests require the fiveam and chipz libraries. They can be run with:

(asdf:test-system "simple-neural-network")

Dependencies (5)

  • chipz
  • cl-store
  • fiveam
  • lparallel
  • uiop

Dependents (0)
