Nano Dimension has filed a patent for a device, system, and method to efficiently store a sparse neural network. The invention involves obtaining weights that represent connections between artificial neurons in different layers and storing them with unique indexes. Only non-zero weights, which represent connections between pairs of neurons, are stored. This method aims to optimize the storage of sparse neural networks. GlobalData’s report on Nano Dimension gives a 360-degree view of the company including its patenting strategy. Buy the report here.
According to GlobalData’s company profile on Nano Dimension, spacecraft 3D printing was a key innovation area identified from patents. Nano Dimension's grant share as of June 2023 was 1%. Grant share is the ratio of granted patents to the total number of patents filed.
Efficient storage of a sparse neural network
A recently filed patent (Publication Number: US20230196061A1) describes a method for efficiently storing a sparse neural network. The method obtains a sparse neural network comprising a set of weights, where each weight represents a unique connection between a pair of artificial neurons in different layers. Only non-zero weights, which represent actual connections between pairs of neurons, are stored; zero weights, which represent the absence of a connection, are not. Each stored weight is associated with a unique index that identifies the pair of neurons it connects.
The method also includes storing a triplet of values for each weight: the index values identifying the first and second neurons of the pair, and the value of the weight itself. Because each triplet carries its own indices, a weight can be stored compactly and retrieved directly by the neuron pair it connects.
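As a rough illustration of this triplet scheme (the function names are hypothetical, not from the patent), the sketch below converts a dense weight matrix into a list of (input index, output index, value) triplets, skipping zeros:

```python
def to_triplets(dense):
    """dense[i][j] = weight of the connection from neuron i to neuron j.
    Only non-zero weights are kept, each paired with its neuron indices."""
    triplets = []
    for i, row in enumerate(dense):
        for j, w in enumerate(row):
            if w != 0:  # zero weights represent "no connection" and are skipped
                triplets.append((i, j, w))
    return triplets

def lookup(triplets, i, j):
    """Retrieve a weight by the neuron pair it connects; absent means zero."""
    for a, b, w in triplets:
        if a == i and b == j:
            return w
    return 0.0

dense = [[0.0, 0.5, 0.0],
         [0.0, 0.0, -1.2],
         [0.3, 0.0, 0.0]]
triplets = to_triplets(dense)
# triplets == [(0, 1, 0.5), (1, 2, -1.2), (2, 0, 0.3)]
```

For a highly sparse network, storing three values per non-zero weight is far smaller than storing every entry of the dense matrix.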
To optimize memory usage, the method involves fetching weights from non-sequential locations in the main memory and storing them in sequential locations in a cache memory. This ensures that weights with non-consecutive indices are consecutively stored, while indices associated with zero weights are skipped.
The patent also describes various data representations that can be used to store the weights of the sparse neural network, including compressed sparse row (CSR) representation, compressed sparse column (CSC) representation, sparse tensor representation, map representation, list representation, and sparse vector representation.
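To make one of these formats concrete, here is a standard compressed sparse row (CSR) encoding, sketched in plain Python (the function is illustrative; the patent only names CSR as one supported representation). CSR stores the non-zero values, their column indices, and a row-pointer array marking where each row's values begin and end:

```python
def to_csr(dense):
    """Encode a dense matrix in CSR form: (values, column indices, row pointers).
    Row i's non-zeros occupy values[row_ptr[i]:row_ptr[i + 1]]."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, w in enumerate(row):
            if w != 0:
                values.append(w)
                col_idx.append(j)
        row_ptr.append(len(values))  # cumulative count closes this row
    return values, col_idx, row_ptr

dense = [[0.0, 0.5, 0.0],
         [0.0, 0.0, -1.2],
         [0.3, 0.0, 0.0]]
values, col_idx, row_ptr = to_csr(dense)
# values == [0.5, -1.2, 0.3]; col_idx == [1, 2, 0]; row_ptr == [0, 1, 2, 3]
```

CSC is the column-wise mirror of this layout; the map and list representations named in the patent trade the row-pointer array for per-weight key lookups.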
The method can be applied to transform a dense neural network into a sparse one by pruning weights. Pruning can be performed during or after the training phase, and various pruning techniques can be used, such as L1 regularization, Lp regularization, thresholding, random zeroing, and bias-based pruning.
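Of the techniques listed, thresholding is the simplest to sketch: weights whose magnitude falls below a cutoff are set to zero, producing a sparse matrix that can then be stored in any of the representations above. The helper below is a generic illustration of threshold pruning, not the patent's specific procedure:

```python
def prune_by_threshold(weights, threshold):
    """Zero out every weight whose magnitude is below the threshold.
    The surviving non-zeros define the sparse network to be stored."""
    return [[w if abs(w) >= threshold else 0.0 for w in row]
            for row in weights]

w = [[0.8, 0.05, -0.3],
     [0.01, -0.9, 0.2]]
pruned = prune_by_threshold(w, 0.25)
# pruned == [[0.8, 0.0, -0.3], [0.0, -0.9, 0.0]]
```

L1/Lp regularization achieves a similar end indirectly, by penalizing weight magnitudes during training so that many weights are driven toward zero before any explicit cutoff is applied.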
The patent also covers the efficient storage of a sparse convolutional neural network, which consists of neuron channels connected by convolutional filters. Only filters containing non-zero weights are stored; filters whose weights are all zero represent no connection between their channels and are not stored. For each stored convolutional filter, the method keeps a triplet of information: the index values identifying the input and output channels of the pair, and the weights of the filter.
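The channel-level analogue of the earlier triplet scheme might look like the sketch below (again illustrative, with hypothetical names): each filter is a small kernel connecting one input channel to one output channel, and only filters with at least one non-zero weight are kept, as (input channel, output channel, kernel) triplets.

```python
def to_filter_triplets(filters):
    """filters[i][j] = 2-D kernel connecting input channel i to output channel j.
    Filters whose weights are all zero are dropped entirely."""
    triplets = []
    for i, row in enumerate(filters):
        for j, kernel in enumerate(row):
            # keep the filter only if any of its weights is non-zero
            if any(w != 0 for krow in kernel for w in krow):
                triplets.append((i, j, kernel))
    return triplets

filters = [
    [[[0, 0], [0, 0]], [[1, 0], [0, 2]]],  # input channel 0 -> output channels 0, 1
    [[[0, 3], [0, 0]], [[0, 0], [0, 0]]],  # input channel 1 -> output channels 0, 1
]
triplets = to_filter_triplets(filters)
# only the two filters with non-zero weights survive
```

The granularity differs from the fully-connected case: sparsity is exploited per filter (per channel pair) rather than per individual weight.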
Overall, this patent presents methods and systems for efficiently storing sparse neural networks and sparse convolutional neural networks, allowing for optimized memory usage and improved performance in neural network applications.