Google Files AI Patents

Google has just revealed that it has applied for at least six patents on fundamental neural network and AI techniques. This isn't good for academic research or for the development of AI by other companies.

A recent post on the Reddit Machine Learning group brings to light the fact that Google has submitted at least six patents on what you might consider fundamental ideas in neural network use.


The first is a patent on dropout, with Geoffrey Hinton and his team from the University of Toronto (Alexander Krizhevsky, Ilya Sutskever and Nitish Srivastava) named as inventors. They certainly did invent dropout, as a quick check of the academic record will prove. Dropout is now a standard technique, used by almost everyone training a neural network as a way to avoid overfitting. To quote the patent:

"A system for training a neural network. A switch is linked to feature detectors in at least some of the layers of the neural network. For each training case, the switch randomly selectively disables each of the feature detectors in accordance with a preconfigured probability. The weights from each training case are then normalized for applying the neural network to test data."

Basically, you make it harder for the network to rote-learn the inputs by randomly dropping neurons during training.
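As a rough illustration, here is a minimal sketch of the idea in NumPy. Note that this uses the now-common "inverted dropout" variant, which rescales surviving activations during training; the patent's wording instead describes normalizing the weights when applying the network to test data. The function name and shapes are illustrative, not from the patent.

```python
import numpy as np

def dropout_forward(activations, drop_prob=0.5, rng=None):
    """Randomly disable each unit with probability drop_prob (inverted dropout).

    Scaling the survivors by 1 / (1 - drop_prob) keeps the expected
    activation unchanged, so no separate rescaling is needed at test time.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(activations.shape) >= drop_prob
    return activations * mask / (1.0 - drop_prob)

# Training time: apply the random mask to a layer's activations.
h = np.ones((4, 8))                       # toy hidden-layer activations
h_train = dropout_forward(h, drop_prob=0.5)

# Test time: use the full network, no mask.
h_test = h
```

Because each training case sees a different random mask, the network is effectively trained as an ensemble of many thinned sub-networks, which is what discourages co-adaptation and overfitting.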

The second neural network patent application is from members of the same group and claims to patent the idea of a parallel convolutional network. While Hinton's team can claim to have created an improved and easy-to-use GPU-based implementation of convolutional networks, there is no sense in which it invented the parallelization of such networks.

The third is again by Hinton's team and attempts to patent the idea of modifying training images by distorting their color space to create additional training images - so increasing the total size of the training set.
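To make the third idea concrete, here is a minimal sketch of color-based data augmentation: generating extra training images by randomly perturbing the color channels. This simple per-channel scaling is an assumption for illustration only, not the exact distortion described in the patent.

```python
import numpy as np

def color_jitter(image, rng, strength=0.1):
    """Create a new training example by randomly scaling each RGB channel.

    `image` is an (H, W, 3) float array with values in [0, 1].
    A simplified per-channel perturbation, not the patent's exact method.
    """
    scale = 1.0 + rng.uniform(-strength, strength, size=3)
    return np.clip(image * scale, 0.0, 1.0)

rng = np.random.default_rng(42)
original = np.full((2, 2, 3), 0.5)        # toy mid-gray image
augmented = [color_jitter(original, rng) for _ in range(4)]  # 4 extra images
```

Each call yields a slightly different image with the same label, so the effective training set grows without collecting any new data.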

The fourth is concerned with using Q learning with a neural network. What Watkins, the inventor of Q learning, and Tesauro, the first person to use reinforcement learning with neural networks, would make of the patent submission we will have to imagine.
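For context, Watkins' Q-learning update is easy to state; the patented combination replaces the Q table with a neural network that approximates the Q function. A minimal tabular sketch (illustrative state and action sizes):

```python
import numpy as np

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Watkins' rule: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])
    return Q

Q = np.zeros((3, 2))                      # 3 states, 2 actions
q_update(Q, state=0, action=1, reward=1.0, next_state=2)
# Q[0, 1] is now 0.1 * (1.0 + 0.9 * 0 - 0) = 0.1
```

Using a neural network in place of the table is exactly what Tesauro did for TD-Gammon in the early 1990s, which is why the patent application raises eyebrows.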

The fifth, Classifying Data Objects, is ludicrous and basically claims to patent any method that performs classification. Its very broad wording seems to cover everything from the oldest classical methods, such as linear discriminant analysis, up to the latest, neural network classifiers. Its inventors include Samy Bengio and a list of Google researchers.

The sixth is on word embeddings, which again is a fairly standard technique.

As the anonymous poster on Reddit says:

"I am afraid that Google has just started an arms race, which could do significant damage to academic research in machine learning. Now it's likely that other companies using machine learning will rush to patent every research idea that was developed in part by their employees. We have all been in a prisoner's dilemma situation, and Google just defected. Now researchers will guard their ideas much more combatively, given that it's now fair game to patent these ideas, and big money is at stake."

You might make the charitable assumption that Google has patented the ideas defensively - i.e. to stop other, more predatory companies from patenting them and extracting fees from open source implementations of machine learning libraries. For a precedent, see Google Frees Up More Patents. Even if this is the case, open source libraries are often purged of non-free routines, and that makes life significantly harder for researchers.

Google needs to clarify its position, but even if it does there is no way to be sure it won't exercise its patent rights at a later date. The source of the problem is the whole US patent system, with its "first to file is the inventor" policy.

We have to hope that these patents are not granted, but judging by past performance this seems unlikely.
