Neuton: A new, disruptive neural network framework for AI applications
Deep learning neural networks are behind much of the progress in AI these days. Neuton is a new framework that claims to be much faster and more compact, and to require fewer skills and less training than anything the AWSs, Googles, and Facebooks of the world have to offer.
This does not happen often. Deep learning is the hottest technology today, with countless applications and deep investment from the usual suspects. For something new to be released by someone outside the GAFAs of the world, with the claim of being radically better in every way, is sure to raise some eyebrows.
That was our reaction, too, when we were approached by Bell Integrator about Neuton a couple of months back. Neuton is a neural network framework, which Bell Integrator claims is far more effective than any other framework or non-neural algorithm available on the market.
Besides being faster, according to released benchmarks, Bell Integrator says Neuton is an Auto ML solution whose resulting models are self-growing and self-learning. And, to top it off, says Bell Integrator, Neuton is so easy to use that no special AI background is needed.
Machine learning October fest
At the time, it was much easier to disbelieve all that, as there was not much to show beyond some impressive benchmarks and a release date set for November. Today, adding to the machine learning October fest, and potentially triggered by it, Neuton is officially released.
Now, let's try to shed some light on where Neuton is coming from, how it works, and what it all means. Let's start with the vendor behind it: a private global consulting and technology services provider that has been around since 2003.
Bell Integrator has over 2,500 employees in 10 locations and lists names such as Ericsson, Cisco, Century Link, Juniper, Citibank, Deutsche Bank, and Societe Generale as its clients. When we asked about the team behind Neuton, we did not get any names you might recognize.
Blair Newman, Bell Integrator CTO, said this was delivered by a team of scientists accumulating more than 700 years combined experience as scientific researchers while successfully solving complex algorithmic problems in augmented reality, artificial intelligence, neural networks, machine learning, video analytics, internet of things, and blockchain.
We can only speculate as to how Neuton came to be. Its features, however, seem quite impressive, almost too good to be true. If nothing else, Neuton has not been on the radars even of people who live and breathe machine learning. When we asked Soumith Chintala, the Facebook researcher who is leading PyTorch for a comment on Neuton, his reply was that he was not aware of it, even though he monitors the field closely.
Neuton, says Bell Integrator, is self-growing and learning: There is no need to work on layers and neurons; just prepare a dataset, and a model will be grown automatically. And the model also needs fewer training samples.
Besides benchmarks, you can now download those models and see for yourself, says Bell Integrator. The models are said to be 10 to 100 times smaller and faster than those built with existing frameworks and non-neural algorithms, while also using 10 to 100 times fewer neurons.
No overfitting, far fewer absolute and relative errors in the validation samples, higher accuracy, and Auto ML -- no need for a scientific or AI background. Only basic technical skills required. And, if you are not puzzled enough yet, here's one more thing: In Neuton's FAQ, it is mentioned that the first release will be without CNN/RNN (convolutional/recurrent neural networks).
To neural network or not to neural network?
What does this all mean? Is Neuton a neural network, or not? Does it require training, or not? Newman said that although CNN/RNN support is not included in the initial release, Neuton is a neural network that effectively solves Regression and Classification problems. It does need training, although fewer training samples are required than for other algorithms.
The resulting models are delivered in the HDF5 format, which is open and can be used from the majority of modern programming languages and frameworks, including Keras. HDF5 is supported by Python, Java, R, and others. There is also support for a ready-to-use REST API service, and for GPUs.
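To see why an open container format matters here, consider a minimal sketch of reading and writing an HDF5 file with `h5py`. The group and dataset names below (`layer_0/weights`) are invented for illustration; Neuton's actual model layout is not public. The point is only that any HDF5-capable tool, in any of the languages mentioned, can walk the same file.

```python
# Toy illustration of the HDF5 format: write a named dataset (a
# stand-in for one layer's weight matrix), then read it back.
# The group/dataset names are hypothetical, not Neuton's layout.
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "model.h5")

# Write a toy "model": one weight matrix under a named group.
with h5py.File(path, "w") as f:
    f.create_dataset("layer_0/weights", data=np.arange(6.0).reshape(2, 3))

# Any HDF5 reader (Python, Java, R, ...) can open the file and
# address datasets by path, just like files in a directory tree.
with h5py.File(path, "r") as f:
    weights = f["layer_0/weights"][:]

print(weights.shape)  # → (2, 3)
```

In practice, a Keras-compatible HDF5 model would be opened with `keras.models.load_model(path)` rather than raw `h5py`, but the underlying file is the same kind of hierarchical container.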
"Neuton is an independent method of machine learning, it is our proprietary development. Neuton's workflow is very easy and consists of a few steps.
In the first step, a user uploads their data. In the second step, they specify which data to use for training and which to use for validation. In the third step, they select a metric for their task and criteria for stopping the training. After the training is complete, we let the user verify the model's accuracy by forecasting results on unseen data. In the final step, the user can choose how to use the model.
We provide the option of downloading the model or hosting it in the cloud. For large enterprise clients who do not feel secure uploading their data to a public cloud, we roll out the model on premises.
Neuton's model can be used either as a standalone solution or to build an ensemble of various algorithms. Models based on Neuton can automatically be rolled out as a REST-API service in one click. They can also be downloaded with a code sample for local use in Python."
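A model exposed as a REST service is typically consumed by POSTing a JSON feature vector to a prediction endpoint. The URL and JSON schema below are entirely hypothetical, invented for illustration since Neuton's API is not documented in the article; only the general pattern is shown.

```python
# Hypothetical sketch of calling a model deployed as a REST service.
# The endpoint URL and payload schema are placeholders, not Neuton's
# actual API.
import json
from urllib import request


def build_prediction_request(url, features):
    """Package a feature vector as a JSON POST request."""
    payload = json.dumps({"data": features}).encode("utf-8")
    return request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_prediction_request(
    "https://example.com/models/my-model/predict",  # placeholder URL
    [5.1, 3.5, 1.4, 0.2],
)

# Sending it would be: response = request.urlopen(req)
# (not executed here, since the endpoint is a placeholder)
print(json.loads(req.data)["data"])  # → [5.1, 3.5, 1.4, 0.2]
```

Using the model in an ensemble, as the quote suggests, would simply mean combining such predictions with those of other algorithms (averaging, stacking, and so on) on the client side.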
How is this possible? And what's with these benchmarks? Are the specifications of the benchmarks available? Can third parties reproduce the results? Do they include training and inference? Why are some results measured for Neuton single models, and others for ensemble too?
"Thanks to our proprietary algorithm and disruptive machine learning technology, models built on Neuton are super compact, meaning that they consist of relatively few neurons and coefficients. The actual algorithm is our IP, therefore, we cannot disclose it. Neuton results were compared against Caffe2, Tensor Flow+Keras, CNTK, Torch, Theano. Those networks showed very similar results.
Newman went on to clarify that, to save screen space, the benchmark tables show Keras with a TensorFlow backend, as well as non-neural methods such as the popular algorithms xgboost, catboost, linear/logistic regression, random forest, etc.
"The results are also reproducible by third parties, and the trained models together with datasets and TensorFlow configuration used can be downloaded from the website for offline use. We have demonstrated Neuton's future releases features.
We conducted a few experiments that prove that using Neuton's models in ensemble dramatically improve results of the single model. We used these results in comparison with some traditional algorithms that are ensembles themselves (xgb, random forest, etc)."
To Neuton or not to Neuton?
Technically, we can't say whether Neuton does deep learning or not, since we do not know its inner architecture. But that does not change the fact that all this sounds impressive. Performance, however, is not everything. How does Neuton stack up against the tried-and-true champions, and against the latest and greatest, PyTorch and fast.ai?
"Unlike Neuton, PyTorch and Fast.ai require some coding and the knowledge of neural network architectures, which means that our target audience is much wider and model setup time shorter, regardless of level of expertise.
We also offer our users all necessary infrastructure elements including storage for user data and models, virtual machines with GPU for training, virtual machines for rolling out in the cloud, meanwhile simultaneously empowering enterprise customers to use Neuton on their premises where desired.
From the performance and effectiveness perspective the new libraries mentioned above are still the same and do not affect our benchmarks."
"Neuton makes AI available to everyone and augments human ingenuity, which will have a transformative impact on economy, every industry, scientific breakthroughs, and the quality of our and future generations everyday life through wider usage and adoption of artificial intelligence.
We believe that intelligence makes the world a better place."
Newman went on to provide some background as to Neuton's naming. Neuton is a wordplay on neural networks, and Sir Isaac Newton, who believed that intelligence makes the world a better place.
Clearly, some experimentation is needed, and Bell Integrator is hoping it will attract enough attention to at least tempt people to give Neuton a try. But supposing all of this works as promised, what next?
Bell Integrator's business model for Neuton is not clear to us, as we did not get an explicit reply to our request for comment. Neuton is proprietary, that much is certain, but how much it's going to cost to use this, and on what terms, we do not know at this point.
Of course, Neuton will also need storage and compute resources to work, even if it needs far less of them. So, from an economics point of view, it comes down to doing the math on a per-case basis: Will the cost of using Neuton be justified by the reduced cost of the resources required to run it?
We will also have to see how Neuton pans out from a usability and support perspective. On paper, API and language support seem fine, although we'll have to wait to see it in practice, especially with regard to the "no experience necessary" claim. Plus, a consulting company like Bell Integrator may not be used to, or ready for, dealing with massive requests for support. Some of Neuton's operational modes are not fully functional at this point, either.
In any case, if need for speed is your No. 1 requirement, Neuton is definitely worth a look. It will be interesting to see how this will influence progress in machine learning.