2017 AI Grant recipient

In-browser, flaming-fast, GPU-accelerated deep learning

TensorFire runs neural networks in the browser using WebGL.

Sign up to get notified when we publish new demos.

We won't spam you or give your email to third parties.
What is TensorFire?

TensorFire is a framework for running neural networks in the browser, accelerated by WebGL.

Applications powered by TensorFire can utilize deep learning in almost any modern web browser with no setup or installation.

TensorFire models run up to 100x faster than previous in-browser neural network libraries, at speeds comparable to highly-optimized native CPU code.

How does it work?

TensorFire has two parts: a low-level language based on GLSL for easily writing massively parallel WebGL shaders that operate on 4D tensors, and a high-level library for importing models trained with Keras or TensorFlow.
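
The exact syntax of that shader language isn’t public yet, so here’s a rough illustration of the underlying idea in plain GLSL inside a JavaScript string: an elementwise ReLU over a tensor packed into a 2D texture, with each fragment computing one texel (four values) of the output. This is a sketch of the technique, not TensorFire’s actual language.

    // A minimal sketch in plain WebGL 1.0 GLSL (not TensorFire's own
    // shader language): apply ReLU to a tensor stored in a texture.
    const reluShader = `
      precision highp float;
      uniform sampler2D tensor;   // input tensor, packed into an RGBA texture
      uniform vec2 resolution;    // texture size in texels

      void main() {
        vec2 uv = gl_FragCoord.xy / resolution;
        vec4 x = texture2D(tensor, uv);  // read four packed values at once
        gl_FragColor = max(x, 0.0);      // ReLU, componentwise
      }
    `;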

It works on any GPU, whether or not it supports CUDA. That means that on computers with AMD graphics, like the 2016 Retina MacBook Pro, running networks in the browser with TensorFire can be faster than running them natively with TensorFlow.

What can I build with TensorFire?

With TensorFire, you can build applications that leverage the power of deep learning without forcing people to install native apps, pay for expensive compute farms, or wait for a server to respond. Rather than bringing the data to the model, you can deliver your model straight to your users, respecting their right to privacy.

We’ve prototyped some demos, but they barely scratch the surface of what’s possible. TensorFire can run complex state-of-the-art networks like ResNet-152, stylize photographs like famous paintings, generate text with a character-by-character recurrent model, and classify objects from your browser’s webcam in real time using SqueezeNet.

The low-level API can also be used for arbitrary parallel general-purpose computation. We’ve used it to multiply matrices, solve linear systems of equations, compute PageRank, simulate cellular automata, transform and filter images, and more.
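
For instance, a matrix multiply maps naturally onto a fragment shader: each fragment computes one element of the output by walking the shared inner dimension. The sketch below is ordinary GLSL in a JavaScript string, not TensorFire’s actual kernels, and it assumes both matrices are stored one value per texel in the red channel (names like A, B, K, and outputSize are just illustrative).

    const matmulShader = `
      precision highp float;
      uniform sampler2D A;       // M x K matrix, one value per texel
      uniform sampler2D B;       // K x N matrix
      uniform float K;           // shared inner dimension
      uniform vec2 outputSize;   // output texture size: N x M texels

      void main() {
        vec2 pos = gl_FragCoord.xy / outputSize;  // which output element
        float sum = 0.0;
        // GLSL ES 1.0 loops need constant bounds, so loop high and break.
        for (int i = 0; i < 4096; i++) {
          if (float(i) >= K) break;
          float a = texture2D(A, vec2((float(i) + 0.5) / K, pos.y)).r;
          float b = texture2D(B, vec2(pos.x, (float(i) + 0.5) / K)).r;
          sum += a * b;
        }
        gl_FragColor = vec4(sum, 0.0, 0.0, 1.0);
      }
    `;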

What makes it fast?

Modern desktops, laptops, and phones contain powerful GPUs optimized for highly-parallel computation.

By transforming neural network weights into WebGL textures and implementing common layers as fragment shaders, we can use the graphics capabilities of browsers designed for 3D games to speed up the execution of neural networks.
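Concretely, uploading a layer’s weights looks something like the following. This is a sketch using the standard WebGL 1.0 API and the OES_texture_float extension; the helper name weightsToTexture is ours, not part of any library.

    const gl = canvas.getContext('webgl');  // assumes a <canvas> element
    gl.getExtension('OES_texture_float');   // required for gl.FLOAT textures

    // Pack a Float32Array of weights into an RGBA float texture,
    // four weights per texel (weights.length === width * height * 4).
    function weightsToTexture(gl, weights, width, height) {
      const tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                    gl.RGBA, gl.FLOAT, weights);
      // Exact texel reads: no interpolation, no wrapping.
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      return tex;
    }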

Unlike other WebGL compute frameworks, we support low-precision quantized tensors. This lets us support browsers that don’t fully implement the OES_texture_float extension, and run even faster with smaller models.
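
Here’s one way this kind of 8-bit quantization can work (a sketch of the general technique, not necessarily the exact scheme TensorFire uses): map each 32-bit weight to a byte with a shared scale and offset, upload the bytes as a gl.UNSIGNED_BYTE texture, and dequantize inside the shader.

    // Quantize a Float32Array of weights to 8 bits.
    function quantize(weights) {
      let min = Infinity, max = -Infinity;
      for (const w of weights) {
        if (w < min) min = w;
        if (w > max) max = w;
      }
      const scale = (max - min) / 255 || 1;  // avoid 0 for constant tensors
      const bytes = new Uint8Array(weights.length);
      for (let i = 0; i < weights.length; i++) {
        bytes[i] = Math.round((weights[i] - min) / scale);
      }
      return { bytes, min, scale };  // upload bytes as gl.UNSIGNED_BYTE
    }

    // In the shader, texture2D() returns bytes normalized to [0, 1], so
    // each weight is recovered as: w = texel * 255.0 * scale + min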

How do I get started?

Sign up for updates! We’re still frantically mashing on our keyboards to document our APIs, and you’ll be the first to hear once that’s ready. Like all good things, it’ll be open source, and we’ll be depending on people like you to make cool stuff.

We’re going to be launching more demos over the next couple of weeks, and if there are a bunch of people on that list, we’ll feel really bad about disappointing everyone who has read this public promise.

Who makes this?

We’re a group of recent MIT graduates who all think this whole “deep learning” thing is pretty neat.

Kevin Kwok and Guillermo Webster have previously built things combining JavaScript and computer vision, like Project Naptha, a browser extension that lets you seamlessly highlight, copy/paste, and translate text within pictures, and Tesseract.js, a fully in-browser OCR library. Anish Athalye and Logan Engstrom have respectively built the first TensorFlow implementations of Gatys’ neural artistic style and Johnson’s fast style transfer algorithms.

Emboldened by the AI Grant, we’ve spent some time putting together this framework, and we’ve had lots of fun building stuff on top of it. Soon it will be your turn!

Contact us at [email protected].