TensorFlow.js

This is like Keras. It is not Keras, but it has the same structure. You have models. You have layers. You specify the loss function. You create tensors. Tensors are the numerical building blocks: arrays of numbers in higher dimensions. They can be passed to GPUs or TPUs and run there. When you are building models you are chaining operations to create a computational graph. You use tf.* functions to add TensorFlow operations. The same things you do in Python, you can do in Node.js.

It also works in browsers, and that is where it gets interesting. It can download models into the browser and do classification there. All interaction with the models is through tensors, but you can easily create them from plain arrays and other JavaScript values. You can feed in live microphone and webcam streams, and capture photos from the webcam.

The biggest trick is knowing the tensor shapes, the batch dimension, and how functions transform them. Backpropagation is not that important to understand in detail. For most cases, passing the right loss function and picking the correct activation and final layer are enough.

You can get models that are already trained, then train them further. You can remove layers from them, add new layers on top, and configure trainability layer by layer. When training, you have to watch both the training loss and the validation loss. There are lots of ways to tune: you can add layers, add dropout, and so on.

Passing data is easy, so long as you are comfortable with data dimensions. Input data can have any number of dimensions; the batch dimension is always there. For text, if you are adding an embedding layer, a list of integer indices is enough, but all the sequences have to be the same length. For RNNs and LSTMs you also pass all the inputs at once: the full sequence.

The attention mechanism is interesting. Variational autoencoders are cool. The input and the output are the same, with a compressed representation in the middle. The "variational" part comes in because that middle representation stores a mean and a standard deviation, so it is stochastic. There is randomness, and randomness helps with generalizing. GANs are simple to understand. The generator creates samples first. Then you pass both generated and real samples to the discriminator and train it. Training the generator is weird but understandable.
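A minimal sketch of creating tensors from plain JavaScript arrays and watching how the shapes change (the values here are made up):

```js
import * as tf from '@tensorflow/tfjs';

// A 2-D tensor from a nested JS array: shape [2, 3].
const x = tf.tensor2d([[1, 2, 3], [4, 5, 6]]);
console.log(x.shape); // [2, 3]

// Operations chain together into a graph of tf.* ops.
const y = x.mul(2).sum(1); // row sums of 2*x, shape [2]
y.print();

// Models expect a leading batch dimension; expandDims adds one.
const batched = x.expandDims(0); // shape [1, 2, 3]
console.log(batched.shape);
```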
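A sketch of the Keras-style flow: stack layers, pick activations and a final layer, pass a loss, add dropout, and watch training loss against validation loss while fitting. The layer sizes, dropout rate, and random placeholder data are assumptions for illustration:

```js
import * as tf from '@tensorflow/tfjs';

const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [20], units: 64, activation: 'relu' }));
model.add(tf.layers.dropout({ rate: 0.3 }));                     // regularization
model.add(tf.layers.dense({ units: 3, activation: 'softmax' })); // final layer for 3 classes

// The loss and the final activation go together: softmax + categoricalCrossentropy.
model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });

// xs: [numExamples, 20], ys: one-hot [numExamples, 3] (random placeholders here).
const xs = tf.randomNormal([100, 20]);
const ys = tf.oneHot(tf.randomUniform([100], 0, 3, 'int32'), 3).toFloat();

await model.fit(xs, ys, {
  epochs: 10,
  validationSplit: 0.2, // compare loss vs. val_loss to spot overfitting
  callbacks: {
    onEpochEnd: (epoch, logs) =>
      console.log(`epoch ${epoch}: loss=${logs.loss.toFixed(3)} val_loss=${logs.val_loss.toFixed(3)}`),
  },
});
```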
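In the browser, a pretrained model can be downloaded and run on a webcam frame. The model URL, the element id, and the 224x224 input size below are placeholder assumptions:

```js
import * as tf from '@tensorflow/tfjs';

// Hypothetical URL; in practice point this at a model.json you actually host.
const model = await tf.loadLayersModel('https://example.com/model/model.json');

const video = document.getElementById('webcam'); // a <video> element already streaming

// Grab one frame, resize and normalize it, add the batch dimension, and classify.
const scores = tf.tidy(() => {
  let frame = tf.browser.fromPixels(video);            // [height, width, 3]
  frame = tf.image.resizeBilinear(frame, [224, 224])   // assumed model input size
    .toFloat()
    .div(255)
    .expandDims(0);                                    // [1, 224, 224, 3]
  return model.predict(frame);
});

console.log(await scores.data()); // class scores
scores.dispose();
```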
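Reusing an already-trained model: cut it at an intermediate layer, freeze what you keep, and add a new trainable head. The URL, the cut point, and the two-class head are hypothetical:

```js
import * as tf from '@tensorflow/tfjs';

const base = await tf.loadLayersModel('https://example.com/pretrained/model.json');

// Keep everything up to some intermediate layer as a frozen feature extractor.
const cutLayer = base.layers[base.layers.length - 2]; // hypothetical cut point
const featureExtractor = tf.model({ inputs: base.inputs, outputs: cutLayer.output });
featureExtractor.layers.forEach(l => (l.trainable = false)); // trainability, layer by layer

// New trainable head for a 2-class problem.
const head = tf.sequential({
  layers: [
    tf.layers.flatten({ inputShape: cutLayer.outputShape.slice(1) }),
    tf.layers.dense({ units: 2, activation: 'softmax' }),
  ],
});
head.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });

// Train only the head on features produced by the frozen base:
// const features = featureExtractor.predict(xs);
// await head.fit(features, ys, { epochs: 5 });
```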
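For text, equal-length padded sequences of word indices go through an embedding layer, and an LSTM reads the full sequence at once. Vocabulary size, sequence length, and unit counts are placeholder assumptions:

```js
import * as tf from '@tensorflow/tfjs';

const vocabSize = 10000; // assumed vocabulary size
const maxLen = 40;       // every sequence padded/truncated to this length

const model = tf.sequential();
model.add(tf.layers.embedding({ inputDim: vocabSize, outputDim: 16, inputLength: maxLen }));
model.add(tf.layers.lstm({ units: 32 }));                        // consumes the whole sequence
model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' })); // binary label
model.compile({ optimizer: 'adam', loss: 'binaryCrossentropy' });

// Each row is one padded sequence of word indices: shape [batch, maxLen].
const xs = tf.randomUniform([8, maxLen], 0, vocabSize, 'int32');
const ys = tf.randomUniform([8, 1], 0, 2, 'int32').toFloat(); // placeholder 0/1 labels
// await model.fit(xs, ys, { epochs: 3 });
```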
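The "variational" part of a VAE is just that the middle representation stores a mean and a (log) standard deviation, and the code is sampled from them. A minimal sketch of that sampling step, with placeholder encoder outputs:

```js
import * as tf from '@tensorflow/tfjs';

// mean and logVar would come out of the encoder; here they are placeholders.
const mean = tf.zeros([1, 8]);
const logVar = tf.zeros([1, 8]);

// z = mean + sigma * epsilon, with epsilon ~ N(0, 1): stochastic but differentiable.
const z = tf.tidy(() => {
  const eps = tf.randomNormal(mean.shape);
  return mean.add(eps.mul(logVar.mul(0.5).exp()));
});
z.print();
```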
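A compact GAN training sketch along those lines: train the discriminator on real and generated samples, then train the generator through a frozen discriminator. All the sizes and the toy architectures are assumptions:

```js
import * as tf from '@tensorflow/tfjs';

// Tiny generator and discriminator for 8-feature "samples"; sizes are arbitrary.
const generator = tf.sequential({
  layers: [
    tf.layers.dense({ inputShape: [16], units: 32, activation: 'relu' }),
    tf.layers.dense({ units: 8 }), // a fake sample
  ],
});
const discriminator = tf.sequential({
  layers: [
    tf.layers.dense({ inputShape: [8], units: 32, activation: 'relu' }),
    tf.layers.dense({ units: 1, activation: 'sigmoid' }), // real vs. fake
  ],
});
discriminator.compile({ optimizer: 'adam', loss: 'binaryCrossentropy' });

// Combined model: noise -> generator -> (frozen) discriminator.
discriminator.trainable = false;
const noise = tf.input({ shape: [16] });
const gan = tf.model({ inputs: noise, outputs: discriminator.apply(generator.apply(noise)) });
gan.compile({ optimizer: 'adam', loss: 'binaryCrossentropy' });

async function trainStep(realBatch) {
  const n = realBatch.shape[0];
  const fake = generator.predict(tf.randomNormal([n, 16]));

  // Train the discriminator on real (label 1) and generated (label 0) samples.
  await discriminator.trainOnBatch(realBatch, tf.ones([n, 1]));
  await discriminator.trainOnBatch(fake, tf.zeros([n, 1]));

  // Train the generator: the frozen discriminator should call its output "real".
  await gan.trainOnBatch(tf.randomNormal([n, 16]), tf.ones([n, 1]));
}
```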