
RENN

Public

Inspired by Andrej Karpathy’s micrograd, this lecture builds a neural network from scratch: manually deriving gradients, automating backpropagation, and using the tanh activation for nonlinearity. It then bridges to PyTorch, demonstrating how gradient descent minimizes the loss and revealing neural network fundamentals.
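The ideas above can be sketched as a minimal scalar autograd engine in the spirit of micrograd; the class and variable names here are illustrative assumptions, not the lecture's exact code. Each `Value` records how its gradient flows to its inputs, `tanh` supplies the nonlinearity, and `backward` replays the chain rule in reverse topological order:

```python
import math

class Value:
    """A scalar that tracks its computation graph for backpropagation."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # Addition passes the gradient through unchanged.
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # Product rule: each input's gradient scales by the other's value.
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():
            # d/dx tanh(x) = 1 - tanh(x)^2
            self.grad += (1 - t * t) * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# One neuron and one gradient-descent step on a squared-error loss
# (hypothetical values, target = 1).
w, b, x = Value(0.5), Value(0.1), Value(1.0)
y = (w * x + b).tanh()
err = y + Value(-1.0)
loss = err * err
loss.backward()
w.data -= 0.1 * w.grad  # step opposite the gradient to reduce the loss
```

PyTorch's `torch.Tensor` with `requires_grad=True` follows the same pattern at scale: build a graph in the forward pass, call `.backward()`, then update parameters against their `.grad`.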

Created: 2024-11-22T08:43:22
Updated: 2024-11-24T03:00:13
Stars: 2
Stars increase: 0
