neograd.nn package¶
Submodules¶
neograd.nn.activations module¶
- class neograd.nn.activations.LeakyReLU(leak=0.01)[source]¶
LeakyReLU Layer
- class neograd.nn.activations.ReLU[source]¶
ReLU Layer
- class neograd.nn.activations.Softmax(axis)[source]¶
Softmax Layer
- Parameters
axis (None or int or tuple of int) – Axis along which it should be calculated
- backward(inputs)[source]¶
Sets the grad_fn of the Tensor
Quite a tricky one: the Jacobian of each slice of the result along the specified axis is computed, then dotted with the corresponding slice of the upper gradient (a NumPy sketch follows this entry)
- Parameters
inputs (Tensor) – Operand
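The per-slice Jacobian idea can be illustrated with a small NumPy sketch; the helper name is hypothetical and not part of neograd:

import numpy as np

def softmax_grad_slice(s, upper_grad_slice):
    # Jacobian of softmax for one slice: diag(s) - outer(s, s)
    jacobian = np.diag(s) - np.outer(s, s)
    # Dot the Jacobian with the corresponding slice of the upper gradient
    return jacobian @ upper_grad_slice

s = np.array([0.2, 0.3, 0.5])            # softmax output of one slice
upper_grad = np.array([1.0, 0.0, 0.0])   # matching slice of the upper gradient
print(softmax_grad_slice(s, upper_grad))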
neograd.nn.checkpoint module¶
- class neograd.nn.checkpoint.Checkpoint(model, dirpath, hash_length=16)[source]¶
Bases:
object
Creates and initializes files for checkpoints
A JSON file, checkpoints.json, is created; it contains the dict that has the tracked values and the params file name at the time of adding a checkpoint, and it is operated on in append mode. A new params file is created at each checkpoint, and the JSON file is updated
- Parameters
session (str) – Current session that is in use
dirpath (str) – Directory in which checkpoints must be saved
model (Model) – Model to be checkpointed
hash_length (int) – Character length of session identifiers. Defaults to 16
- _init_files(dirpath)[source]¶
Initializes files required for Checkpoint
Creates a new folder at dirpath if it doesn't exist. A checkpoints.json file is created; if it is empty, then a new session is created. If self.session is None, then the last session is automatically initialized as self.session
- Parameters
dirpath (str) – Directory in which checkpoints must be saved
- _save(updated_checkpoint, params_fname_hash)[source]¶
Saves the checkpoint
Saves the checkpoint details onto checkpoints.json and creates a new file with the params of the model.
If self.session isn't already in the existing checkpoints, then a new dict is created and the checkpoint is added there.
- Parameters
updated_checkpoint (Checkpoint) – Checkpoint that is updated
params_fname_hash (str) – Hash that is generated to be used as the params filename
- _update(new_checkpoint)[source]¶
Updates the new_checkpoint
Updates the checkpoint by including the datetime at which it was added, and generates the hash that'll be used as the filename for the params file that'll be saved
- Parameters
new_checkpoint (Checkpoint) – New Checkpoint to be updated
- Returns
The new Checkpoint and the hash that is used as the fname
- add(**tracked)[source]¶
Adds a new checkpoint
- Parameters
**tracked – All the data that needs to be tracked in checkpoints.json
- Raises
ValueError – If forbidden_attrs (‘datetime’) are used as keys in tracked, because neograd uses the same key internally and its value might get overwritten
ValueError – If values in tracked aren’t serializable or don’t belong to builtin classes
- load(params_fname, load_params=True)[source]¶
Retrieves the Checkpoint
Returns the checkpoint based on the params_fname and loads the params onto the model if load_params is True
- Parameters
params_fname (str) – Filename to load params from
load_params (bool) – Whether params should be loaded from the file onto the model
- Returns
Checkpoint desired
- Raises
ValueError – If the current session is not present in checkpoints.json
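A hedged usage sketch of the checkpoint flow documented above; the directory, tracked keys, and params filename are illustrative:

from neograd.nn.checkpoint import Checkpoint

# model is assumed to be an existing neograd Model instance
checkpoint = Checkpoint(model, './checkpoints')
checkpoint.add(epoch=10, loss=0.42)   # appends an entry to checkpoints.json and saves params
# '<params_fname>' stands in for a hash filename recorded in checkpoints.json
ckpt = checkpoint.load('<params_fname>', load_params=True)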
neograd.nn.layers module¶
- class neograd.nn.layers.Container[source]¶
Bases:
object
Contains many Layers
- Parameters
eval (bool) – Whether the Container is in eval mode
layers (list of Layer/Container) – Layers to be included in the Container
- parameters(as_dict=False)[source]¶
Recursively goes through all the layers in the Container and gets the params of each Layer (sketched below)
- Parameters
as_dict (bool) – Whether params need to be returned as a dict. Defaults to False
- Returns
list of Params
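A hypothetical sketch of the recursion, not neograd's exact implementation (as_dict handling omitted for brevity):

def parameters(self, as_dict=False):
    params = []
    for layer in self.layers:
        # each entry is a Layer or a nested Container, so this recurses
        params += layer.parameters(as_dict=False)
    return params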
- class neograd.nn.layers.Layer[source]¶
Bases:
object
Fundamental building block of the model
- Parameters
eval (bool) – Whether the Layer is in eval mode
- parameters(as_dict=False)[source]¶
Returns the parameters in the Layer
If any of the attributes of a Layer is an instance of Param, then it is automatically considered a param of the model (see the sketch below)
- Parameters
as_dict (bool) – Whether params need to be returned as a dict. Defaults to False
- Returns
list of Params or dict
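A hypothetical sketch of the Param scan described above, not neograd's exact code:

def parameters(self, as_dict=False):
    # any attribute that is a Param instance counts as a model parameter
    params = {name: attr for name, attr in vars(self).items()
              if isinstance(attr, Param)}
    return params if as_dict else list(params.values())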
- class neograd.nn.layers.Param(data, requires_grad=False, requires_broadcasting=True)[source]¶
Bases:
Tensor
Alias for Tensor
Just an alias for Tensor, so that when params are gathered for a Layer, only Param instances are automatically considered as params, ignoring helper Tensors that aren’t necessarily params
- Parameters
__frozen (bool) – Whether the current Param is frozen. This is required because we need to know if it has been frozen before unfreeze is called
neograd.nn.loss module¶
- class neograd.nn.loss.BCE[source]¶
Bases:
Loss
Binary Cross Entropy
- class neograd.nn.loss.CE[source]¶
Bases:
Loss
Cross Entropy
- class neograd.nn.loss.Loss[source]¶
Bases:
object
Base class of all loss functions
- get_num_examples(outputs_shape)[source]¶
Gathers the number of examples
If outputs_shape has zero dimensions, i.e. it is a scalar, then num_examples is 1. Otherwise, the first dimension of outputs_shape is taken as num_examples (a short sketch follows)
- Parameters
outputs_shape (tuple of int) – Shape of the outputs
- Returns
Number of examples
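A minimal sketch of the rule above (illustrative, not neograd's code):

def get_num_examples(outputs_shape):
    if len(outputs_shape) == 0:  # scalar output, no batch dimension
        return 1
    return outputs_shape[0]      # first dimension is the number of examples

assert get_num_examples(()) == 1
assert get_num_examples((32, 10)) == 32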
- class neograd.nn.loss.SoftmaxCE(axis)[source]¶
Implements Softmax activation with CrossEntropyLoss
The purpose of this is to eliminate the costly Jacobian calculation involved with vanilla Softmax activation. Since Softmax is most commonly used with Cross Entropy loss, combining both into a single Operation reduces the derivative to a minimal subtraction between the softmax output and the targets, avoiding many intermediate backward calculations, as sketched below.
- Parameters
axis (int or tuple of int) – Axis along which to calculate the Softmax. Defaults to None
epsilon (float) – Small value used to prevent log(0)
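The minimal subtraction can be verified with a small NumPy sketch; the function is illustrative, not neograd's implementation:

import numpy as np

def softmax_ce_grad(logits, targets, axis=-1):
    shifted = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=axis, keepdims=True)          # softmax output
    return probs - targets   # combined gradient, no Jacobian required

logits = np.array([[2.0, 1.0, 0.1]])
targets = np.array([[1.0, 0.0, 0.0]])   # one-hot targets
print(softmax_ce_grad(logits, targets))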
neograd.nn.model module¶
- class neograd.nn.model.EvalMode(model, no_track)[source]¶
Bases:
object
ContextManager for handling eval
A ContextManager to run the model in eval mode, i.e. while testing the model. This is useful because some layers, like Dropout, need to be turned off while testing (a usage example follows)
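An illustrative usage sketch; model and test_inputs are assumptions:

with model.eval():                      # enters the EvalMode ContextManager
    test_outputs = model(test_inputs)   # layers like Dropout are turned off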
- class neograd.nn.model.Model[source]¶
Bases:
object
- eval(no_track=True)[source]¶
Invokes EvalMode ContextManager
- Parameters
no_track (bool) – Whether Tensors shouldn’t be tracked. Defaults to True
- Returns
EvalMode ContextManager
- get_layers()[source]¶
Gathers all the layers in the Model
Accomplishes this by going through all of the Model’s attributes; if a value is an instance of Container/Layer, it is taken as a layer (see the sketch below)
- Returns
Dict with attributes as key and their objects as value
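A hypothetical sketch of the attribute scan described above, not neograd's exact code:

def get_layers(self):
    # attributes whose values are Containers or Layers are taken as layers
    return {name: attr for name, attr in vars(self).items()
            if isinstance(attr, (Container, Layer))}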
- load(fpath)[source]¶
Loads the params from the filepath onto the model
- Parameters
fpath (str) – File path
- parameters(as_dict=False)[source]¶
Gathers the params of the whole Model
Accomplishes this by iterating through all layers and getting their params
- Parameters
as_dict (bool) – Whether to return the params as a dict. Defaults to False
neograd.nn.optim module¶
- class neograd.nn.optim.Adam(params, lr, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]¶
Bases:
Optimizer
- Parameters
iter (int) – The number of iterations that have occurred, used for bias correction
beta1 (float) – Value of beta1
beta2 (float) – Value of beta2
epsilon (float) – Value of epsilon
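A minimal NumPy sketch of one Adam update with bias correction; variable names are illustrative and not neograd's internals:

import numpy as np

def adam_step(param, grad, m, v, t, lr, beta1=0.9, beta2=0.999, epsilon=1e-08):
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction using the iter count t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + epsilon)
    return param, m, v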
- class neograd.nn.optim.Momentum(params, lr, beta=0.9)[source]¶
Bases:
Optimizer
Gradient Descent with Momentum
- Parameters
beta (float) – Value of Beta
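A minimal sketch of an EMA-style momentum update; this is one common formulation and neograd's exact rule may differ:

def momentum_step(param, grad, velocity, lr, beta=0.9):
    velocity = beta * velocity + (1 - beta) * grad   # moving average of gradients
    return param - lr * velocity, velocity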
- class neograd.nn.optim.Optimizer(params, lr)[source]¶
Bases:
object
Base class for all optimizers
- Parameters
params (list of Param) – Params that need to be updated
lr (float) – Learning rate
- zero_grad(all_members=False)[source]¶
Resets the grads of tensors
By default, since after loss.backward the only Tensors in memory are the params, only their gradients are reset, as a new graph is dynamically created each time.
However, if retain_graph=True is passed to backward, then all the members in the graph need to be zero_grad-ed to get the correct gradients; for this, all_members can be set to True (a training-step sketch follows this entry)
- Parameters
all_members (bool) – If all the members in the graph should be zero_grad-ed. Defaults to False
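A hedged training-step sketch tying the optimizer API together; model, loss_fn, inputs, targets, num_epochs, and the step method are assumptions based on typical usage:

import neograd as ng

optim = ng.nn.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(num_epochs):
    optim.zero_grad()                        # reset param grads before the next backward
    loss = loss_fn(model(inputs), targets)
    loss.backward()                          # builds grads through the dynamic graph
    optim.step()                             # assumed update method that applies the param updates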
neograd.nn.utils module¶
- neograd.nn.utils.get_batches(inputs, targets=None, batch_size=None)[source]¶
Returns batches of inputs and targets
Splits the inputs and their corresponding targets into batches for efficient training (see the loop example below)
- Parameters
inputs – Inputs to be split into batches
targets – Targets corresponding to the inputs. Defaults to None
batch_size – Size of each batch. Defaults to None
- Yields
Batches of inputs and their corresponding targets
- Raises
AssertionError – If first dimensions of inputs and targets don’t match
ValueError – If batch_size is greater than number of examples
ValueError – If batch_size is negative
ValueError – If batch_size is 0
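An illustrative mini-batch loop; inputs and targets are assumptions:

from neograd.nn.utils import get_batches

for batch_inputs, batch_targets in get_batches(inputs, targets, batch_size=32):
    ...   # forward pass, backward pass, and optimizer step on this batch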