
Saturday, December 22, 2018

Using pytorch - the final part of the story.

Let's continue our story with the child and the gift.
The child saw the gift and his first thought was the desire to know.
The basic forming unit of a neural network is a perceptron.
He saw that he was not too big and his eyes lit up.
To compute the output, we multiply the inputs with their respective weights, sum them, and compare the result with a threshold value.
Each perceptron also has a bias, which can be thought of as a measure of how flexible the perceptron is.
This process evolves the perceptron into what is now called an artificial neuron.
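A minimal sketch of this computation in PyTorch, with made-up values for the inputs, the weights, the bias, and the threshold:

import torch

#illustrative values; a real perceptron would learn its weights and bias
inputs = torch.tensor([1.0, 0.0, 1.0])
weights = torch.tensor([0.4, -0.2, 0.7])
bias = torch.tensor(0.1)
threshold = 0.5

#multiply the inputs with the respective weights and add the bias
weighted_sum = torch.dot(inputs, weights) + bias
#compare with the threshold value to get the output of the artificial neuron
output = (weighted_sum > threshold).float()
print(weighted_sum, output)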
The next step is the artificial neural network, which is made of artificial neurons and the edges between them.
He touched him in the corners and put his hand on his surface.
The activation function is mostly used to make a non-linear transformation, which allows us to fit nonlinear hypotheses or to estimate complex functions.
He began to understand that he had a special and complex form.
This artificial network is built, from start to end, from:
  • an Input Layer: an X as the input matrix;
  • Hidden Layers: a matrix dot product of the input and the weights assigned to the edges between the input and the hidden layer, plus the biases of the hidden layer neurons; this result is later used to update the weights and biases at the hidden and output layers;
  • an Output Layer: a y as the output matrix.
Without too much thought, he began to unwrap the gift in the order in which he had touched it.
This process of updating the weights and biases is known as backpropagation.
Computing the output from the inputs in this way is known as forward propagation.
Several moves were enough to complete the opening of the gift.
He looked and understood that the size of the gift is smaller, but the gift was thankful to him.
This forward and backward propagation iteration is known as one training iteration, named an epoch.
I created the next example from an old example I saw on the internet; it is the simplest way to show you the steps from this last part of the story:
##use a neural network in pytorch
import torch

#an input array
X = torch.Tensor([[1,0,1],[0,1,1],[0,1,0]])

#the output
y = torch.Tensor([[1],[1],[0]])

#the Sigmoid Function
def sigmoid (x):
  return 1/(1 + torch.exp(-x))

#the derivative of Sigmoid Function
def derivatives_sigmoid(x):
  return x * (1 - x)

#set the variable initialization
epoch=1000 #training iterations is epoch
lr=0.1 #learning rate value
inputlayer_neurons = X.shape[1] #number of features in data set
hiddenlayer_neurons = 3 #number of hidden layers neurons
output_neurons = 1 #number of neurons at output layer

#weight and bias initialization
wh=torch.randn(inputlayer_neurons, hiddenlayer_neurons).type(torch.FloatTensor)
print("weigt = ", wh)
bh=torch.randn(1, hiddenlayer_neurons).type(torch.FloatTensor)
print("bias = ", bh)
wout=torch.randn(hiddenlayer_neurons, output_neurons)
print("wout = ", wout)
bout=torch.randn(1, output_neurons)
print("bout = ", bout)

for i in range(epoch):

  #Forward Propagation
  hidden_layer_input1 = torch.mm(X, wh)
  hidden_layer_input = hidden_layer_input1 + bh
  hidden_layer_activations = sigmoid(hidden_layer_input)
 
  output_layer_input1 = torch.mm(hidden_layer_activations, wout)
  output_layer_input = output_layer_input1 + bout
  output = sigmoid(output_layer_input) #use the input that already includes the output bias

  #Backpropagation
  E = y-output
  slope_output_layer = derivatives_sigmoid(output)
  slope_hidden_layer = derivatives_sigmoid(hidden_layer_activations)
  d_output = E * slope_output_layer
  Error_at_hidden_layer = torch.mm(d_output, wout.t())
  d_hiddenlayer = Error_at_hidden_layer * slope_hidden_layer
  wout += torch.mm(hidden_layer_activations.t(), d_output) * lr
  bout += d_output.sum() * lr
  wh += torch.mm(X.t(), d_hiddenlayer) * lr
  bh += d_hiddenlayer.sum() * lr #the hidden bias is updated from the hidden layer error, not the output error
 
print('actual :\n', y, '\n')
print('predicted :\n', output)
The results below are for epoch values of 100 and 1000 and show us how close the predicted results get to the actual output (1, 1, 0).
Note also that the weights and biases of the artificial network are initialized randomly by torch.randn.
If I added this in my story it would sound like this:
The child's thoughts began to rush, wanting to finish faster and discover the gift.
C:\Python364>python.exe pytorch_test_002.py
weight =  tensor([[-0.9364,  0.4214,  0.2473],
        [-1.0382,  2.0838, -1.2670],
        [ 1.2821, -0.7776, -1.8969]])
bias =  tensor([[-0.3604, -0.8943,  0.3786]])
wout =  tensor([[-0.5408],
        [ 1.3174],
        [-0.7556]])
bout =  tensor([[-0.4228]])
actual :
 tensor([[1.],
        [1.],
        [0.]])

predicted :
 tensor([[0.5903],
        [0.6910],
        [0.6168]])

C:\Python364>python.exe pytorch_test_002.py
weight =  tensor([[ 1.2993,  1.5142, -1.6325],
        [ 0.0621, -0.5370,  0.1480],
        [ 1.5673, -0.2273, -0.3698]])
bias =  tensor([[-2.0730, -1.2494,  0.2484]])
wout =  tensor([[ 0.6642],
        [ 1.6692],
        [-0.4087]])
bout =  tensor([[0.3340]])
actual :
 tensor([[1.],
        [1.],
        [0.]])

predicted :
 tensor([[0.9417],
        [0.8510],
        [0.2364]])

Monday, December 17, 2018

Using pytorch - another way.

Yes, I used pytorch and it is working well. It is not perfect; the GitHub repository comes with a full stack of new issues every day.
Let's continue this series with another step: torchvision.
If you take a closer look at that gift, you will see that it comes with a special label that can really help us.
This label is named torchvision.
The torchvision Python module is a package that consists of popular datasets, model architectures, and common image transformations for computer vision.
Most operations pass the data through filters and datasets that are already recognized.
  • torchvision.datasets: (MNIST, Fashion-MNIST, EMNIST, COCO, LSUN, ImageFolder, DatasetFolder, Imagenet-12, CIFAR, STL10, SVHN, PhotoTour, SBU, Flickr, VOC)
  • torchvision.models: (Alexnet, VGG, ResNet, SqueezeNet, DenseNet, Inception v3) - see the sketch after this list
  • torchvision.transforms: (Transforms on PIL Image, Transforms on torch.*Tensor, Conversion Transforms, Generic Transforms, Functional Transforms)
  • torchvision.utils
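As a quick illustration of the torchvision.models part of this list, here is a minimal sketch that loads one of the listed architectures; the resnet18 function and the pretrained flag come from the torchvision API, not from the example below:

import torchvision.models as models

#load the ResNet-18 architecture; pretrained=True also downloads the trained weights
resnet18 = models.resnet18(pretrained=True)
resnet18.eval()  #switch to evaluation mode before using the model for inference
print(resnet18)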
This part of the gift helps you load and prepare datasets, but in a certain order.
Using this special label, we will be able to use the gift-breaking information.
Let's see the example:
C:\Python364>python.exe
Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)]
 on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchvision
>>> import torchvision.transforms as transforms
>>> transform = transforms.Compose(
...     [transforms.ToTensor(),
...      transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
>>> trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data\cifar-10-python.tar.gz
You will ask me: How is this special gift label linked?
In this way:
>>> import torch
>>> trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
Let's take a closer look at the information in the special label.
>>> print(trainset)
Dataset CIFAR10
    Number of datapoints: 50000
    Split: train
    Root Location: ./data
    Transforms (if any): Compose(
                             ToTensor()
                             Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
                         )
    Target Transforms (if any): None
>>> print(dir(trainset))
['__add__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_check_integrity', 'base_folder', 'download', 'filename', 'root', 'target_transform', 'test_list', 'tgz_md5', 'train', 'train_data', 'train_labels', 'train_list', 'transform', 'url']
Let's look more closely at the information that can be used by the gift with the special label.
>>> print(trainloader)
<torch.utils.data.dataloader.DataLoader object at 0x...>
>>> print(dir(trainloader))
['_DataLoader__initialized', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'batch_sampler', 'batch_size', 'collate_fn', 'dataset', 'drop_last', 'num_workers', 'pin_memory', 'sampler', 'timeout', 'worker_init_fn']
Beware, CIFAR10 is just one of the training databases.
The CIFAR-10 dataset consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class.
There are 50,000 training images (5,000 per class) and 10,000 test images.
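A minimal sketch that builds the same trainset and trainloader as above and pulls out one batch of images; num_workers is set to 0 only so that it also runs in an interactive session:

import torch
import torchvision
import torchvision.transforms as transforms

#the same transform, dataset and loader as in the example above
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)

#take one batch of 4 images and their labels
images, labels = next(iter(trainloader))
print(images.size())   #torch.Size([4, 3, 32, 32]) - 4 color images of 32x32 pixels
print(labels.size())   #torch.Size([4]) - one class index per image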

Thursday, December 13, 2018

Using pytorch - a simpler perspective.

Suppose this PyTorch module is a data extravagance circuit that allows us to filter the information several times, and each time we can decide the final result.
A simpler perspective of how to work with PyTorch can be explained by a simple example.
It's like a Christmas baby (PyTorch) that opens a multi-packed gift until it gets the final product - the desired gift.
The opening operations of the package involve smart moves called: forward and backward passes.
The child's feedback can be called: loss and backpropagate.
In this case, the child will keep unwrapping his package until he is satisfied and no longer at a loss (the loss and backpropagate functions).
Regarding the backward pass for a gradient: every time we backpropagate the gradient from a variable, the gradient is accumulated instead of being reset and replaced (most network designs call backward multiple times).
PyTorch comes with many loss functions.
Most example code creates a mean squared error loss function and later backpropagates the gradients based on the loss.
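A minimal sketch of both ideas, with made-up tensors, using autograd and the built-in MSELoss:

import torch

x = torch.ones(3, requires_grad=True)

#first backward pass: the gradient of (2 * x).sum() with respect to x is [2, 2, 2]
(2 * x).sum().backward()
print(x.grad)

#second backward pass: the new gradient is accumulated, not reset and replaced
(2 * x).sum().backward()
print(x.grad)

#reset the accumulated gradient by hand before the next iteration
x.grad.zero_()

#one of the many loss functions PyTorch comes with: mean squared error
loss_fn = torch.nn.MSELoss()
prediction = torch.tensor([0.5, 1.5], requires_grad=True)
target = torch.tensor([1.0, 1.0])
loss = loss_fn(prediction, target)
loss.backward()  #backpropagate the gradients based on the loss
print(loss.item(), prediction.grad)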
You will ask me if the gift has a shape. I can tell you that the gift can range from a Saint Nicholas rod (one-dimensional) to complex (multidimensional) structures; the most simplistic and worn out is the square one (a two-dimensional matrix).
This gift is packed with magic in mathematical functions which allows the child to understand what is in the gift.
But the child is more special. He recognizes forms (matrices, shapes, simple formulas) and this allows him to open parts of the gift.
He can rotate these parts of the gift (mm).
The mm is a matrix multiplication.
He can see the corners he can get from the gift.
ReLU stands for "rectified linear unit" and is a type of activation function.
Mathematically, it is defined as y = max(0, x).
He can see which parts of the gift are bigger or smaller so he can understand the gift.
The clamp function clamps all elements of the input into the range [min, max] and returns the resulting tensor.
The clamp should only affect gradients for values outside the min and max range.
The pow function raises each element to the power of the given exponent.
The clone returns a copy of the self tensor. The copy has the same size and data type as self.
A common example is: clamp(min=0) is exactly ReLU().
PyTorch provides ReLU and its variants through the torch.nn module.
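A minimal sketch of these operations on a small, made-up tensor:

import torch

t = torch.tensor([-2.0, -0.5, 0.0, 1.5])

print(t.clamp(min=0))      #negative values become 0, exactly like y = max(0, x)
print(torch.nn.ReLU()(t))  #the ReLU module from torch.nn gives the same result
print(t.pow(2))            #each element raised to the power 2
c = t.clone()              #a copy with the same size and data type as t
print(c)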
If you run the program and look at the output, you will understand that the child has only five operations left and is already pleased with how the gift turns out.
The source code is based on one example from here:
import torch 
dtype = torch.float
device = torch.device("cpu")
batch,input,hidden,output = 2,10,2,5
x = torch.randn(batch,input,device=device,dtype=dtype)
y = torch.randn(batch,output,device=device,dtype=dtype)
w1 = torch.randn(input,hidden,device=device,dtype=dtype)
w2 = torch.randn(hidden,output,device=device,dtype=dtype)

l_r = 1e-6
for t in range(5):
 h = x.mm(w1)
 h_r = h.clamp(min=0)
 y_p = h_r.mm(w2)
 loss = (y_p - y).pow(2).sum().item()
 print("t=",t,"loss=",loss,"\n")
 g_y_p = 2.0 * (y_p -y)
 g_w2 = h_r.t().mm(g_y_p)
 g_h_r = g_y_p.mm(w2.t())
 g_h = g_h_r.clone()
 g_h[h < 0] = 0
 g_w1 = x.t().mm(g_h)
 #update the weights with the learning rate and the computed gradients
 w1 -= l_r * g_w1
 w2 -= l_r * g_w2

print("w1=", w1, "w2=", w2, "\n")
The child's result after five operations.
...
t= 4 loss= 25.40263557434082

w1= tensor([[ 1.5933,  0.3818],
        [-1.0043, -1.3362],
        [ 0.5841, -1.9811],
        [ 2.3483,  0.5748],
        [ 0.5904, -0.2521],
        [-0.6612,  2.7945],
        [ 0.4841, -0.5894],
        [-1.4434, -0.1421],
        [-1.2712, -1.4269],
        [ 0.7929,  0.2040]]) w2= tensor([[ 1.7389,  0.4337,  0.4557,  1.3704,  0.3819],
        [ 0.2937,  0.0212, -0.4604, -1.0564, -1.5403]])

Wednesday, December 12, 2018

Using pytorch - install it on Windows OS.

A few days ago I installed pytorch on my Windows 8.1 OS, and today I was able to install it on the Fedora 29 distro.
I will try to make a series of pytorch tutorials with Linux and Windows OS on my blogs.
If you want to install it on Fedora 29 you need to follow my Fedora blog post.
For installation on Windows OS, you can read the official webpage.
Because I don't have a new CUDA GPU (the only video card on my laptop is an NVIDIA 740M, and my Linux machine has an Intel onboard video card), I'm not able to solve issues with CUDA and pytorch.
Anyway, this will be a good start to see how to use pytorch.
Let's start the install in the default way, in the Scripts folder of my Python 3.6.4 installation.
C:\Python364\Scripts>pip3 install https://download.pytorch.org/whl/cpu/torch-1.0.0-cp36-cp36m-win_amd64.whl
Collecting torch==1.0.0 from https://download.pytorch.org/whl/cpu/torch-1.0.0-cp36-cp36m-win_amd64.whl
  Downloading https://download.pytorch.org/whl/cpu/torch-1.0.0-cp36-cp36m-win_amd64.whl (71.0MB)
    100% |████████████████████████████████| 71.0MB 100kB/s
Installing collected packages: torch
  Found existing installation: torch 0.4.1
    Uninstalling torch-0.4.1:
      Successfully uninstalled torch-0.4.1
Successfully installed torch-1.0.0

C:\Python364\Scripts>pip3 install torchvision
After I installed the pytorch Python module, I imported the pytorch and torchvision Python modules.
First, I checked the CUDA feature and, as I expected:
>>> import torch
>>> torch.cuda.is_available()
False
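Because CUDA is not available here, a common pattern is to pick the device at runtime and fall back to the CPU; a minimal sketch:

import torch

#use the GPU when CUDA is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.rand(2, 3, device=device)
print(device, x)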
Let's make another test with pytorch:
>>> x = torch.rand(76, 79)
>>> x.size()
torch.Size([76, 79])
>>> print(x)
tensor([[0.1981, 0.3841, 0.9276,  ..., 0.3753, 0.7137, 0.7702],
        [0.8202, 0.9564, 0.5590,  ..., 0.0914, 0.4983, 0.7163],
        [0.0864, 0.4588, 0.0669,  ..., 0.3939, 0.0318, 0.8650],
        ...,
        [0.9028, 0.8431, 0.8592,  ..., 0.3825, 0.2537, 0.7901],
        [0.2055, 0.3003, 0.8085,  ..., 0.0724, 0.9226, 0.9559],
        [0.3671, 0.1178, 0.3837,  ..., 0.7181, 0.5704, 0.9268]])
>>> torch.tensor([[1., -1.], [1., -1.]])
tensor([[ 1., -1.],
        [ 1., -1.]])
>>> torch.zeros([1, 4], dtype=torch.int32)
tensor([[0, 0, 0, 0]], dtype=torch.int32)
>>> torch.zeros([2, 4], dtype=torch.int32)
tensor([[0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=torch.int32)
>>> torch.zeros([3, 4], dtype=torch.int32)
tensor([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]], dtype=torch.int32)
You can make many tests and check your installation.
This is a screenshot with all the features shown by dir() for pytorch and torchvision: