Friday, June 18, 2021

A Basic GAN Model Using TensorFlow

Building a GAN (Generative Adversarial Network) model using TensorFlow


Let me explain how our model was designed and how its neural network layers learn to restore images.
Here we apply a number of neural network layers to the input image and assemble them into a model called a GAN.
 
GANs (Generative Adversarial Networks):

GANs (Generative Adversarial Networks) are a type of deep neural network used to generate synthetic images. The architecture comprises two deep neural networks, a generator and a discriminator, which work against each other (thus, "adversarial"). The generator generates new data instances, while the discriminator evaluates the data for authenticity and decides whether each instance of data is "real" from the training dataset, or "fake" from the generator.
Together, the generator and discriminator are trained against each other until the generator is able to create realistic synthetic data that the discriminator can no longer distinguish from real data. After successful training, the data produced by the generator can be used as new synthetic data, for potential use as input to other deep neural networks.

GANs are versatile in that they can learn to generate new instances of any data type, such as synthetic images of faces, new songs in a certain style, or text of a specific genre.



Construction of GANs:

We already know that a GAN is constructed from two submodels:
    1) Generator
    2) Discriminator
We have already discussed above how the generator and the discriminator work.


Here we are going to see how the generator and the discriminator are built from artificial neural network layers, and how we develop a GAN from these submodels, as sketched below.
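
Before going layer by layer, here is a minimal sketch (in TensorFlow/Keras) of how the two submodels fit together. The layers below are deliberately tiny stand-ins, not the actual stacks used in this post; they only illustrate that the generator's output image is fed straight into the discriminator, which scores it as real or fake.

    import tensorflow as tf
    from tensorflow.keras import layers

    # Tiny stand-ins for the two submodels; the real layer stacks are
    # built up step by step in the sections below.
    generator = tf.keras.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Conv2D(3, 3, padding="same", activation="tanh"),  # image in -> image out
    ], name="generator")

    discriminator = tf.keras.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # probability that the image is real
    ], name="discriminator")

    # The adversarial coupling: the generator's output goes straight into
    # the discriminator, which judges it as real or fake.
    gan = tf.keras.Sequential([generator, discriminator], name="gan")
    gan.summary()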
 
 
Constructing the Generator:

[Image: what a basic generator looks like]
      

How we build it:

We take an input image with shape (128, 128, 3), where (128, 128, 3) means an image 128 px wide, 128 px high, with 3 colour channels (RGB).
     
First, we reduce the image size using convolution layers.
    
We already have a brief idea of how Convolutional Neural Networks work, so let me explain what we do after applying the convolutions.
   
We apply an activation function to the resulting tensor. The activation function provides non-linearity, which helps the network learn complex functions; if we removed all the activation functions, the network could only learn linear functions, which would not be of much help.

Here we apply the LeakyReLU activation function. LeakyReLU is a slight modification of the ReLU function: it addresses the problem ReLU has with negative inputs (they are all mapped to zero) by outputting 0.01x for x < 0 instead.
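
As a quick check of that behaviour, LeakyReLU with a slope of 0.01 passes positive values through unchanged and only scales negative ones down (in TensorFlow 2.x the slope is passed as alpha, and the default is larger, so we set it explicitly):

    import tensorflow as tf

    # f(x) = x for x >= 0, and 0.01 * x for x < 0
    leaky = tf.keras.layers.LeakyReLU(alpha=0.01)
    print(leaky(tf.constant([-2.0, -0.5, 0.0, 1.5])).numpy())
    # outputs roughly [-0.02, -0.005, 0.0, 1.5]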
    
Finally we get a shape (x, y, z) that is reduced compared to the input, where x < 128, y < 128, and z is the number of feature maps.
       
   
We repeat these steps until the shape becomes (1, 1, Z), where Z is the number of feature maps at the bottleneck.
    
These steps are called downsampling the input shape (the input image).
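
A minimal sketch of this downsampling path in Keras might look as follows. The kernel sizes, filter counts, and number of blocks are illustrative assumptions rather than the exact values of the original model, but the pattern (a strided Conv2D followed by LeakyReLU, repeated until the feature map is 1x1) is the one described above:

    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = tf.keras.Input(shape=(128, 128, 3))  # 128x128 RGB input image

    x = inputs
    # Each strided convolution halves the spatial size:
    # 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2 -> 1
    for filters in (64, 128, 256, 256, 256, 256, 512):
        x = layers.Conv2D(filters, kernel_size=4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(alpha=0.01)(x)

    print(x.shape)  # (None, 1, 1, 512) -- the (1, 1, Z) bottleneck with Z = 512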
         
Now comes the part the generator really needs: upsampling, which converts the (1, 1, Z) shape back into (128, 128, 3). That output is the generated image, produced by applying multiple upsampling steps to the (1, 1, Z) bottleneck.

Here we apply transposed convolutions (Conv2DTranspose) to convert (1, 1, Z) into (p, q, r), where p > 1, q > 1 and r < Z.
[We prefer transposed convolution over interpolation because interpolation is essentially manual feature engineering with nothing for the network to learn; put simply, a transposed convolution is the opposite of Conv2D.]
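
Continuing the sketch above (so inputs, x, and layers come from the downsampling snippet, and the filter counts remain illustrative), Conv2DTranspose layers with stride 2 double the spatial size at every step until we are back at (128, 128, 3):

    # Each transposed convolution doubles the spatial size:
    # 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128
    for filters in (512, 256, 256, 128, 64, 32):
        x = layers.Conv2DTranspose(filters, kernel_size=4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(alpha=0.01)(x)

    # Final layer maps back to 3 channels (RGB); tanh keeps pixel values in [-1, 1].
    outputs = layers.Conv2DTranspose(3, kernel_size=4, strides=2,
                                     padding="same", activation="tanh")(x)

    generator = tf.keras.Model(inputs, outputs, name="generator")
    print(outputs.shape)  # (None, 128, 128, 3)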
   
         
[Image: what our generator looks like]
   
  
 
Constructing the Discriminator:

[Image: what a basic discriminator looks like]
         
         
How we build it:
     
          The discriminator's training data comes from two sources:

    Real data instances, such as real pictures of people. The discriminator uses these instances as positive examples during training.
    Fake data instances created by the generator. The discriminator uses these instances as negative examples during training.
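
A minimal sketch of such a discriminator, following the same pattern as the generator's encoder (strided Conv2D plus LeakyReLU, with illustrative filter counts), reduces the image to a single sigmoid unit that scores it as real (close to 1) or fake (close to 0):

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_discriminator():
        inputs = tf.keras.Input(shape=(128, 128, 3))  # a real or generated image
        x = inputs
        # Downsample with strided convolutions, as in the generator's encoder.
        for filters in (64, 128, 256, 512):
            x = layers.Conv2D(filters, kernel_size=4, strides=2, padding="same")(x)
            x = layers.LeakyReLU(alpha=0.01)(x)
        x = layers.Flatten()(x)
        outputs = layers.Dense(1, activation="sigmoid")(x)  # P(image is real)
        return tf.keras.Model(inputs, outputs, name="discriminator")

    discriminator = build_discriminator()
    discriminator.compile(optimizer="adam", loss="binary_crossentropy")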


         
[Image: what our discriminator looks like]

 

How we train our GAN:

 Using an example of creating synthetic images of money, let’s walk
   through the specific parts and functions of a GAN architecture.

 Noise is fed into the generator. Since the generator hasn’t been
     trained yet, the output will look like noise in the beginning.


Training data and the output of the generator are sent to the discriminator, which is trained in parallel to tell real images from fake ones. The discriminator's output will not be very accurate at the beginning, since this part of the network is also still being trained, but its accuracy improves over time.

Feedback: the output of the discriminator can be fed back to the generator and the discriminator, which use this information to update their parameters and try to improve their accuracy.


The goal of the discriminator, when shown an instance from the true dataset, is to recognize those images that are authentic. Meanwhile, the generator is creating new, synthetic images that it passes to the discriminator. It does so in the hopes that they, too, will be deemed authentic, even though they are fake. The goal of the generator is to generate passable images: to lie without being caught. The goal of the discriminator is to identify images coming from the generator as fake.
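
Putting it all together, one common way to run this adversarial loop in Keras is sketched below. It is only a hedged sketch: generator and discriminator are assumed to be the models built in the earlier snippets, real_images is assumed to be your own array of training images scaled to [-1, 1], and the alternating update scheme is a standard recipe rather than necessarily the exact procedure used for the original model.

    import numpy as np
    import tensorflow as tf

    # Assumed from the earlier sketches / your data pipeline:
    #   generator, discriminator : the two Keras models built above
    #   real_images              : float32 array of real images scaled to [-1, 1]

    # Compile the discriminator on its own, then freeze it inside the
    # combined model so that only the generator is updated through it.
    discriminator.compile(optimizer="adam", loss="binary_crossentropy")
    discriminator.trainable = False
    gan = tf.keras.Sequential([generator, discriminator], name="gan")
    gan.compile(optimizer="adam", loss="binary_crossentropy")

    batch_size = 32
    input_shape = generator.input_shape[1:]  # whatever the generator expects

    for step in range(1000):
        # 1) Feed the generator and produce a batch of fake images.
        gen_input = np.random.normal(size=(batch_size, *input_shape)).astype("float32")
        fake_images = generator.predict(gen_input, verbose=0)

        # 2) Train the discriminator: real images labelled 1, fakes labelled 0.
        idx = np.random.randint(0, len(real_images), batch_size)
        discriminator.train_on_batch(real_images[idx], np.ones((batch_size, 1)))
        discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))

        # 3) Feedback step: train the generator through the combined model.
        #    It improves when the frozen discriminator labels its output as real.
        gan.train_on_batch(gen_input, np.ones((batch_size, 1)))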
 
 
