Saturday, June 19, 2021

Training Decision Forests with TensorFlow.

How to Train Decision Forests with TensorFlow.

Decision Forests are a family of machine learning algorithms with quality and speed competitive with (and often superior to) neural networks, especially when you're working with tabular data. They're built from many decision trees, which makes them easy to use and understand, and you can take advantage of the plethora of interpretability tools and techniques that already exist today.


In this tutorial, we will show how easy it is to train a model with TensorFlow Decision Forests.

Reference:

https://github.com/tensorflow/decision-forests

Step 1:

import tensorflow_decision_forests as tfdf 
import pandas as pd                       

Why do we import pandas here?

pandas is the go-to tool for preprocessing tabular data such as text, CSV, and XLSX files.

We also need to install the package:

pip3 install tensorflow_decision_forests --upgrade

Because this feature arrived in a recent TensorFlow update, we need to install the package above for Decision Forest training.

Step 2:

# Load the dataset in a Pandas dataframe.

train_df = pd.read_csv("project/train.csv")
test_df = pd.read_csv("project/test.csv")

We use pandas to read the CSV files. Now load this data with tfdf, the newly installed TensorFlow Decision Forests package:
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_df, label="my_label")
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_df, label="my_label")

# Train the model
model = tfdf.keras.RandomForestModel()
model.fit(train_ds)

We can also view a summary of the model, which reports its structure and what was done during training.

# Look at the model.

model.summary()

# Evaluate the model.
model.evaluate(test_ds)

# Export to a TensorFlow SavedModel.
# Note: the model is compatible with Yggdrasil Decision Forests.
model.save("project/model") 

TensorFlow Decision Forests


https://upload.wikimedia.org/wikipedia/commons/7/76/Random_forest_diagram_complete.png

Reference: https://en.wikipedia.org/wiki/Random_forest

Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees. For regression tasks, the mean or average prediction of the individual trees is returned. Random decision forests correct for decision trees' habit of overfitting to their training set. Random forests generally outperform decision trees, but their accuracy is lower than gradient boosted trees. However, data characteristics can affect their performance.
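To make the aggregation rule concrete, here is a toy NumPy illustration (not TF-DF internals; the per-tree outputs are invented numbers) of how a forest combines the predictions of its individual trees:

import numpy as np

# Hypothetical outputs from five individual trees for one example.
tree_votes = np.array([1, 0, 1, 1, 0])              # classification: class votes
tree_values = np.array([3.2, 2.9, 3.5, 3.1, 3.0])   # regression: numeric predictions

# Classification: the forest outputs the class selected by most trees.
majority_class = np.bincount(tree_votes).argmax()   # -> 1

# Regression: the forest returns the mean prediction of the individual trees.
mean_prediction = tree_values.mean()                # -> 3.14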

The first algorithm for random decision forests was created in 1995 by Tin Kam Ho using the random subspace method, which, in Ho's formulation, is a way to implement the "stochastic discrimination" approach to classification proposed by Eugene Kleinberg.

An extension of the algorithm was developed by Leo Breiman and Adele Cutler, who registered "Random Forests" as a trademark in 2006 (as of 2019, owned by Minitab, Inc.). The extension combines Breiman's "bagging" idea and random selection of features, introduced first by Ho and later independently by Amit and Geman, in order to construct a collection of decision trees with controlled variance.

Random forests are frequently used as "black box" models in businesses, as they produce reasonable predictions across a wide range of data while requiring little configuration.

Now, what we are about to see is genuinely exciting, because this is the first time the TensorFlow team has introduced Decision Forests in TensorFlow 2.x.

 

The TensorFlow team says they are happy to bring this feature to TensorFlow 2.x, and they have released it as the open-source TensorFlow Decision Forests (TF-DF) library.

They describe TF-DF as a collection of production-ready, state-of-the-art algorithms for training, serving and interpreting decision forest models (including random forests and gradient boosted trees). You can now use these models for classification, regression and ranking tasks, with the flexibility and composability of TensorFlow and Keras.

 

If you're already using decision forests outside of TensorFlow, here's a bit of what TF-DF offers:

---->A slew of state-of-the-art Decision Forest training and serving algorithms such as random forests, gradient-boosted trees, CART, (Lambda)MART, DART, Extra Trees, greedy global growth, oblique trees, one-side-sampling, categorical-set learning, random categorical learning, out-of-bag evaluation and feature importance, and structural feature importance (a quick sketch of training one of these learners follows after this list).

---->This library can serve as a bridge to the rich TensorFlow ecosystem by making it easier for you to integrate tree-based models with various TensorFlow tools, libraries, and platforms such as TFX.
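As a quick illustration of the first point, here is a hedged sketch of training one of the other TF-DF learners, a gradient-boosted trees model, reusing the train_ds and test_ds objects from Step 2:

import tensorflow_decision_forests as tfdf

# Swap the learner while keeping the same Keras workflow as before.
gbt_model = tfdf.keras.GradientBoostedTreesModel()
gbt_model.fit(train_ds)
gbt_model.evaluate(test_ds)

# Feature importances and training logs are exposed through the model inspector.
inspector = gbt_model.make_inspector()
print(inspector.variable_importances())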

 

Thank you!

We will meet again with more on how to train a Decision Forest with TF.

Friday, June 18, 2021

AutoEncoders Using TensorFlow.

AutoEncoders [A method of downsampling and upsampling an image for restoration]

Introduction:

In the last few years, deep learning based generative models have gained more and more interest due to (and implying) some amazing improvements in the field. Relying on huge amounts of data, well-designed network architectures and smart training techniques, deep generative models have shown an incredible ability to produce highly realistic pieces of content of various kinds, such as images, text and sounds. Among these deep generative models, two major families stand out and deserve special attention: Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).

A variational autoencoder can be defined as an autoencoder whose training is regularized to avoid overfitting and to ensure that the latent space has good properties that enable a generative process.

AutoEncoders:

The general idea of autoencoders is pretty simple: set up an encoder and a decoder as neural networks and learn the best encoding-decoding scheme using an iterative optimization process. At each iteration we feed the autoencoder architecture (the encoder followed by the decoder) with some data, compare the encoded-decoded output with the initial data, and backpropagate the error through the architecture to update the weights of the networks.

Intuitively, the overall autoencoder architecture (encoder + decoder) creates a bottleneck for the data that ensures only the main structured part of the information can go through and be reconstructed. In our general framework, the family E of considered encoders is defined by the encoder network architecture, the family D of considered decoders is defined by the decoder network architecture, and the search for the encoder and decoder that minimize the reconstruction error is done by gradient descent over the parameters of these networks.
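A minimal Keras sketch of this encoder + decoder scheme follows. The layer sizes, the 784-dimensional input (for example, a flattened 28x28 image) and the 32-dimensional bottleneck are illustrative assumptions, not a prescribed architecture:

import tensorflow as tf
from tensorflow.keras import layers

input_dim = 784    # e.g. a flattened 28x28 image
latent_dim = 32    # size of the bottleneck (latent space)

# Encoder: compress the input down to the bottleneck.
encoder = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(input_dim,)),
    layers.Dense(latent_dim, activation="relu"),
])

# Decoder: reconstruct the input from the bottleneck.
decoder = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(input_dim, activation="sigmoid"),
])

# Autoencoder = encoder followed by decoder; the training target is the input
# itself, so the loss is the reconstruction error.
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10)  # note: inputs double as targets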

 

[Image: autoencoder architecture]


Now, let's assume that both the encoder and the decoder are deep and non-linear. In that case, the more complex the architecture is, the further the autoencoder can push dimensionality reduction while keeping the reconstruction loss low.

Intuitively, if our encoder and our decoder have enough degrees of freedom, we can reduce any initial dimensionality to 1.

Here, we should however keep two things in mind. First, a large dimensionality reduction with no reconstruction loss often comes at a price: the lack of interpretable and exploitable structure in the latent space (lack of regularity).

Second, most of the time the final purpose of dimensionality reduction is not merely to reduce the number of dimensions of the data, but to do so while keeping the major part of the data's structural information in the reduced representations.

For these two reasons, the dimension of the latent space and the "depth" of autoencoders (which define the degree and quality of compression) have to be carefully controlled and adjusted depending on the final purpose of the dimensionality reduction.

Limitations of Autoencoders:

At this point, a natural question that comes to mind is: "what is the link between autoencoders and content generation?". Indeed, once the autoencoder has been trained, we have both an encoder and a decoder, but still no real way to produce new content.

At first sight, we could be tempted to think that, if the latent space is regular enough (well "organized" by the encoder during the training process), we could take a point randomly from that latent space and decode it to get new content.

A Basic GANs Model Using TensorFlow.

Building a GANs (Generative Adversarial Networks) Model Using TF


Let me explain how our model was designed and how the neural network layers work as they learn restoration.
Here we apply a number of neural network layers to the input image and assemble them into a block called a GAN.
 
 GANs(Generative adversarial networks):

GANs, or Generative Adversarial Networks, are a type of deep neural network used to generate synthetic images. The architecture comprises two deep neural networks, a generator and a discriminator, which work against each other (thus, “adversarial”). The generator generates new data instances, while the discriminator evaluates the data for authenticity and decides whether each instance of data is "real" from the training dataset, or "fake" from the generator.
 Together, the generator and discriminator are trained to work against each other until the generator is able to create realistic synthetic data that the discriminator can no longer determine is fake. After successful training, the data produced by the generator can be used to create new synthetic data, for potential use as input to other deep neural networks.

GANs are versatile in that they can learn to generate new instances of any datatype, such as synthetic images of faces, new songs in a certain style, or text of a specific genre.


[Image: GAN architecture (generator and discriminator)]

Construction of GANs:

We already know that GANs are constructed from two submodels:
    1) Generator
    2) Discriminator
We discussed above how the generator and discriminator work.

Here we are going to see how the generator and the discriminator are built from artificial neural networks, and how we develop a GAN from these submodels.
 
 
Constructing the Generator:

What a basic generator actually looks like:
     

[Image: basic generator architecture]
      

How we build it:

Take an input image with shape (128, 128, 3); here (128, 128, 3) means an image 128 px wide and 128 px high, with 3 channels (RGB).
     
First, we reduce the image size using convolution layers:
    
We already have a brief idea of Convolutional Neural Networks, so let me explain what we do after applying them.
   
Apply activation functions to the resulting shape.
The activation function provides the non-linearity that helps the network learn complex functions; if we removed all the activation functions, our network would only be able to learn linear functions, which would not be of much help.
Here we apply the LeakyReLU activation function.
LeakyReLU is a slight modification of the ReLU function.
It solves the problem we face with ReLU (negative values as input) by outputting 0.01x for x < 0.
    
Finally we get a shape (x, y, z), reduced compared to the input shape, where x < 128, y < 128, z ∈ R.
       
   
We repeat the steps above until the shape becomes (1, 1, Z), where Z ∈ R.
    
These steps are called downsampling of the input shape (the input image).
         
Now comes the process the generator really needs, called upsampling, which converts the (1, 1, Z) shape back into (128, 128, 3).
The resulting image is the generated image, produced by applying multiple upsampling steps to the (1, 1, Z) shape.
   
Here we apply transposed convolutions (Conv2DTranspose) to convert (1, 1, Z) into (p, q, r) where p > 1, q > 1, r < Z.
[We prefer transposed convolution over interpolation because interpolation is like manual feature engineering and there is nothing the network can learn from it; simply put, it is the opposite of Conv2D.]
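Putting the pieces above together, here is a rough sketch of such a generator: stride-2 Conv2D + LeakyReLU blocks downsample (128, 128, 3) to (1, 1, Z), then Conv2DTranspose blocks upsample back to (128, 128, 3). The filter counts and the number of blocks are illustrative assumptions:

import tensorflow as tf
from tensorflow.keras import layers

def build_generator():
    inputs = layers.Input(shape=(128, 128, 3))
    x = inputs
    # Downsampling: each stride-2 convolution halves width and height (128 -> 1 after 7 blocks).
    for filters in [64, 128, 256, 512, 512, 512, 512]:
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.01)(x)  # outputs 0.01*x for x < 0, as described above
    # x now has shape (1, 1, 512).
    # Upsampling: transposed convolutions grow the shape back toward 128x128.
    for filters in [512, 512, 512, 256, 128, 64]:
        x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same")(x)
        x = layers.ReLU()(x)
    # Final block restores the (128, 128, 3) image shape.
    outputs = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)
    return tf.keras.Model(inputs, outputs)

generator = build_generator()
generator.summary()  # final output shape: (None, 128, 128, 3)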
   
         
What our generator looks like:

[Image: full generator architecture]
   
  
 
Constructing the Discriminator:

 
 
What a basic discriminator actually looks like:
     
     
[Image: basic discriminator architecture]
         
         
How we build it:
     
          The discriminator's training data comes from two sources:

    Real data instances, such as real pictures of people. The discriminator uses these instances as positive examples during training.
    Fake data instances created by the generator. The discriminator uses these instances as negative examples during training.


         
What our discriminator looks like:

[Image: full discriminator architecture]
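A hedged sketch of such a discriminator follows: stride-2 convolutions shrink the input image, and a single sigmoid unit scores it as real (1) or fake (0). The filter counts and depth are illustrative assumptions:

import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator():
    return tf.keras.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Conv2D(64, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.01),
        layers.Conv2D(128, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.01),
        layers.Conv2D(256, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.01),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # probability that the input is real
    ])

discriminator = build_discriminator()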

 

How we train our GANs:

Using an example of creating synthetic images of money, let’s walk through the specific parts and functions of a GAN architecture.

Noise is fed into the generator. Since the generator hasn’t been trained yet, the output will look like noise in the beginning.


Training data and the output of the generator are sent to the discriminator, which is being trained in parallel to identify real/fake images. The output of the discriminator at the beginning will not be very accurate, as this portion of the network is also being trained; accuracy will improve over time.

Feedback: The output of the discriminator can be fed back to the generator and the discriminator, which can use this information to update parameters and attempt to improve accuracy.


The goal of the discriminator, when shown an instance from the true  dataset, is to recognize those images that are authentic. Meanwhile, the generator is creating new, synthetic images that it passes to the  discriminator. It does so in the hopes that they, too, will be deemed  authentic, even though they are fake. The goal of the generator is to generate passable images: to lie without being caught. The goal of the discriminator is to identify images coming from the generator as fake.
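The loop described above can be condensed into a single training step. The sketch below assumes the generator and discriminator sketched earlier (so the noise is image-shaped); the Adam optimizers, learning rate, and binary cross-entropy loss are standard DCGAN-style assumptions, not the only possible setup:

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
gen_opt = tf.keras.optimizers.Adam(1e-4)
disc_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    # Noise is fed into the generator (image-shaped here to match the sketch above).
    noise = tf.random.normal(tf.shape(real_images))
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        fake_images = generator(noise, training=True)
        real_out = discriminator(real_images, training=True)
        fake_out = discriminator(fake_images, training=True)
        # Discriminator: label real images 1 and generated images 0.
        disc_loss = (bce(tf.ones_like(real_out), real_out)
                     + bce(tf.zeros_like(fake_out), fake_out))
        # Generator: try to make the discriminator call its fakes "real".
        gen_loss = bce(tf.ones_like(fake_out), fake_out)
    gen_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
    disc_grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    gen_opt.apply_gradients(zip(gen_grads, generator.trainable_variables))
    disc_opt.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    return gen_loss, disc_loss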
 
 
