source,full_text,title
https://nepalprabin.github.io./posts/2020-06-05-paper-explanation-going-deeper-with-convolutions-googlenet.html,"Paper Explanation: Going deeper with Convolutions (GoogLeNet)
Google proposed a deep convolutional neural network named Inception that achieved top results for classification and detection in ILSVRC 2014.
The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) evaluates algorithms for object detection and image classification at large scale. One high level motivation is to allow researchers to compare progress in detection across a wider variety of objects – taking advantage of the quite expensive labeling effort. Another motivation is to measure the progress of computer vision for large scale image indexing for retrieval and annotation
http://www.image-net.org/challenges/LSVRC/
“Going deeper with convolutions” is actually inspired by an internet meme: ‘We need to go deeper’
In ILSVRC 2014, GoogLeNet used 12x fewer parameters than AlexNet, the winning architecture from the 2012 competition two years earlier.
Problems Inception v1 is trying to solve
The important parts of an image can vary greatly in size. For instance, the object of interest can appear in various positions, and some pictures are zoomed in while others are zoomed out. Because of such variation, choosing the right kernel size for the convolution operation becomes very difficult: a larger kernel is preferred for extracting information about an object that is spread over more of the picture, and a smaller kernel for an object that occupies less of it.
One of the major approaches to increase the performance of neural networks is to increase their size, both in depth and in width. A bigger network has a larger number of parameters, which makes it more prone to overfitting, especially when labeled training examples are limited.
Another drawback of increased network size is the increased use of computational resources. Chaining more convolution layers results in more computation, and if the added capacity is used ineffectively, those computational resources are wasted.
Solution
To solve these issues, the paper proposes making the network ‘wider’ rather than just ‘deeper’, using what is called the Inception module.
The ‘naive’ Inception module performs convolutions on the input from the previous layer with 3 different kernel (filter) sizes: 1x1, 3x3, and 5x5. In addition, max pooling is performed. The outputs are then concatenated and sent to the next Inception module.
One problem with the ‘naive’ approach is that even a modest number of 5x5 convolutions can be very expensive computationally. This problem becomes even more pronounced once pooling is added.
To make the network computationally cheaper, the authors applied dimensionality reduction by adding 1x1 convolutions before the 3x3 and 5x5 convolutions. Let’s see how this affects the number of operations in the network.
Let’s see what a 5x5 convolution would cost computationally.
The computation for the above convolution operation is:
(5²)(192)(32)(28²) = 120,422,400 operations
To bring down such a large number of operations, dimensionality reduction can be used. Here, it is done by convolving with 1x1 filters before performing convolutions with bigger filters.
5×5 Convolution with Dimensionality Reduction
After dimensionality reduction, the number of operations for the 5x5 convolution becomes:
(1²)(192)(16)(28²) = 2,408,448 operations for the 1x1 convolution and,
(5²)(16)(32)(28²) = 10,035,200 operations for the 5x5 convolution.
In total there are 2,408,448 + 10,035,200 = 12,443,648 operations, nearly a 10x reduction in computation.
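As a quick sanity check, these operation counts can be reproduced with a few lines of Python (a rough multiply count for a 28x28x192 input, ignoring biases and padding effects):
naive_5x5 = (5 ** 2) * 192 * 32 * (28 ** 2)   # direct 5x5 convolution: 120,422,400
reduce_1x1 = (1 ** 2) * 192 * 16 * (28 ** 2)  # 1x1 reduction: 2,408,448
reduced_5x5 = (5 ** 2) * 16 * 32 * (28 ** 2)  # 5x5 after the reduction: 10,035,200
print(naive_5x5, reduce_1x1 + reduced_5x5)    # 120422400 12443648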
So, after applying dimensionality reduction, our inception module becomes:
GoogLeNet was built using the Inception module with dimensionality reduction. It is a 22-layer deep network (27 layers if pooling layers are included). All the convolutions, including those inside the Inception modules, use rectified linear activation.
GoogLeNet incarnation of the Inception architecture. Source: Original Paper
All the convolutions, including those inside the Inception modules, use rectified linear activation. The size of the receptive field in our network is 224x224 taking RGB color channels with mean subtraction. “#3x3reduce” and “#5x5reduce” stands for the number of 1x1 filters in the reduction layer used before the 3x3 and 5x5 convolutions. One can see the number of 1x1 filters in the projection layer after the built-in max-pooling in the pool proj column. All these reduction/projection layers use rectified linear activation as well.
Original Paper
GoogLeNet is 22 layers deep counting only layers with parameters. In such a deep network, a problem such as vanishing gradients may arise. To mitigate this, the authors introduced auxiliary classifiers connected to intermediate layers, which help the gradient signals propagate back. These auxiliary classifiers are added on top of the outputs of the Inception (4a) and (4d) modules. The losses from the auxiliary classifiers are added (with a smaller weight) to the total loss during training, and the classifiers are discarded during inference.
The exact structure of the extra network on the side, including the auxiliary classifier, is as follows:
- An average pooling layer with 5x5 filter size and stride 3, resulting in a 4x4x512 output for the (4a) stage and a 4x4x528 output for the (4d) stage.
- A 1x1 convolution with 128 filters for dimension reduction and rectified linear activation.
- A fully connected layer with 1024 units and rectified linear activation.
- A dropout layer with 70% ratio of dropped outputs
- A linear layer with softmax loss as the classifier (predicting the same 1000 classes as the main classifier, but removed at inference time).
A schematic view of the GoogLeNet architecture is shown below:
GoogLeNet architecture
GoogLeNet consists of a total of 9 Inception modules, namely 3a, 3b, 4a, 4b, 4c, 4d, 4e, 5a, and 5b.
GoogLeNet implementation
Having covered the Inception module and its role in the GoogLeNet architecture, we now implement GoogLeNet in TensorFlow. This implementation of GoogLeNet is inspired by the Analytics Vidhya article on Inception networks.
Importing the required libraries:
from tensorflow.keras.layers import Layer
import tensorflow.keras.backend as K
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Dense, Input, concatenate, GlobalAveragePooling2D, AveragePooling2D, Flatten
import cv2
import numpy as np
from tensorflow.keras.utils import to_categorical
import math
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import LearningRateScheduler
Next, we use the CIFAR-10 dataset as our data.
num_classes = 10
def load_cifar_data(img_rows, img_cols):
    # Loading training and validation datasets
    (X_train, Y_train), (X_valid, Y_valid) = cifar10.load_data()
    # Resizing images to the GoogLeNet input size
    X_train = np.array([cv2.resize(img, (img_rows, img_cols)) for img in X_train])
    X_valid = np.array([cv2.resize(img, (img_rows, img_cols)) for img in X_valid])
    # Transform targets to Keras-compatible one-hot format
    Y_train = to_categorical(Y_train, num_classes)
    Y_valid = to_categorical(Y_valid, num_classes)
    X_train = X_train.astype('float32')
    X_valid = X_valid.astype('float32')
    # Scaling pixel values to [0, 1]
    X_train = X_train / 255.0
    X_valid = X_valid / 255.0
    return X_train, Y_train, X_valid, Y_valid
X_train, y_train, X_test, y_test = load_cifar_data(224, 224)
Next comes our inception module
The Inception module applies 1x1 convolutions before the 3x3 and 5x5 convolution operations. It takes a different number of filters for each convolution branch and concatenates the branch outputs before passing them to the next layer.
def inception_module(x, filters_1x1, filters_3x3_reduce, filters_3x3, filters_5x5_reduce, filters_5x5, filters_pool_proj, name=None):
    # 1x1 convolution branch
    conv_1x1 = Conv2D(filters_1x1, (1,1), activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(x)
    # 1x1 reduction followed by 3x3 convolution
    conv_3x3 = Conv2D(filters_3x3_reduce, (1,1), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(x)
    conv_3x3 = Conv2D(filters_3x3, (3,3), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(conv_3x3)
    # 1x1 reduction followed by 5x5 convolution
    conv_5x5 = Conv2D(filters_5x5_reduce, (1,1), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(x)
    conv_5x5 = Conv2D(filters_5x5, (5,5), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(conv_5x5)
    # 3x3 max pooling followed by a 1x1 projection
    pool_proj = MaxPool2D((3,3), strides=(1,1), padding='same')(x)
    pool_proj = Conv2D(filters_pool_proj, (1, 1), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(pool_proj)
    # Concatenate all branches along the channel axis
    output = concatenate([conv_1x1, conv_3x3, conv_5x5, pool_proj], axis=3, name=name)
    return output
import tensorflow
kernel_init = tensorflow.keras.initializers.GlorotUniform()
bias_init = tensorflow.initializers.Constant(value=0.2)
input_layer = Input(shape=(224, 224, 3))
x = Conv2D(64, (7,7), padding='same', strides=(2, 2), activation='relu', name='conv_1_7x7/2', kernel_initializer=kernel_init, bias_initializer=bias_init)(input_layer)
x = MaxPool2D((3,3), padding='same', strides=(2,2), name='max_pool_1_3x3/2')(x)
x = Conv2D(64, (1,1), padding='same', strides=(1, 1), activation='relu', name='conv_2a_3x3/1', kernel_initializer=kernel_init, bias_initializer=bias_init)(x)
x = Conv2D(192, (3,3), padding='same', strides=(1, 1), activation='relu', name='conv_2b_3x3/1', kernel_initializer=kernel_init, bias_initializer=bias_init)(x)
x = MaxPool2D((3,3), padding='same', strides=(2, 2), name='max_pool_2_3x3/2')(x)
x = inception_module(x,
filters_1x1=64,
filters_3x3_reduce=96,
filters_3x3=128,
filters_5x5_reduce=16,
filters_5x5=32,
filters_pool_proj=32,
name='inception_3a')
x = inception_module(x,
filters_1x1=128,
filters_3x3_reduce=128,
filters_3x3=192,
filters_5x5_reduce=32,
filters_5x5=96,
filters_pool_proj=64,
name='inception_3b')
x = MaxPool2D((3,3), strides=(2, 2), padding='same', name='max_pool_3_3x3/2')(x)
x = inception_module(x,
filters_1x1=192,
filters_3x3_reduce=96,
filters_3x3=208,
filters_5x5_reduce=16,
filters_5x5=48,
filters_pool_proj=64,
name='inception_4a')
x1 = AveragePooling2D((5,5), strides=3)(x)
x1 = Conv2D(128, (1,1), padding='same', activation='relu')(x1)
x1 = Flatten()(x1)
x1 = Dense(1024, activation='relu')(x1)
x1 = Dropout(0.4)(x1)
x1 = Dense(10, activation='softmax', name='auxiliary_output_1')(x1)
x = inception_module(x,
filters_1x1=160,
filters_3x3_reduce=112,
filters_3x3=224,
filters_5x5_reduce=24,
filters_5x5=64,
filters_pool_proj=64,
name='inception_4b')
x = inception_module(x,
filters_1x1=128,
filters_3x3_reduce=128,
filters_3x3=256,
filters_5x5_reduce=24,
filters_5x5=64,
filters_pool_proj=64,
name='inception_4c')
x = inception_module(x,
filters_1x1=112,
filters_3x3_reduce=144,
filters_3x3=288,
filters_5x5_reduce=32,
filters_5x5=64,
filters_pool_proj=64,
name='inception_4d')
x2 = AveragePooling2D((5,5), strides=3)(x)
x2 = Conv2D(128, (1,1), padding='same', activation='relu')(x2)
x2 = Flatten()(x2)
x2 = Dense(1024, activation='relu')(x2)
x2 = Dropout(0.4)(x2)
x2 = Dense(10, activation='softmax', name='auxiliary_output_2')(x2)
x = inception_module(x,
filters_1x1=256,
filters_3x3_reduce=160,
filters_3x3=320,
filters_5x5_reduce=32,
filters_5x5=128,
filters_pool_proj=128,
name='inception_4e')
x = MaxPool2D((3,3), strides=(2,2), padding='same', name='max_pool_4_3x3/2')(x)
x = inception_module(x,
filters_1x1=256,
filters_3x3_reduce=160,
filters_3x3=320,
filters_5x5_reduce=32,
filters_5x5=128,
filters_pool_proj=128,
name='inception_5a')
x = inception_module(x,
filters_1x1=384,
filters_3x3_reduce=192,
filters_3x3=384,
filters_5x5_reduce=48,
filters_5x5=128,
filters_pool_proj=128,
name='inception_5b')
x = GlobalAveragePooling2D(name='avg_pool_5_3x3/1')(x)
x = Dropout(0.4)(x)
x = Dense(10, activation='softmax', name='output')(x)
model = Model(input_layer, [x, x1, x2], name='inception_v1')
Getting the summary of the model
model.summary()
epochs = 25
initial_lrate = 0.01
def decay(epoch, steps=100):
    initial_lrate = 0.01
    drop = 0.96
    epochs_drop = 8
    lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
    return lrate
sgd = SGD(learning_rate=initial_lrate, momentum=0.9, nesterov=False)
lr_sc = LearningRateScheduler(decay, verbose=1)
model.compile(loss=['categorical_crossentropy', 'categorical_crossentropy', 'categorical_crossentropy'], loss_weights=[1, 0.3, 0.3], optimizer=sgd, metrics=['accuracy'])
Using our model to fit the training data
history = model.fit(X_train, [y_train, y_train, y_train], validation_data=(X_test, [y_test, y_test, y_test]), epochs=epochs, batch_size=256, callbacks=[lr_sc])
References
- https://www.analyticsvidhya.com/blog/2018/10/understanding-inception-network-from-scratch/
- Going Deeper with Convolutions
- https://towardsdatascience.com/a-simple-guide-to-the-versions-of-the-inception-network-7fc52b863202",Paper Explanation: Going deeper with Convolutions (GoogLeNet)
https://nepalprabin.github.io./posts/2021-07-27-illustrated-vision-transformers.html,"Illustrated Vision Transformers
Introduction
Ever since the Transformer was introduced in 2017, it has seen huge success in the field of Natural Language Processing (NLP), and almost all NLP tasks now use Transformers. The main reason for the Transformer’s effectiveness is its ability to handle long-term dependencies compared to RNNs and LSTMs. After its success in NLP, there have been various attempts to use it for Computer Vision tasks. The paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al. proposes such a use of the Transformer and achieves great results on various Computer Vision tasks.
The Vision Transformer (ViT) makes use of an extremely large dataset when training the model. When trained on datasets such as ImageNet (which the paper labels a mid-sized dataset), the model’s accuracy falls below that of comparable ResNets. This is because the Transformer lacks inductive biases such as translation equivariance and locality, and thus does not generalize well when trained on insufficient data.
Overview of Vision Transformer
- Split the image into patches
- Provide the sequence of linear embeddings of these patches as input to the Transformer (flattening each patch). Image patches are treated the same way as tokens in NLP tasks
- Add positional embeddings and a learnable [class] embedding (similar to BERT) to the patch embeddings
- Pass these (patch + positional + [class]) embeddings through the Transformer encoder and get the output value for the [class] token
- Pass the representation of the [class] token through an MLP head to get the final class predictions.
Method
The figure above depicts an overview of the Vision Transformer. As shown in the figure, the given image is split into patches. These image patches are flattened and passed to the Transformer encoder as a sequence of tokens. Along with the patch embeddings, position embeddings are also passed as input to the encoder to retain positional information.
How is an image changed into a sequence of vectors to feed into the transformer encoder?
Let’s decode the figure above by taking an RGB image of size \(256 \times 256 \times 3\). The first step is to create patches of size \(16 \times 16\) from the input image, giving \(16 \times 16 = 256\) total patches. After splitting the input image into patches, the next step is to linearly arrange all the patches. As seen in the figure, the first patch is placed on the far left and the last patch on the far right. Then, we linearly project each patch to get a \(1 \times 768\) vector representation (each \(16 \times 16 \times 3\) patch flattens to 768 values). These representations are known as patch embeddings. The size of the patch embedding matrix becomes \(256 \times 768\), since we have 256 patches, each represented as a \(1 \times 768\) vector.
Next, we prepend a learnable [class] token embedding and add position embeddings to the patch embeddings, making the size \(257 \times 768\). Position embeddings are used to retain positional information: unlike CNNs, the Transformer does not know the order of the patches, so we need to explicitly add information about the position of each patch.
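As a rough illustration (not from the original post), the patch-flattening and embedding step described above can be sketched in a few lines of PyTorch; the shapes follow the 256x256 example and all variable names here are illustrative:
import torch
import torch.nn as nn

P, D = 16, 768                                    # patch size and embedding dimension
image = torch.randn(1, 3, 256, 256)               # (batch, channels, H, W)
# Split into non-overlapping 16x16 patches and flatten each one: (1, 256, 768)
patches = image.unfold(2, P, P).unfold(3, P, P)   # (1, 3, 16, 16, P, P)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, -1, 3 * P * P)
patch_embed = nn.Linear(3 * P * P, D)             # linear projection of flattened patches
cls_token = nn.Parameter(torch.zeros(1, 1, D))    # learnable [class] token
pos_embed = nn.Parameter(torch.zeros(1, 257, D))  # position embeddings (256 patches + [class])
tokens = torch.cat([cls_token, patch_embed(patches)], dim=1) + pos_embed
print(tokens.shape)  # torch.Size([1, 257, 768]), the sequence fed to the Transformer encoder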
Components of Vision Transformer
Since the Vision Transformer is based on the standard Transformer architecture, the only difference being that it is applied to image tasks rather than text, the components used here are almost the same. Below, we discuss the components used in the Vision Transformer along with their significance.
Side note: If you want to dive deep into the Transformer, then this post by Jay Alammar is a good place to start.
Patch embeddings
As the paper’s title, “An Image is Worth 16x16 Words”, suggests, the main takeaway of the paper is the breakdown of images into patches. Given an image \(x \in \mathbb{R}^{H \times W \times C}\), it is reshaped into a sequence of flattened 2D patches \(x_p \in \mathbb{R}^{N \times (P^2 \cdot C)}\), where \(N = \frac{HW}{P^2}\) and \((P, P)\) is the resolution of each image patch.
Learnable [class] embedding
A learnable embedding is prepended to the embedded patches, \(z_0^0 = x_{class}\). The state of this embedding at the output of the Transformer encoder, \(z_L^0\), serves as the image representation \(y\). A classification head is attached to \(z_L^0\) during both pre-training and fine-tuning.
Position Embeddings
Position embeddings are added to the patch embeddings (along with the [class] token), and the result is fed into the Transformer encoder.
Transformer Encoder
The Transformer encoder is the standard encoder architecture presented in the original Transformer paper. It takes the embedded patches (patch embedding, position embedding, and [class] embedding) as input. The encoder consists of alternating layers of multi-headed self-attention and MLP blocks. Layer normalization is applied before every block and a residual connection is applied after every block.
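A minimal PyTorch sketch of one such encoder block, assuming the 768-dimensional embeddings from above (the layer sizes are illustrative, loosely following ViT-Base):
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, dim=768, heads=12, mlp_dim=3072):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.GELU(), nn.Linear(mlp_dim, dim))

    def forward(self, x):                  # x: (batch, 257, 768)
        h = self.norm1(x)                  # layer norm before the attention block
        x = x + self.attn(h, h, h)[0]      # residual connection after self-attention
        x = x + self.mlp(self.norm2(x))    # layer norm before, residual after, the MLP block
        return x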
Using hybrid architecture
So far, raw image patches were used to form the input sequence. Another way to form the input sequence is from the feature maps of a CNN (Convolutional Neural Network); in this hybrid setup, the patches extracted from the CNN feature map are used as the patch embedding. From the paper:
As an alternative to raw image patches, the input sequence can be formed from feature maps of a CNN. In this hybrid model, the patch embedding projection E (Eq. 1) is applied to patches extracted from a CNN feature map. As a special case, the patches can have spatial size 1x1, which means that the input sequence is obtained by simply flattening the spatial dimensions of the feature map and projecting to the Transformer dimension.
References
- An image is worth 16 * 16 words: Transformers for image recognition at scale
- ViT Blog - Aman Arora
- The AI Summer",Illustrated Vision Transformers
https://nepalprabin.github.io./posts/2023-05-15-gpt4-summary.html,"Brief overview of GPT-4
Since the release of ChatGPT, there has been significant interest and discussion within the broader AI and natural language processing communities regarding its capabilities. ChatGPT has also captured the attention of the internet at large due to its remarkable ability to generate fluent and natural-sounding responses across a wide range of prompts and language tasks. As a result, it became the fastest-growing consumer application in history just two months after launch. ChatGPT is fine-tuned from a model in the GPT-3.5 series and can write articles, jokes, and poetry in response to a prompt. Though powerful, concerns have also been raised about the potential risks associated with it and other large language models (LLMs), particularly with respect to issues such as bias and misinformation. One of the major concerns for LLMs is that they suffer from hallucination.
A few months after releasing ChatGPT, OpenAI released GPT-4 (on 14th March 2023), an improved version of the GPT-3.5 model that supports multimodal data. It is capable of processing text and image inputs to generate text. It achieved human-level performance on various professional and academic benchmarks. On a simulated bar exam, GPT-4 achieved a score in the top 10% of test takers, whereas the score achieved by the previous model, GPT-3.5, fell in the bottom 10%. This shows the level of improvement achieved by the latest version of GPT. It is also important to mention that the model was not specifically trained on these exams, although a minority of the problems were seen by the model during training.
Capabilities of GPT-4
Though the report does not provide details about the architecture (including model size), hardware, training compute, dataset construction, or training method, a demo run by Greg Brockman (President and Co-founder, OpenAI) after the release of GPT-4 shows various capabilities of the model.
You can watch the GPT-4 Developer Livestream replay here:
1. Supports longer context
GPT-4 is capable of handling over 25,000 words of text, which enables its use in situations that require the creation of lengthy content, extended dialogues, or the exploration and analysis of extensive documents.
2. Hand-drawn pencil drawing turned into a fully functional website
GPT-4 is also capable of handling visual input, such as a hand-drawn pencil sketch that looks like a mock design, and generating the code to create a website from it. The generated output is impressive. Another notable aspect is the accuracy with which the model performs OCR on such messy handwriting.
Fig. Left is the mock design and right is the website created using the code generated by the GPT-4 model. (source)
3. GPT-4 can describe the image.
As opposed to text-only prompts (in previous GPT versions), this model accepts inputs containing both text and images, letting users specify any language or vision task. GPT-4 displays comparable skills on various types of content, such as documents containing both textual and visual elements like photographs, diagrams, or screenshots, as it does on text-only inputs.
Example prompt demonstrating GPT-4’s visual input capability. The prompt consists of a question about an image with multiple panels which GPT-4 is able to answer. (source)
4. Human level performance on professional and academic benchmarks
GPT-4 outperforms previous state-of-the-art models on various standardized exams, such as the GRE, SAT, bar exam, and APs, along with other research benchmarks like MMLU, HellaSwag, and TextQA. GPT-4 also outperforms the English-language performance of GPT-3.5 and existing language models (Chinchilla and PaLM), including in low-resource languages such as Latvian, Welsh, and Swahili.
Limitations of GPT-4
- Though there has been a tremendous improvement as compared to previous models, GPT-4 has similar limitations as earlier GPT models. It is not fully reliable and hallucinates.
- Since GPT-4 is trained on data available up to September 2021, it lacks knowledge of events that occurred after that time period.
Risks and mitigations
The prompts entered by users are not always safe. When given unsafe inputs, the model may generate undesirable text, such as advice on committing crimes. To mitigate these risks, approaches like adversarial testing and a model-assisted safety pipeline are used. Using domain experts and their findings, the model is improved to refuse requests for unsafe outputs, such as instructions for synthesizing dangerous chemicals.
Examples of how unsafe inputs are refused by the model
Conclusion
The recent advancements in GPT-4 allow it to outperform existing language models on a collection of NLP tasks. The improved capabilities of GPT-4 are not limited to the English language, as predictable scaling allows for accurate predictions in many different languages. However, the increased capabilities of GPT-4 also present new risks, which require significant work to understand and improve its safety and alignment. Nevertheless, GPT-4 marks a significant milestone towards the development of broadly useful and safely deployed AI systems.
References:
- GPT-4 Technical Report
- GPT-4 Blog Post
- chat.openai.com
PS
While GPT-4 may have stolen the headlines, it was not the only new technology on display. Anthropic unveiled Claude, a next-gen AI assistant that can help with use cases including summarization, search, creative and collaborative writing, Q&A, coding, and more. Meanwhile, Google AI released the PaLM API, an entry point for Google’s large language models with a variety of applications. With these three new systems, the future of AI looks brighter than ever before.",Brief overview of GPT-4
https://nepalprabin.github.io./posts/2020-08-23-neural-style-transfer-and-its-working.html,"Neural style transfer and its working
Have you ever used an app called Prisma that styles your photos using popular paintings and turns them into stunning artwork? If so, the app you are using is the result of style transfer, a computer vision technique that combines your images with an artistic style.
Introduction
Style transfer is a computer vision technique that takes two images, a content image and a style image, and combines them to form a resulting image that keeps the content of the content image while adopting the look of the style image.
Here is how it looks like:
The style image you can see above is Van Gogh’s Starry Night painting, and the content image is a photo of the University of Tübingen in Germany. The resulting image, shown on the right, uses the content of the content image and is styled using the style image.
Now let’s get into the working of neural style transfer
Neural style transfer is based on a deep neural network that creates images of high perceptual quality. It uses the network to separate and recombine the content and style of the images we feed it to obtain the desired result. The original paper uses a 19-layer VGG network comprising 16 convolutional layers, 5 max-pooling layers, and 3 fully connected layers.
Fig. A 19-layer VGG network (source)
How exactly do we obtain such images?
Our goal here is to apply style over our content image. We are not training any neural network in this case, rather we start from a blank image and optimize the cost function by changing the pixel values of the image. The cost function contains two losses: Content loss and Style loss.
Considering c as the content image and x as the style-transferred image, the content loss tends to \(0\) when \(x\) and \(c\) are close to each other and increases as they move further apart.
Given the original image \(\vec{p}\) and the generated image \(\vec{x}\), we can define the loss generated by the content image as:
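The equation image from the original post is not reproduced here; consistent with the content_loss code below, the weighted content loss can be written as:
\[L_{content} = w_c \sum_{i,j} \left(F_{ij}^{\ell} - P_{ij}^{\ell}\right)^2\]
where \(w_c\) is the content weight and \(F^{\ell}\) and \(P^{\ell}\) are the feature maps of the generated and original images at layer \(\ell\).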
The content loss takes content_weight, a scalar that gives the weighting for the content loss, and content_current, the features of the current image. content_current is a PyTorch tensor of shape (1, \(C_l\), \(H_l\), \(W_l\)), where \(C_l\) is the number of channels in layer \(l\) and \(H_l\) and \(W_l\) are the height and width; content_original holds the features of the original content image.
import torch

def content_loss(content_weight, content_current, content_original):
    # Weighted sum of squared differences between current and original features
    return torch.sum(content_weight * (content_current - content_original)**2)
After computing the content loss, we can compute the style loss. To compute the style loss, we first need to compute the Gram matrix \(G\), which represents the correlations between the responses of each filter. Given a feature map \(F^l\) of shape \((C_l, M_l)\), the Gram matrix is given by:
\[G_{ij}^\ell = \sum_k F^{\ell}_{ik} F^{\ell}_{jk}\]
Gram matrix in python:
def gram_matrix(features, normalize=True):
    N, C, H, W = features.size()
    # Flatten the spatial dimensions: (N, C, H*W)
    features = features.reshape(N, C, -1)
    gram_matrix = torch.zeros([N, C, C]).to(features.device).to(features.dtype)
    for i in range(N):
        gram_matrix[i, :] = torch.mm(features[i, :], features[i, :].t())
    if normalize:
        gram_matrix /= float(H * W * C)
    return gram_matrix
Now implementing style loss:
def style_loss(feats, style_layers, style_targets, style_weights):
    loss = 0
    for i, layer in enumerate(style_layers):
        # Gram matrix of the current image's features at this style layer
        gram_feat = gram_matrix(feats[layer])
        loss += style_weights[i] * torch.sum((gram_feat - style_targets[i])**2)
    return loss
To increase the smoothness of the image, we can add another term to our loss that penalizes total variation in the pixel values. We compute the “total variation” as the sum of the squares of differences between neighbouring pixel values (horizontally or vertically). Here we sum the total-variation regularization for each of the 3 input channels (RGB) and weight the total summed loss by the total variation weight, tv_weight.
Total variation loss in Python:
def tv_loss(img, tv_weight):
    loss = 0
    # Squared differences between vertically adjacent pixels
    loss += torch.sum((img[:, :, 1:, :] - img[:, :, :-1, :])**2)
    # Squared differences between horizontally adjacent pixels
    loss += torch.sum((img[:, :, :, 1:] - img[:, :, :, :-1])**2)
    loss *= tv_weight
    return loss
Combining the snippets above, we can generate the resulting image using the content and style images. The complete code is available on GitHub. The code is the homework solution for Deep Learning for Computer Vision taught by Justin Johnson.
References
- Image Style Transfer Using Convolutional Neural Networks
- A Neural Algorithm of Artistic Style
- Deep Learning for Computer Vision",Neural style transfer and its working
https://nepalprabin.github.io./posts/2020-08-04-general-adversarial-networks-gans.html,"Generative Adversarial Networks (GANs)
“Generative Adversarial Nets is the most interesting idea in the last 10 years in machine learning.” This was Yann LeCun’s statement about GANs when Ian Goodfellow and co-authors introduced them in 2014. Since their introduction, many research papers have been published exploring various architectures and use cases.
So what are Generative Adversarial Networks, and what are their use cases? In this post I will try to explain GANs, their underlying math, their use cases, and a GAN implementation in Keras.
Introduction
As stated by Ian Goodfellow in his paper, a GAN is a framework for estimating generative models via an adversarial process. During this process, two models are trained. One is called the generator \(G\) and the other is called the discriminator \(D\). The generator \(G\) generates new examples that are similar to the original data, while the discriminator \(D\) classifies whether data is real or fake. In simple terms, the generator is analogous to counterfeiters, whereas the discriminator is analogous to the police. Counterfeiters try to produce fake currency and use it, while the police try to detect the fake currency. Counterfeiters come up with new ideas and patterns to make the fake money look as similar to the original as possible and fool the police, and the police in turn try to detect the fake money. The same happens with a GAN: the generative model tries to create fake data samples that fool the discriminator, and the discriminator classifies whether data is fake or not. This process goes on until the samples generated by the generator are indistinguishable from real data.
Consider the following notation for the different data points and distributions:
- Generator’s distribution: \(p_g\)
- Data: \(x\)
- Input noise variables: \(p_z(z)\)
Then \(G\) is a generator model represented by a multilayer perceptron with parameters \(\theta_g\) (weights and biases).
Similarly, \(D\) is a discriminator model, also represented by a multilayer perceptron \(D(x;\theta_d)\). Then \(D(x)\) represents the probability that data \(x\) came from the original data distribution rather than from \(p_g\).
A known dataset serves as input for the discriminator. Training involves presenting it with samples from the training dataset until it achieves acceptable accuracy. The generator, however, trains based on whether it fools the discriminator. The input to the generator is a sample from a latent space (e.g., a multivariate normal distribution), and the output generated by the generator is evaluated by the discriminator. Both the generator and the discriminator go through backpropagation to reduce their losses. Over the course of training, the generator produces better data samples (say, images), whereas the discriminator becomes better at classifying fake samples coming from the generator. This way, the discriminator \(D\) and generator \(G\) play a two-player min-max game.
Training procedure for GANs
- Take a random noise vector \(z\) and feed it to the generator \(G\) to produce fake examples \(x^*\). Here the label is y=0 for the \((x, y)\) input-output pair.
- Take the fake data \(x^*\) and the real data \(x\) and feed them to the discriminator model alternately.
- Since the discriminator \(D\) is a multilayer perceptron, it outputs a value between 0 and 1. This value indicates the probability that the input is real.
- Both the generator and the discriminator calculate their respective losses and perform backpropagation to reduce them.
- The discriminator tries to maximize the probability of assigning the correct labels to both original data and data generated from random samples.
- Similarly, the generator tries to minimize the discriminator’s ability to tell real and fake samples apart.
These two networks keep competing with each other until they reach a Nash equilibrium, a point in a game where neither player can improve their situation by changing their strategy. More on Nash equilibrium can be found here. The overview of the GAN architecture is shown below:
Fig. GAN
The noise vector z is transformed into x* by the generator model, which is then fed into the discriminator network. Similarly, data from the original samples is also fed into the discriminator. The discriminator then outputs a classification value close to 1 for real data x, while for generated data x* it tries to output a value close to 0, indicating that x* is fake.
Derivation of Loss function for GANs
Since the GAN models are multilayer perceptrons, their loss can be written using the cross-entropy loss:
\[L(\hat{y}, y) = y\log \hat{y} + (1-y)\log(1-\hat{y})\]
The label for data coming from \(p_{data}(x)\) is \(y = 1\) and \(\hat{y} = D(x)\), so the cross-entropy term becomes:
\[L(D(x), 1) = \log(D(x)) \quad \text{(A)}\]
Similarly, for data coming from the generator the label is \(y = 0\) and \(\hat{y} = D(G(z))\). In this case the cross-entropy term becomes:
\[L(D(G(z)), 0) = (1-0)\log(1-D(G(z))) = \log(1-D(G(z))) \quad \text{(B)}\]
We know that the objective of the discriminator is to correctly classify fake versus real data. To achieve this, equations (A) and (B) should be maximized:
\[\max_D \{\log D(x) + \log(1-D(G(z)))\}\]
The role of the generator is to fool the discriminator into predicting fake data as real, i.e., to achieve \(D(G(z)) = 1\). So the objective function for the generator is:
\[\min_G \{\log D(x) + \log(1-D(G(z)))\}\]
Note: \(\log D(x)\) has nothing to do with the generator’s objective; it is kept only to give a compact shared representation of the generator and discriminator objectives.
Combining the objective functions of both discriminator and generator, we get the following equation:
\[\min_G \max_D \{\log D(x) + \log(1-D(G(z)))\}\]
All the above equations are written with respect to a single data point \(x\). To consider all instances of \(x\), we take the expectation over the data and noise distributions, which gives the value function from the original paper:
\[\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]\]
Applications of GANs
- Image-to-Image Translation
- Image-to-Text Translation
- To generate realistic photographs
- Photo Inpainting and many more
Implementation of GAN
Since a GAN consists of two models, a generator and a discriminator, we need to build both. Before building the models, let’s import the libraries. The code snippets for this GAN implementation are taken from the GANs in Action book.
from keras.datasets import mnist
from keras.layers import Dense, Flatten, Reshape
from keras.layers.advanced_activations import LeakyReLU
from keras.models import Sequential
from keras.optimizers import Adam
Since we will use MNIST data to train our discriminator, the generator needs to produce 28x28 images.
img_rows = 28
img_cols = 28
channels = 1
img_shape = (img_rows, img_cols, channels)
z_dim = 100
z_dim is the size of noise vector used as input to generator model
Now, we’ll build a generator model
def build_generator(img_shape, z_dim):
    model = Sequential()
    # Hidden layer on top of the noise vector
    model.add(Dense(128, input_dim=z_dim))
    model.add(LeakyReLU(alpha=0.01))
    # Output layer producing a flattened 28x28x1 image in [-1, 1]
    model.add(Dense(28*28*1, activation='tanh'))
    model.add(Reshape(img_shape))
    return model
Similarly, we build the discriminator model
def build_discriminator(img_shape):
    model = Sequential()
    # Flatten the input image and classify it as real (1) or fake (0)
    model.add(Flatten(input_shape=img_shape))
    model.add(Dense(128))
    model.add(LeakyReLU(alpha=0.01))
    model.add(Dense(1, activation='sigmoid'))
    return model
Now, we build the GAN using the generator and discriminator built previously. While using the combined model to train the generator, we keep the parameters of the discriminator fixed; the discriminator is also trained as an independently compiled model.
def build_gan(generator, discriminator):
    model = Sequential()
    # Stack the generator and the discriminator into one combined model
    model.add(generator)
    model.add(discriminator)
    return model
discriminator = build_discriminator(img_shape)
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
generator = build_generator(img_shape, z_dim)
discriminator.trainable = False
gan = build_gan(generator, discriminator)
gan.compile(loss='binary_crossentropy', optimizer=Adam())
Now that we have built our GAN model, we train it. MNIST images are taken as real examples and fake images are generated from the noise vector z. These are used to train the discriminator network while keeping the generator’s parameters constant. Similarly, fake images are generated and used to train the generator network (through the combined model) while keeping the discriminator’s parameters constant.
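The training loop itself is not shown in the snippets above; a minimal sketch following this alternating scheme (batch size and iteration count are illustrative choices) looks roughly like this:
import numpy as np
from keras.datasets import mnist

def train(iterations=10000, batch_size=128):
    # Load and rescale MNIST to [-1, 1] to match the generator's tanh output
    (X_train, _), (_, _) = mnist.load_data()
    X_train = X_train / 127.5 - 1.0
    X_train = np.expand_dims(X_train, axis=3)
    real = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))
    for iteration in range(iterations):
        # Train the discriminator on a real batch and a generated (fake) batch
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        imgs = X_train[idx]
        z = np.random.normal(0, 1, (batch_size, z_dim))
        gen_imgs = generator.predict(z)
        d_loss_real = discriminator.train_on_batch(imgs, real)
        d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
        # Train the generator through the combined model (discriminator weights frozen)
        z = np.random.normal(0, 1, (batch_size, z_dim))
        g_loss = gan.train_on_batch(z, real)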
The images produced by the generator over the course of the training iterations are shown below.
During training, random noise is fed to the generator, which gradually learns to imitate the features of the training dataset.
This is the output from a two-layer generative adversarial network; it gradually imitates the features of the MNIST images.
References
- Ian J. Goodfellow et. al.Generative Adversarial Nets.
- https://en.wikipedia.org/wiki/Generative_adversarial_network
- Ahlad Kumar (GAN YouTube playlist)",Generative Adversarial Networks (GANs)
https://nepalprabin.github.io./posts/2025-03-02-huggingface-smolagents-solutions.html,"Huggingface AI Agents Quiz Solutions
I have been diving into AI agents through Hugging Face’s AI Agents Course. This course offers a comprehensive understanding of how to build and deploy AI agents using the smolagents library. In this blog, I’ll share insights from the course (Unit 2) and provide code snippets to illustrate key concepts.
Here is the course link if anyone is interested: AI Agents Course.
Create a Basic Code Agent with Web Search Capability
One of the foundational exercises involves creating a CodeAgent equipped with web search capabilities. This agent leverages the DuckDuckGoSearchTool to perform web searches, enabling it to fetch real-time information. Here’s how you can set it up:
# Create a CodeAgent with DuckDuckGo search capability
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel
agent = CodeAgent(
tools=[DuckDuckGoSearchTool()], # Add search tool here
model=HfApiModel(""Qwen/Qwen2.5-Coder-32B-Instruct"") # Add model here
)
In this snippet, we initialize a CodeAgent with the DuckDuckGoSearchTool, allowing the agent to perform web searches to answer queries.
Set Up a Multi-Agent System with Manager and Web Search Agents
Multi-agent systems combine several specialized agents to handle complex tasks in a more scalable and robust way. In smolagents, various agents can be integrated to produce Python code, invoke external tools, conduct web searches, and more. By coordinating these agents, it’s possible to develop robust workflows. A typical multi-agent system includes:
- A manager Agent
- A code interpreter Agent
- A web Search Agent
A multi-agent system allows us to separate memories between different sub-tasks, which provides two great benefits: firstly, each agent is more focused on its core task, and secondly, separating memories reduces the number of input tokens, which reduces latency and cost. Below is a multi-agent system where web_agent performs searches and manager_agent provides data analysis capabilities. We can also authorize additional imports (like Python libraries) that help perform the tasks.
from smolagents import CodeAgent, ToolCallingAgent, DuckDuckGoSearchTool, HfApiModel, VisitWebpageTool
web_agent = ToolCallingAgent(
tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
model=HfApiModel(model_id=""Qwen/Qwen2.5-Coder-32B-Instruct""),
max_steps=10,
name=""search"",
description=""Agent to perform web searches and visit webpages.""
)
manager_agent = CodeAgent(
model=HfApiModel(model_id=""Qwen/Qwen2.5-Coder-32B-Instruct""),
managed_agents=[web_agent],
additional_authorized_imports=[""pandas"", ""time"", ""numpy""] # Corrected imports
)
Configure Agent Security Settings
Security is a crucial aspect when deploying AI agents, especially when they execute code. The code snippet below uses E2B to run code in a sandboxed environment, i.e., remote execution that runs the code in an isolated container.
from smolagents import CodeAgent, HfApiModel
from smolagents.sandbox import E2BSandbox
model = HfApiModel(""Qwen/Qwen2.5-Coder-32B-Instruct"")
agent = CodeAgent(
tools=[],
model=model,
sandbox=E2BSandbox(), # Configure the sandbox
additional_authorized_imports=[""numpy""], # Authorize numpy import
)
Implement a Tool-Calling Agent
Similar to CodeAgent, ToolCallingAgent is another type of agent available in the smolagents library. CodeAgent writes its actions as Python code snippets, whereas ToolCallingAgent uses the built-in tool-calling capabilities of LLM providers and generates JSON structures.
from smolagents import ToolCallingAgent, HfApiModel, DuckDuckGoSearchTool
agent = ToolCallingAgent(
tools=[DuckDuckGoSearchTool()],
model=HfApiModel(model_id=""Qwen/Qwen2.5-Coder-32B-Instruct""),
name=""SearchAgent"",
description=""An agent that uses DuckDuckGo to search the web."",
max_steps=5,
)
Set Up Model Integration
LLM models are the most important component when creating AI agents. There are many models available for various tasks and domains, so we can easily integrate the model that is required for our task. The code snippet below switches between two different model providers.
from smolagents import HfApiModel, LiteLLMModel
# Initialize Hugging Face model
hf_model = HfApiModel(model_id=""Qwen/Qwen2.5-Coder-32B-Instruct"")
# Initialize LiteLLM model as an alternative model
other_model = LiteLLMModel(model_id=""anthropic/claude-3-sonnet"")
# Set the model to hf_model or alternative model
model = hf_model # Alternatively, you can switch this to `other_model`",Huggingface AI Agents Quiz Solutions
https://nepalprabin.github.io./posts/2021-10-25-autocorrect-and-minimum-edit-distance.html,"Autocorrect and Minimum Edit Distance
This is my brief note from DeepLearning.AI’s NLP Specialization Course.
What is Autocorrect?
Autocorrect is an application that changes a misspelled word into a correct word. When writing messages or drafting an email, you may have noticed that if you type a word that is misspelled, that word automatically gets corrected to the correct spelling based on the context.
How does autocorrect work?
While typing a document, we can see that we get automatic corrections. The basic steps of this automatic correction are:
- Identifying a misspelled word
- Find the strings n edit distance away
- Filter the candidates
- Calculate word probabilities
Now let’s see each of the points in detail.
1. Identifying a misspelled word:
Let’s say we are writing the sentence “This is a draft docment of the APIs”. Here we can clearly see that the word “docment” is misspelled. But how do we know that this is a misspelled word? We have a dictionary (vocabulary) containing all correct words, and if we do not encounter a given string in the dictionary, that string is a misspelled word.
if word not in vocab:
    misspelled = True # If the word is not in vocab, we identify it as a misspelled word.
While identifying misspelled words, we only look at the vocab and not the context. Consider the sentence “Hello deah”. Here, “dear” is misspelled as “deah”. If we had written “deer” instead of “dear”, then we would not be able to identify the misspelled word because “deer” is present in the vocab, even though it is contextually incorrect.
2. Find strings n edit distance away
An edit is an operation that is performed on a string to change it.
- Types of edit:
  - Insert (add a letter): ‘to’ → ‘top’, ‘two’
  - Delete (remove a letter): ‘hat’ → ‘ha’, ‘at’, ‘ht’
  - Switch (swap 2 adjacent letters): ‘eta’ → ‘eat’, ‘tea’
  - Replace (change 1 letter to another): ‘jaw’ → ‘jar’, ‘paw’
Using these edits, we can find all possible strings that are n edits away, as sketched below.
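A small sketch of generating candidate strings one edit away, along the lines of the four edit types above (letters are restricted to a-z for brevity, and the function name is just illustrative):
def edits_one(word):
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]                        # remove a letter
    switches = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]  # swap 2 adjacent letters
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]  # change 1 letter to another
    inserts = [L + c + R for L, R in splits for c in letters]            # add a letter
    return set(deletes + switches + replaces + inserts)

print('top' in edits_one('to'), 'two' in edits_one('to'))  # True True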
3. Filter candidates
After finding the strings that are n edit distance away, the next step is to filter them. The candidate strings are compared with the vocab, and those not present in the vocab are discarded. This way, we get a list of actual words.
4. Calculate the word probabilities
The final step is to calculate the word probabilities and find the most likely word from the vocab. Given the sentence “I am learning AI because AI is the new electricity”, we count the occurrences of each word and calculate its probability. The probability of a given word w can be calculated as the ratio of the count of word w to the total size of the corpus. Mathematically: \(P(w) = \frac{C(w)}{V}\),
where:
- \(P(w)\): probability of a word
- \(C(w)\): number of times the word appears
- \(V\): total size of the corpus
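As a tiny illustration of this estimate (using the example sentence above):
from collections import Counter

corpus = 'I am learning AI because AI is the new electricity'.lower().split()
counts = Counter(corpus)
V = len(corpus)  # total size of the corpus (10 words)
probs = {w: c / V for w, c in counts.items()}
print(probs['ai'])  # C('ai') / V = 2 / 10 = 0.2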
Minimum Edit Distance
So far, we have seen how edit distance works and what its applications are in the NLP domain. Now let’s look at the minimum edit distance.
Minimum edit distance is the minimum number of edits required to transform one string into another. It is used in various applications such as spelling correction, machine translation, DNA sequencing, and many more. To calculate the minimum edit distance, we use three types of operations, also discussed above: insert, delete, and replace.
For example, consider the source word “deer” and the target word “door”. To change the source into the target, we need to perform two replace operations, i.e., replace each “e” with an “o”. Here the number of edits is 2, but in minimum edit distance there are costs associated with the different edit operations.
Using the cost table from the course (insert cost 1, delete cost 1, replace cost 2), the edit distance for the above problem is: edit distance = 2 x replace cost = 2 x 2 = 4.
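For reference, here is a minimal sketch of the tabular dynamic-programming method discussed next, assuming the costs above (insert 1, delete 1, replace 2):
def min_edit_distance(source, target, ins_cost=1, del_cost=1, rep_cost=2):
    m, n = len(source), len(target)
    # D[i][j] = minimum cost to turn source[:i] into target[:j]
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            r_cost = 0 if source[i - 1] == target[j - 1] else rep_cost
            D[i][j] = min(D[i - 1][j] + del_cost,      # delete
                          D[i][j - 1] + ins_cost,      # insert
                          D[i - 1][j - 1] + r_cost)    # replace (or keep if letters match)
    return D[m][n]

print(min_edit_distance('deer', 'door'))  # 4, matching the example above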
Comparing the source and target words by hand like this is a brute-force method, and for longer strings it becomes tedious. To make things simpler, we can opt for a tabular method using Dynamic Programming, as sketched above.",Autocorrect and Minimum Edit Distance
https://nepalprabin.github.io./posts/2022-10-19-text-summarization-nlp.html,"Text Summarization NLP
Text summarization is one of the Natural Language Processing (NLP) tasks where documents/texts are shortened automatically while keeping the same semantic meaning. The summarization process generates a short, fluent, and accurate summary of long documents. The main idea of text summarization is to find the subset of the most important information from the entire document and present it in a human-readable format. Text summarization has applications in other NLP tasks such as Question Answering (QA), Text Classification, Text Generation, and other fields.
Based on how the texts are extracted from the documents, the summarization process can be divided into two types: extractive summarization and abstractive summarization.
Extractive Summarization
Implementing extractive summarization based on word frequency
We can implement extractive summarization using word frequency in five simple steps:
a. Creating word frequency table
We count the frequency of the words present in the text and create a frequency table, which is a dictionary storing the counts. While creating the frequency table, we do not account for the stop words present in the text; those words are removed.
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize

def frequency_table(text):
    # all unique stopwords of english
    stop_words = set(stopwords.words(""english""))
    words = word_tokenize(text)
    freq_table = dict()
    # creating frequency table to keep the count of each word
    for word in words:
        word = word.lower()
        if word in stop_words:
            continue
        if word in freq_table:
            freq_table[word] += 1
        else:
            freq_table[word] = 1
    return freq_table
b. Tokenizing the sentences
Here we tokenize the sentences using NLTK’s sent_tokenize() method. This separates paragraphs into individual sentences.
def tokenize_sentence(text):
    return sent_tokenize(text)
c. Scoring the sentences using term frequency
Here, we score a sentence by its words, adding the frequency of every word present in the sentence, excluding stop words. One downside of this approach is that the longer the sentence, the higher its score.
def term_frequency_score(sentences, freq_table):
    # dictionary to keep the score of each sentence
    sentence_value = dict()
    for sentence in sentences:
        for word, freq in freq_table.items():
            if word in sentence.lower():
                if sentence in sentence_value:
                    sentence_value[sentence] += freq
                else:
                    sentence_value[sentence] = freq
    return sentence_value
d. Finding the threshold score
After calculating the term frequency, we calculate the threshold score.
def calculate_average_score(sentence_value):
    # To compare the sentences within the text, we assign a score.
    sum_values = 0
    for sentence in sentence_value:
        sum_values += sentence_value[sentence]
    # Calculating average score of the sentences. This average score can be a good threshold.
    average = int(sum_values / len(sentence_value))
    return average
e. Generating the summary based on the threshold value
Based on the threshold value, we generate the summary of the text.
def create_summary(sentences, sentence_value, threshold):
    # Applying the threshold value and storing sentences in order into the summary.
    summary = ''
    for sentence in sentences:
        if (sentence in sentence_value) and (sentence_value[sentence] > (1.2 * threshold)):
            summary += "" "" + sentence
    return summary
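Putting the five steps together, a small driver (using the functions defined above; the NLTK resource downloads are included for completeness) might look like this:
import nltk
nltk.download('punkt')
nltk.download('stopwords')

def summarize(text):
    freq_table = frequency_table(text)                              # step a: word frequency table
    sentences = tokenize_sentence(text)                             # step b: sentence tokenization
    sentence_value = term_frequency_score(sentences, freq_table)    # step c: sentence scores
    threshold = calculate_average_score(sentence_value)             # step d: threshold
    return create_summary(sentences, sentence_value, threshold)     # step e: summary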
Abstractive Summarization
Abstractive Summarization using Transformers
The Transformer is an architecture that uses attention mechanisms to solve sequence-to-sequence problems while handling long-term dependencies. Ever since it was introduced in 2017, Transformers have been widely used in various NLP tasks such as text generation, question answering, text classification, language translation, and so on. The Transformer architecture consists of encoder and decoder parts. The encoder component consists of 6 encoder layers, each of which contains two sub-layers: self-attention and a feed-forward network. The input text is first converted into vectors using text embedding methods. The vectors are then passed into the self-attention layer, and the output from the self-attention layer is passed through the feed-forward network. The decoder also consists of both self-attention and feed-forward network layers, plus an additional attention layer in between that helps the decoder focus on the relevant parts of the input sentence.
Hugging Face Transformers provides various pre-trained models to perform NLP tasks, along with APIs and tools to download and train state-of-the-art pre-trained models. Not only NLP: Hugging Face also supports Computer Vision tasks like image classification, object detection and segmentation, audio classification and recognition, and multimodal tasks like table question answering, optical character recognition, and many more.
Basic transformer pipeline for summarization
Hugging Face Transformers provides an easy way to use a model for inference through pipelines. These pipelines are objects that hide complex code and provide a simple API to perform various tasks.
from transformers import pipeline
classifier = pipeline(""summarization"")
text = """"""Acnesol Gel is an antibiotic that fights bacteria. It is used to treat acne, which appears as spots or pimples on your face, chest or back. This medicine works by attacking the bacteria that cause these pimples.Acnesol Gel is only meant for external use and should be used as advised by your doctor. You should normally wash and dry the affected area before applying a thin layer of the medicine. It should not be applied to broken or damaged skin. Avoid any contact with your eyes, nose, or mouth. Rinse it off with water if you accidentally get it in these areas. It may take several weeks for your symptoms to improve, but you should keep using this medicine regularly. Do not stop using it as soon as your acne starts to get better. Ask your doctor when you should stop treatment. Common side effects like minor itching, burning, or redness of the skin and oily skin may be seen in some people. These are usually temporary and resolve on their own. Consult your doctor if they bother you or do not go away.It is a safe medicine, but you should inform your doctor if you have any problems with your bowels (intestines). Also, inform the doctor if you have ever had bloody diarrhea caused by taking antibiotics or if you are using any other medicines to treat skin conditions. Consult your doctor about using this medicine if you are pregnant or breastfeeding.""""""
classifier(text)
Result:
[{'summary_text': ' Acnesol Gel is an antibiotic that fights bacteria that causes pimples . It is used to treat acne, which appears as spots or pimples on your face, chest or back . The medicine is only meant for external use and should be used as advised by your doctor .'}]
The pipeline() takes the name of the task to be performed (if we want to perform a question-answering task, we can simply pass ""question-answering"" into pipeline()) and automatically loads a model to perform that specific task.
Fine-tuning summarization model for medical dataset
Summarization using the abstractive technique is harder than extractive summarization, since we need to generate new text as output. Different architectures like GPT, T5, and BART are used to perform summarization tasks. We will be using the PubMed dataset, which contains long, structured documents obtained from PubMed OpenAccess repositories.
from datasets import load_dataset
pubmed = load_dataset(""ccdv/pubmed-summarization"")
The PubMed dataset contains article, abstract, and section_names as columns. The first step after loading the dataset is tokenizing the training data. Tokenization is the process of splitting paragraphs and sentences into smaller units called tokens.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-cnn')
The next step is to preprocess the data: before training, we need to convert our data into the expected model input format.
def preprocess_function(examples):
    inputs = [doc for doc in examples[""article""]]
    model_inputs = tokenizer(inputs, max_length=1024, truncation=True)
    labels = tokenizer(examples[""abstract""], max_length=128, truncation=True, padding=True)
    model_inputs[""labels""] = labels[""input_ids""]
    return model_inputs
We need to apply the preprocessing function over the entire dataset. Setting the flag batched=True helps speed things up by processing multiple elements of the dataset at once.
tokenized_pubmed = pubmed.map(preprocess_function, batched=True)
Next, we need to create a batch for all the examples. Huggingface provides a data collator to create a batch for the examples.
tokenized_datasets = tokenized_pubmed.remove_columns(pubmed[""train""].column_names)
Hugging Face provides various pre-trained models that we can leverage to perform a variety of machine learning tasks; here we load the BART model and pass it to the data collator.
from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq
model = AutoModelForSeq2SeqLM.from_pretrained('facebook/bart-large-cnn')
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)
Before training the model, we need to define our training hyperparameters using training arguments. Since text summarization is a sequence-to-sequence task, we use Seq2SeqTrainingArguments. We also need to define our trainer by passing the training and validation datasets along with the training arguments.
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
# training arguments
training_arguments = Seq2SeqTrainingArguments(
output_dir='./results',
evaluation_strategy='epoch',
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=3,
# remove_unused_columns=False,
# fp16=True,
)
trainer = Seq2SeqTrainer(
model=model,
args=training_arguments,
train_dataset=tokenized_pubmed['train'],
eval_dataset=tokenized_pubmed['validation'],
tokenizer=tokenizer,
data_collator=data_collator
)
The last step is to call train() to fine-tune our model.
trainer.train()",Text Summarization NLP
https://nepalprabin.github.io./posts/2021-01-01-deep-residual-learning-for-image-recognition-resnet-paper-explained.html,"Deep Residual Learning for Image Recognition (ResNet paper explained)
Deep neural networks tend to provide more accuracy as the number of layers increases. But as we go deeper into the network, the accuracy decreases instead of increasing. As more layers are stacked, the problem of vanishing gradients appears. The paper mentions that vanishing gradients have been addressed by normalized initialization and intermediate normalization layers; even so, with increasing depth the accuracy gets saturated and then degrades rapidly.
*Vanishing gradient: a situation where a deep multilayer feedforward network or RNN is unable to propagate useful gradient information from the output end of the model to the layers near the input end. The gradients become very small and prevent the weights from changing their values, which makes the network hard to train.
The above figure shows that with the increase in network depth, the training error increases and so does the test error. Here, the training error of the 20-layer network is lower than that of the 56-layer network. The deeper network therefore cannot generalize well to new data and becomes an inefficient model. This degradation indicates that simply adding layers does not improve the performance of the model and that not all systems are easy to optimize.
The paper addresses the degradation problem by introducing a deep residual learning framework. The main innovation of ResNet is the residual module. The residual module shown here is an identity residual module: a block of two convolutional layers with the same number of filters and a small filter size, where the output of the second layer is added to the input of the first convolution layer.
Residual learning: a building block (source)
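To make the building block concrete, here is a minimal Keras sketch of an identity residual block (my own illustration, not the paper's code); it assumes the input tensor x already has `filters` channels so the shapes match for the addition:
from tensorflow import keras

def identity_residual_block(x, filters, kernel_size=3):
    # two 3x3 convolutions with the same number of filters
    shortcut = x
    y = keras.layers.Conv2D(filters, kernel_size, padding='same')(x)
    y = keras.layers.BatchNormalization()(y)
    y = keras.layers.ReLU()(y)
    y = keras.layers.Conv2D(filters, kernel_size, padding='same')(y)
    y = keras.layers.BatchNormalization()(y)
    # add the block's input (shortcut connection) to the output of the second convolution
    y = keras.layers.Add()([y, shortcut])
    return keras.layers.ReLU()(y)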
Network Architecture
The paper takes VGGNet as the baseline and builds a plain network with mostly 3x3 filters and two design rules: a) for the same output feature map size, the layers have the same number of filters, and b) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. The network ends with a global average pooling layer and a 1000-way fully connected layer with softmax.
Based on the plain network, shortcut connections are added to transform plain version into residual version.
Left: VGG-19 model. Middle: Plain network with 34 parameter layers. Right: Residual network with 34 parameter layers (source)
Implementation
- The image was first resized with its shorter side randomly sampled in [256, 480]
- Data augmentation techniques were carried out
- Batch normalization was carried out after each convolution and before activation
- Stochastic gradient descent was used for training the network with a mini-batch size of 256.
- A weight decay of 0.0001 and a momentum of 0.9 were used (a rough sketch of this setup in Keras is given below).
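The following is a minimal sketch of that training configuration (my own, assuming `model`, `train_images`, and `train_labels` already exist; the starting learning rate of 0.1 comes from the paper, and the 0.0001 weight decay would typically be added as an L2 kernel regularizer on each convolution):
from tensorflow import keras

# SGD with momentum 0.9, as listed above; learning rate 0.1 is the paper's starting value
optimizer = keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
# mini-batch size of 256
model.fit(train_images, train_labels, batch_size=256)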
Experiments
The ResNet architecture was evaluated on the ImageNet 2012 classification dataset consisting of 1000 classes. The model was trained on the 1.28 million training images and evaluated on the 50k validation images. Moreover, 100k test images were used to report final results.
While performing experiments on plain networks, the 34-layer plain network showed a higher validation error than the 18-layer plain network. The training error of the 34-layer plain network was also higher than that of the 18-layer network, so the degradation problem appears as the network gets deeper. The deep plain networks may have a low convergence rate, which hampers the reduction of the training error.
Different from the plain network, a shortcut connection was added to each pair of 3x3 filters. With the same number of layers as the plain networks, ResNet-34 performed better than ResNet-18. ResNet-34 showed lower error and generalizes better to the validation data, which resolves the degradation problem seen in the deep plain network. The comparison of the plain and residual networks is shown below:
Training on ImageNet. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameters compared to their plain counterparts. (source)
- Deep Residual Learning for Image Recognition",Deep Residual Learning for Image Recognition (ResNet paper explained)
https://nepalprabin.github.io./posts/2020-09-21-mobilenet-architecture-explained.html,"MobileNet Architecture Explained
In this blog post, I will write about MobileNet and its architecture. MobileNet uses depthwise separable convolutions instead of standard convolutions to reduce model size and computation. Hence, it can be used to build lightweight deep neural networks for mobile and embedded vision applications.
Topics Covered
- Standard convolutions and depthwise separable convolutions
- MobileNet Architecture
- Width Multiplier to achieve thinner models
- Resolution Multiplier for reduced representation
- Architecture Implementation
Standard convolutions and depthwise separable convolutions
A convolution operation consists of an input image and a kernel or filter that slides over the input image and outputs a feature map. The main aim of the convolution operation is to extract features from the input image. As we know, every image can be considered a matrix of pixel values. Consider an input as a 5x5 matrix with pixel values of 0 and 1 as shown below:
Also, consider another 3x3 matrix as below:
The convolution operation for input size 5x5 with filter of 3x3 is shown below:
Fig: The convolution operation (source)
We slide the 3x3 matrix over the 5x5 input matrix, performing element-wise multiplication and summing the results to get the convolved feature. The output obtained from this operation is also called a feature map. The 3x3 matrix that slides over the input matrix is known as a filter or kernel. More on convolution can be found in this amazing article.
Separable Convolutions
Before looking at what depthwise separable convolutions do, let's first look at separable convolutions in general. There are two types of separable convolutions: spatial separable convolutions and depthwise separable convolutions.
Spatial separable convolutions
Spatial separable convolutions deal with the spatial dimensions of the image (width and height). A kernel is divided into two smaller kernels; for example, a \(3\times3\) kernel is divided into a \(3\times1\) and a \(1\times3\) kernel.
Here, instead of doing one convolution with 9 multiplications, we do two convolutions with 3 multiplications each (6 in total) to achieve the same effect. With fewer multiplications, computational complexity goes down and the network is able to run faster.
One famous kernel used to detect edges, the Sobel kernel, can also be separated spatially.
Although spatial separable convolutions need less computation, not every kernel can be separated into two smaller kernels, which is one of their drawbacks.
Depthwise Separable Convolutions
Depthwise separable convolutions are what the MobileNet architecture is based on. They work even with kernels that cannot be factored into two smaller kernels. Spatial separable convolutions deal only with the spatial dimensions, whereas depthwise separable convolutions also deal with the depth dimension.
A depthwise separable convolution is a factorized convolution that splits a standard convolution into a depthwise convolution and a \(1\times1\) convolution called a pointwise convolution. In other words, the kernel is split into two separate kernels for filtering and combining: the depthwise convolution is used for filtering, whereas the pointwise convolution is used for combining.
Using depthwise separable convolutions, the total computation required is the sum of the depthwise convolution and the pointwise convolution: \(D_K . D_K . M . D_F . D_F + M . N . D_F . D_F\)
For a standard convolution, the total computation is \(D_K . D_K . M . N . D_F . D_F\), where the computational cost depends on the number of input channels \(M\), the number of output channels \(N\), the kernel size \(D_K\) and the feature map size \(D_F\).
By expressing convolution as a two-step process of filtering and combining, the total reduction in computation is:
\(\frac{D_K . D_K . M . D_F . D_F + M . N . D_F . D_F}{D_K . D_K . M . N . D_F . D_F}\), which is equivalent to \(\frac{1}{N} + \frac{1}{D_K^2}\)
That means that when the kernel \(D_K \times D_K\) is \(3\times3\), the computation cost can be reduced by 8 to 9 times. More on depthwise separable convolutions can be found here.
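As a quick sanity check of this ratio, here is a small worked example with made-up layer sizes (\(D_K = 3\), \(M = N = 256\), \(D_F = 14\)):
# multiply-adds for one layer with the sizes chosen above
D_K, M, N, D_F = 3, 256, 256, 14
standard_cost = D_K * D_K * M * N * D_F * D_F                    # 115,605,504
separable_cost = D_K * D_K * M * D_F * D_F + M * N * D_F * D_F   # 13,296,640
print(standard_cost / separable_cost)                            # ~8.7, matching 1 / (1/N + 1/D_K**2)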
MobileNet Architecture
- As mentioned above, MobileNet is built on depthwise separable convolutions, except for the first layer, which is a full convolutional layer.
- All layers are followed by batch normalization and ReLU non-linearity. However, the final layer is a fully connected layer without any non-linearity that feeds into a softmax for classification.
- For down-sampling, strided convolution is used both in the depthwise convolutions and in the first full convolutional layer.
- The total number of layers in MobileNet is 28, counting depthwise and pointwise convolutions as separate layers.
fig. (left) Standard convolution with batchnorm and ReLU; (right) depthwise and pointwise convolution followed by batchnorm and ReLU (source)
MobileNet architecture is shown below:
Fig. MobileNet architecture
Width Multiplier to achieve Thinner Models
Although the MobileNet model is already small and computationally inexpensive, sometimes we need the model to be even smaller and cheaper. To construct such models, a parameter \(\alpha\) called the width multiplier is used. The width multiplier thins the network uniformly at each layer. For a given layer and width multiplier \(\alpha\), the number of input channels \(M\) becomes \(\alpha M\) and the number of output channels \(N\) becomes \(\alpha N\). The computational cost of a depthwise separable convolution with the width multiplier then becomes:
\(D_K . D_K . \alpha M . D_F . D_F + \alpha M . \alpha N . D_F . D_F\)
The width multiplier can be applied to any model structure to define a new, smaller model with a reasonable accuracy, latency and size trade-off. It is used to define a new reduced structure that needs to be trained from scratch. (MobileNet paper)
Resolution Multiplier for reduced representation
The resolution multiplier is another parameter for reducing the computational cost of the model. It is represented by \(\rho\) and is applied to the input image, so the internal representation of every layer is reduced by the same multiplier. The computational cost of a depthwise separable convolution with width multiplier \(\alpha\) and resolution multiplier \(\rho\) becomes: \(D_K . D_K . \alpha M . \rho D_F . \rho D_F + \alpha M . \alpha N . \rho D_F . \rho D_F\)
The value \(\rho = 1\) is the baseline MobileNet, and \(\rho < 1\) gives the reduced-computation MobileNets.
Architecture Implementation
MobileNet uses depthwise separable convolutions where each layer is followed by BatchNormalization and ReLU non-linearity. Each MobileNet block contains a depthwise and a pointwise convolution layer. The code snippets are inspired from MLT.
from tensorflow import keras

# First we will build the mobilenet block
def mobilenet_block(x, filters, strides):
    x = keras.layers.DepthwiseConv2D(kernel_size=3, strides=strides, padding='same')(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.ReLU()(x)
    x = keras.layers.Conv2D(filters=filters, kernel_size=1, strides=1, padding='same')(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.ReLU()(x)
    return x
MobileNet uses an input shape of 224*224*3. The first layer of MobileNet is a convolutional layer with 32 filters, a 3*3 kernel and a stride of 2, followed by BatchNormalization and ReLU non-linearity.
INPUT_SHAPE = 224, 224, 3
input = keras.layers.Input(INPUT_SHAPE)
x = keras.layers.Conv2D(filters=32, kernel_size=3, strides=2, padding='same')(input)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.ReLU()(x)
After the first layer, there is a series of MobileNet blocks with different numbers of filters and strides.
x = mobilenet_block(x, filters=64, strides=1)
x = mobilenet_block(x, filters=128, strides=2)
x = mobilenet_block(x, filters=128, strides=1)
x = mobilenet_block(x, filters=256, strides=2)
x = mobilenet_block(x, filters=256, strides=1)
x = mobilenet_block(x, filters=512, strides=2)
for _ in range(5):
    x = mobilenet_block(x, filters=512, strides=1)
x = keras.layers.AveragePooling2D(pool_size=7, strides=1)(x)
output = keras.layers.Dense(1000, activation='softmax')(x)
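To complete the sketch (this step is not in the original snippet), the layers can be wrapped into a Keras model and inspected:
model = keras.Model(inputs=input, outputs=output)
model.summary()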
References
- MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- Machine Learning Tokyo
- A Basic Introduction to Separable Convolutions
- An Intuitive Explanation of Convolutional Neural Networks",MobileNet Architecture Explained
https://nepalprabin.github.io./posts/2020-05-09-vggnet-architecture-explained.html,"VGGNet Architecture Explained
VGGNet is a Convolutional Neural Network architecture proposed by Karen Simonyan and Andrew Zisserman of the University of Oxford in 2014. The paper mainly focuses on the effect of convolutional neural network depth on accuracy. You can find the original VGGNet paper, titled Very Deep Convolutional Networks for Large Scale Image Recognition.
Architecture
The input to a VGG-based ConvNet is a 224*224 RGB image. The preprocessing layer takes the RGB image with pixel values in the range 0-255 and subtracts the mean image values, which are calculated over the entire ImageNet training set.
Fig. A visualization of the VGG architecture (source)
The preprocessed input images are passed through a stack of convolutional layers. There are a total of 13 convolutional layers and 3 fully connected layers in the VGG16 architecture. VGG uses smaller filters (3*3) with more depth instead of large filters; a stack of three 3*3 convolutional layers has the same effective receptive field as a single 7*7 convolutional layer.
Another variation of VGGNet has 19 weight layers, consisting of 16 convolutional layers and 3 fully connected layers, with the same 5 pooling layers. Both variations of VGGNet contain two fully connected layers with 4096 channels each, followed by another fully connected layer with 1000 channels to predict the 1000 labels. The last fully connected layer uses softmax for classification.
Architecture walkthrough:
- The first two layers are convolutional layers with 3x3 filters, each using 64 filters, which results in a 224x224x64 volume since same convolutions are used. The filters are always 3x3 with a stride of 1.
- After this, pooling layer was used with max-pool of 2x2 size and stride 2 which reduces height and width of a volume from 224x224x64 to 112x112x64.
- This is followed by 2 more convolution layers with 128 filters. This results in the new dimension of 112x112x128.
- After pooling layer is used, volume is reduced to 56x56x128.
- Three more convolution layers with 256 filters each are then added, followed by a down-sampling layer that reduces the size to 28x28x256.
- Two more stacks, each with 3 convolution layers (512 filters each), follow, separated by a max-pool layer.
- After the final pooling layer, the 7x7x512 volume is flattened and fed into fully connected (FC) layers with 4096 channels and a softmax output over 1000 classes.
Implementation
Now let's go ahead and see how we can implement this architecture using tensorflow. This implementation is inspired from Machine Learning Tokyo's CNN architectures.
Importing libraries
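The layers used below come from tensorflow.keras; the exact import block is not shown in the original post, so this is my assumption of what it would look like:
from tensorflow.keras.layers import Input, Conv2D, MaxPool2D, Flatten, Dense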
Starting the convolution blocks with the input layer
input = Input(shape=(224,224,3))
The 1st block consists of 2 convolution layers, each with 64 filters of 3*3, followed by a max-pool layer with stride 2 and pool size of 2. All hidden layers use ReLU for non-linearity.
x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(input)
x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
The 2nd block also consists of 2 convolution layers, each with 128 filters of 3*3, followed by a max-pool layer with stride 2 and pool size of 2.
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
The 3rd block consists of 3 convolution layers, each with 256 filters of 3*3, followed by a max-pool layer with stride 2 and pool size of 2.
x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
The 4th and 5th blocks each consist of 3 convolutional layers with 512 filters. In between these blocks, a max-pool layer is used with a stride of 2 and pool size of 2.
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=512, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='same')(x)
The output from the 5th convolution block is flattened and connected to a fully connected layer with 4096 units. This is connected to another FC layer with the same number of units. The final fully connected layer contains 1000 units with softmax activation, which is used for classification over the 1000 classes.
#Dense Layers
x = Flatten()(x)
x = Dense(units=4096, activation='relu')(x)
x = Dense(units=4096, activation='relu')(x)
output = Dense(units=1000, activation='softmax')(x)
from tensorflow.keras import Model
model = Model(inputs=input, outputs=output)",VGGNet Architecture Explained
https://nepalprabin.github.io./posts/2020-04-24-alexnet-architecture-explained.html,"AlexNet Architecture Explained
AlexNet famously won the 2012 ImageNet LSVRC-2012 competition by a large margin (15.3% vs 26.2% (second place) error rates). Here is the link to the original paper.
Major highlights of the paper
- Used ReLU instead of tanh to add non-linearity.
- Used dropout instead of other regularization methods to deal with overfitting.
- Overlap pooling was used to reduce the size of the network.
1. Input
AlexNet solves the problem of image classification on a subset of the ImageNet dataset with roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images. The input is an image belonging to one of 1000 different classes and the output is a vector of 1000 numbers.
The input to AlexNet is an RGB image of size 256*256. This means that all the images in the training set and all test images are of size 256*256. If the input image is not 256*256, the image is rescaled such that the shorter side is of length 256, and the central 256*256 patch is cropped out from the resulting image.
source
The network is trained with the raw RGB values of the pixels. So, if the input image is grayscale, it is converted into an RGB image. Patches of size 224*224 are generated from the 256*256 images through random crops and fed to the first layer of AlexNet.
2. AlexNet Architecture
AlexNet contains five convolutional layers and three fully connected layers - total of eight layers. AlexNet architecture is shown below:
AlexNet Architecture
Each of the first two convolutional layers is followed by an overlapping max pooling layer. The third, fourth and fifth convolutional layers are connected directly to each other. The fifth convolutional layer is followed by an overlapping max pooling layer, which is then connected to the fully connected layers. The fully connected layers have 4096 neurons each, and the second fully connected layer feeds into a softmax classifier over 1000 classes.
2.1) ReLU Non-Linearity:
The standard way of introducing non-linearity is using tanh, \(f(x) = \tanh(x)\), or the sigmoid, \(f(x) = (1 + e^{-x})^{-1}\), where \(f\) is a function of the input \(x\).
These are saturating non-linearities, which are much slower than the non-saturating non-linearity \(f(x) = \max(0, x)\) in terms of training time with gradient descent.
fig. (Tanh and Relu activation functions)
Saturating non-linearities: these functions have a compact range, meaning that they compress the neural response into a bounded subset of the real numbers. The logistic (sigmoid) function compresses inputs to outputs between 0 and 1, and tanh to between -1 and 1. These functions display limiting behavior at the boundaries.
Training a network with a non-saturating non-linearity is faster than training one with a saturating non-linearity.
2.2) Overlapping Pooling:
Max pooling layers help to down-sample an input representation (image, hidden-layer output matrix, etc.), reducing its dimensionality and allowing assumptions to be made about the features contained in the binned sub-regions. Max pooling helps to reduce overfitting. Basically, it uses a max operation to pool sets of features, leaving us with a smaller number of them. Overlapping pooling is the same as max pooling except that the adjacent windows over which the max is computed overlap each other.
“A pooling layer can be thought of as consisting of a grid of pooling units spaced s pixels apart, each summarizing a neighbourhood of size z*z centered at the location of the pooling unit. If we set s = z, we obtain traditional local pooling. If we set s < z, we obtain overlapping pooling.”
AlexNet Paper (2012)
Overlapping pooling reduces the top-1 and top-5 error rates by 0.4% and 0.3% respectively compared to non-overlapping pooling, and the authors observed that models with overlapping pooling are slightly more difficult to overfit.
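As a small illustration (my own, in Keras, not from the paper), overlapping pooling simply means the pooling window is larger than its stride:
from tensorflow import keras

# overlapping pooling: window z=3 with stride s=2, so s < z and adjacent windows overlap
overlapping_pool = keras.layers.MaxPool2D(pool_size=3, strides=2)
# traditional local pooling for comparison: s = z
non_overlapping_pool = keras.layers.MaxPool2D(pool_size=2, strides=2)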
2.3) Reducing Overfitting
Various techniques are applied to reduce overfitting:
Data Augmentation
The most common way to reduce overfitting on image data is data augmentation. It is a strategy to significantly increase the diversity of data available for training the models without collecting new data. Data augmentation includes techniques such as position augmentation (cropping, padding, rotation, translation, affine transformation), color augmentation (brightness, contrast, saturation, hue) and many others. AlexNet employs two distinct forms of data augmentation.
The first form of data augmentation consists of translating the images and applying horizontal reflections. This is done by extracting random 224*224 patches from the 256*256 images and training the network on these patches. The second form of data augmentation consists of altering the intensities of the RGB channels in the training images.
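A rough sketch of the first form of augmentation using TensorFlow image ops (my own illustration; the paper's actual pipeline also includes the PCA-based color augmentation, which is omitted here):
import tensorflow as tf

def augment(image):
    # extract a random 224*224 patch from the 256*256 training image
    patch = tf.image.random_crop(image, size=[224, 224, 3])
    # apply a horizontal reflection with probability 0.5
    return tf.image.random_flip_left_right(patch)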
Dropout
Dropout is a regularization technique for reducing overfitting and improving the generalization of deep neural networks. 'Dropout' refers to dropping out units (hidden and visible) in a neural network. We can interpret the dropout rate as the probability of keeping a given node in a layer, where 1.0 means no dropout and 0.5 means 50% of the hidden neurons are ignored.
Dropout
References:
- ImageNet Classification with Deep Convolutional Neural Networksby Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, 2012
- https://www.learnopencv.com/understanding-alexnet/",AlexNet Architecture Explained
https://nepalprabin.github.io./posts/2021-03-26-simclr-explained.html,"Paper Explanation: A Simple Framework for Contrastive Learning of Visual Representations (simCLR)
Various self-supervised learning methods have been proposed in recent years for learning image representations. Though many methods have been proposed, their performance has generally been less effective, in terms of accuracy, than that of supervised counterparts. But SimCLR has provided promising results, taking self-supervised learning to a new level. It uses a contrastive learning approach. This paper introduces a simple framework to learn representations from unlabeled images based on heavy data augmentation. Before going deep into SimCLR and its details, let's see what contrastive learning is:
Contrastive Learning
Contrastive learning is a framework that learns similarities/dissimilarities from data that are organized into similar/dissimilar parts. It can also be considered as learning by comparing. Contrastive learning learns by comparing among different samples. The samples can be performed between positive pairs of ‘similar’ inputs and negative pairs of ‘dissimilar’ inputs.
To illustrate this in another way: you're told to choose a picture that is similar to the picture on the left, i.e., the cat (in the image below). You look at the picture and, from the bunch of images on the right side, find the one that is similar to the cat. This way, you contrast between similar and dissimilar objects. The same is the case with contrastive learning. Using this approach we can train a machine learning model to classify between similar and dissimilar objects.
Source: GoogleAI
Contrastive learning approaches only need to define a similarity distribution in order to sample a positive input \(x^{+} \sim p^{+}(\cdot|x)\), and a data distribution for a negative input \(x^{-} \sim p^{-}(\cdot|x)\), with respect to an input sample \(x\). The goal of contrastive learning is that the representations of similar samples should be mapped close together, while those of dissimilar samples should be further apart in the embedding space.
Source:Contrastive Representation Learning: A Framework and Review
A Simple Framework for Contrastive Learning of Visual Representations - SimCLR
How does SimCLR learn representations? SimCLR learns representations by maximizing agreement between differently augmented views of the same data example via a contrastive loss.
In order to learn good contrastive representations, SimCLR consists of four major components:
- Data augmentation module: Data augmentation is more beneficial for unsupervised contrastive learning than for supervised learning. The data augmentation module transforms any given data example into two correlated views of the same example. These examples, denoted \(\widetilde{x}_i\) and \(\widetilde{x}_j\), are considered a positive pair. The authors apply three augmentations sequentially: random cropping followed by resizing to the original size, random color distortions, and random Gaussian blur.
- Encoder: A neural base encoder \(f(\cdot)\) extracts features from the augmented data examples. ResNet is used as the architecture to extract these representations. The learned representation is the output of the average pooling layer.
- Projection head: The projection head \(g(\cdot)\) is an MLP with one hidden layer that maps representations from the base encoder network to the space where the contrastive loss is applied. A ReLU activation function is used for non-linearity.
- Contrastive loss function: For a given set \(\{\widetilde{x}_k\}\) that includes a positive pair of examples \(\widetilde{x}_i\) and \(\widetilde{x}_j\), the contrastive prediction task aims to identify \(\widetilde{x}_j\) in \(\{\widetilde{x}_k\}_{k \neq i}\) for a given \(\widetilde{x}_i\).
simCLR FrameworkGoogle AI
Working of simCLR algorithm
First, we sample a mini-batch of N examples and define the contrastive prediction task on augmented examples. After applying a series of data augmentations (random crop + resize + color distortion + grayscale) to the N examples, 2N data points are generated (since we generate a pair of views per example). Each augmented image is passed through the base encoder to get its representation. The base encoder is followed by a projection head that maps the encoder output to the representations \(z_i\) and \(z_j\) as presented in the paper. For each augmented image, we thus get an embedding vector, and these embedding vectors are later used for calculating the loss.
simCLR algorithm
Calculating loss
After getting the representations of the augmented images, the similarity of those images is calculated using cosine similarity. For two augmented images \(x_i\) and \(x_j\), cosine similarity is calculated on the projected representations \(z_i\) and \(z_j\).
\(s_{i,j} = \frac{z_i^{T}z_j}{||z_i||\,||z_j||}\), where \(z_i^{T}\) denotes the transpose of \(z_i\) and \(||z||\) is the norm of the vector.
SimCLR uses NT-Xent (Normalized Temperature-scaled Cross Entropy) loss for calculating the loss.
Here \(z_i\) and \(z_j\) are the output vectors obtained from the projection head.
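For reference, the NT-Xent loss for a positive pair \((i, j)\), as defined in the SimCLR paper, is \(\ell_{i,j} = -\log \frac{\exp(s_{i,j}/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(s_{i,k}/\tau)}\), where \(\tau\) is the temperature parameter and the sum in the denominator runs over all the other augmented examples in the batch; the final loss is averaged over all positive pairs.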
After training SimCLR on the contrastive learning task, it can be used for transfer learning. For downstream tasks, the representations from the encoder are used rather than the representations from the projection head. These representations can be used for tasks such as classification and detection.
Results
The proposed SimCLR outperformed previous self-supervised and semi-supervised methods on ImageNet. A linear classifier trained on the self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, a 7% relative improvement over the previous state of the art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, SimCLR achieves 85.8% top-5 accuracy, outperforming AlexNet with 100x fewer labels.
ImageNet top-1 accuracy of linear classifiers trained on representations learned with different self-supervised methods (pre-trained on ImageNet). The gray cross indicates supervised ResNet-50. SimCLR is shown in bold.
References
- “A Simple Framework for Contrastive Learning of Visual Representations”
- “SimCLR Slides, Google Brain Team”
- Contrastive Representation Learning: A Framework and Review
- The Illustrated SimCLR Framework",Paper Explanation: A Simple Framework for Contrastive Learning of Visual Representations (simCLR)
https://nepalprabin.github.io./posts/2023-07-04-augmented-language-models.html,"Augmenting Large Language Models: Expanding Context and Enhancing Relevance
With the rise of ChatGPT and other large language models (LLMs), the potential for AI to surpass human capabilities has become a topic of both fascination and concern. While LLMs excel at understanding language, following instructions, and reasoning, they often fall short when it comes to performing specific tasks. Simply inputting a prompt into ChatGPT may result in answers that are unrelated or out of context, a phenomenon known as “hallucination.” To obtain relevant information, it is crucial to provide the model with the appropriate context. However, the size of the context window is limited, posing a challenge in capturing all necessary information. Although the context size has increased over time, storing extensive information within a fixed context window remains impractical and expensive. This is where the augmentation of language models comes into play.
Augmenting large language models involves three primary approaches:
- retrieval,
- chains, and
- tools.
These methods aim to enhance the capabilities of LLMs by providing them with additional resources and functionalities.
Retrieval Augmentation:
Retrieval augmentation involves leveraging an external corpus of data for the language model to search through. Traditionally, retrieval algorithms employ queries to rank relevant objects in a collection, which can include images, texts, documents, or other types of data. To enable efficient searching, the documents and their corresponding features are organized within an index. This index maps each feature to the documents containing it, facilitating quick retrieval. Boolean search determines the relevance of documents based on the query, while ranking is typically performed using algorithms like BM25 (Best Match 25).
BM25 (Best Match 25) is a ranking function commonly used in information retrieval to measure the relevance of a document to a given query. It is a probabilistic retrieval model that enhances the vector space model by incorporating document length normalization and term frequency saturation.
In BM25, the indexing process involves tokenizing each document in the collection into terms and calculating term statistics such as document frequency (df) and inverse document frequency (idf). Document frequency represents the number of documents in the collection containing a particular term, while inverse document frequency measures the rarity of the term across the collection.
During the querying phase, the query is tokenized into terms, and term statistics, including query term frequency (qtf) and query term inverse document frequency (qidf), are computed. These statistics capture the occurrence and relevance of terms in the query.
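To make the scoring step concrete, here is a simplified sketch of the BM25 scoring function (my own illustration; real search engines use tuned variants of this formula):
import math

def bm25_score(query_terms, doc_terms, df, n_docs, avg_doc_len, k1=1.5, b=0.75):
    # df: dict mapping term -> number of documents containing it (document frequency)
    score = 0.0
    doc_len = len(doc_terms)
    for term in query_terms:
        tf = doc_terms.count(term)  # term frequency in the document
        if tf == 0 or term not in df:
            continue
        # inverse document frequency: rare terms contribute more
        idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
        # term frequency saturation (k1) and document length normalization (b)
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return score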
While traditional retrieval methods primarily rely on keyword matching and statistical techniques, modern approaches leverage AI-centric retrieval methods that utilize embeddings. These methods offer improved search capabilities and help retrieve contextually relevant information.
Chains
Chains involve using the output of one language model as the input for another. By cascading multiple models together, the output of each model becomes the input for the subsequent one. This chaining process allows the models to build upon each other’s knowledge and reasoning abilities, potentially leading to more accurate and contextually appropriate responses.
The sequential arrangement of models in a chain creates a pipeline of interconnected language models, where the output of one model serves as the input for the next. This pipeline allows for a cascading flow of information and reasoning, enabling the models to collectively enhance their understanding and generate more accurate responses. By leveraging a chain of language models, each model can contribute its specialized knowledge and capabilities to the overall task. For example, one model may excel at language comprehension, while another may possess domain-specific knowledge.
As the input passes through the chain, each model can refine and expand upon the information, leading to a more comprehensive and contextually relevant output. The chaining process in language models has the potential to address the limitations of individual models, such as hallucination or generating irrelevant responses. By combining the strengths of multiple models, the pipeline can help mitigate these issues and produce more reliable and accurate results.
Furthermore, the pipeline can be customized and tailored to specific use cases or tasks. Different models can be integrated into the chain based on their strengths and compatibility with the desired objectives. This flexibility allows for the creation of powerful and specialized systems that leverage the collective intelligence of multiple language models.
Langchain
Langchain has emerged as an immensely popular tool for constructing chains of language models, making it one of the fastest-growing open-source projects in this domain. With support for both Python and JavaScript, it provides a versatile platform for building applications and can be seamlessly integrated into production environments. Langchain serves as the fastest way to kickstart development and offers a wide range of pre-built chains tailored for various tasks. Many developers find inspiration from Langchain and end up creating their own customized chaining solutions. One of the key strengths of Langchain lies in its extensive repository, which houses numerous examples of different chaining patterns. These examples not only facilitate idea generation but also serve as valuable resources for learning and gaining insights into effective chaining techniques. Whether for rapid prototyping or constructing production-grade systems, Langchain strikes a balance between ease of use and flexibility, empowering developers to effortlessly create their own chaining systems when needed.
The building blocks of Langchain are chains. Chains can be simple/generic or specialized. One simple chain is a generic chain that contains a single LLM. A generic chain takes a prompt and uses the LLM for text generation based on that prompt. Let's see how to build a simple chain using OpenAI's gpt-3.5-turbo model.
import os
os.environ[""OPENAI_API_KEY""] = ""...""
from langchain.prompts import PromptTemplate
template = """"""
Who won the oscar for the best actor in a leading role on {year}?
""""""
prompt = PromptTemplate(
input_variables=[""year""],
template=template,
)
print(prompt.format(year=2012))
Output: Who won the oscar for the best actor in a leading role on 2012?
PromptTemplate helps to design the prompt for your task, and you can provide input variables if you want, like below:
template = """"""
Who won the oscar for the best {role} on {year}?
""""""
While creating a prompt template with multiple variables, you need to pass all those variables in the input_variables argument:
prompt = PromptTemplate(
input_variables=[""role"", ""year""],
template=template,
)
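To actually run one of these prompts against gpt-3.5-turbo, a minimal sketch could look like the following (assuming a valid OPENAI_API_KEY and the LangChain API as it existed when this post was written):
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain

llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)

# the chain fills the template with the given variables and sends it to the model
print(chain.run(role='actor in a leading role', year='2012'))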
Tools
Another way to give LLMs access to the outside world is to let them use tools.
Using Tools in Langchain
Tools are a flexible way to augment a language model with external data. There are two ways to build tools into language models: the first is to manually create chains, whereas the latter is to use plugins and let the model figure it out. Some example tools that can be used include Arxiv, Bash, Bing Search, Google, etc.
Tools can be used in langchain using following code snippet (in Python):
from langchain.agents import load_tools
tool_names = [...]
tools = load_tools(tool_names)
tools
You can name the tools that you are going to use and load them using the load_tools method.
Let’s use Python’s requests module as a tool to extract data from the web
from langchain.agents import load_tools
tool_names = ['requests_all']
requests_tools = load_tools(tool_names)
requests_tools
Output:
[
RequestsGetTool(name='requests_get', description='A portal to the internet. Use this when you need to get specific content from a website. Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsPostTool(name='requests_post', description='Use this when you want to POST to a website.\n Input should be a json string with two keys: ""url"" and ""data"".\n The value of ""url"" should be a string, and the value of ""data"" should be a dictionary of \n key-value pairs you want to POST to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the POST request.\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsPatchTool(name='requests_patch', description='Use this when you want to PATCH to a website.\n Input should be a json string with two keys: ""url"" and ""data"".\n The value of ""url"" should be a string, and the value of ""data"" should be a dictionary of \n key-value pairs you want to PATCH to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the PATCH request.\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsPutTool(name='requests_put', description='Use this when you want to PUT to a website.\n Input should be a json string with two keys: ""url"" and ""data"".\n The value of ""url"" should be a string, and the value of ""data"" should be a dictionary of \n key-value pairs you want to PUT to the url.\n Be careful to always use double quotes for strings in the json string.\n The output will be the text response of the PUT request.\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsDeleteTool(name='requests_delete', description='A portal to the internet. Use this when you need to make a DELETE request to a URL. Input should be a specific url, and the output will be the text response of the DELETE request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None))
]
Each tool inside the requests_all tool list contains a requests wrapper. We can work directly with these wrappers as below:
requests_tools[0].requests_wrapper
Output:
TextRequestsWrapper(headers=None, aiosession=None)
We can use TextRequestsWrapper to create a request object and use that object to extract data from the web.
from langchain.utilities import TextRequestsWrapper
requests = TextRequestsWrapper()
requests.get(""https://reqres.in/api/users?page=2"")
Output:
'{""page"":2,""per_page"":6,""total"":12,""total_pages"":2,""data"":[{""id"":7,""email"":""[email protected]"",""first_name"":""Michael"",""last_name"":""Lawson"",""avatar"":""https://reqres.in/img/faces/7-image.jpg""},{""id"":8,""email"":""[email protected]"",""first_name"":""Lindsay"",""last_name"":""Ferguson"",""avatar"":""https://reqres.in/img/faces/8-image.jpg""},{""id"":9,""email"":""[email protected]"",""first_name"":""Tobias"",""last_name"":""Funke"",""avatar"":""https://reqres.in/img/faces/9-image.jpg""},{""id"":10,""email"":""[email protected]"",""first_name"":""Byron"",""last_name"":""Fields"",""avatar"":""https://reqres.in/img/faces/10-image.jpg""},{""id"":11,""email"":""[email protected]"",""first_name"":""George"",""last_name"":""Edwards"",""avatar"":""https://reqres.in/img/faces/11-image.jpg""},{""id"":12,""email"":""[email protected]"",""first_name"":""Rachel"",""last_name"":""Howell"",""avatar"":""https://reqres.in/img/faces/12-image.jpg""}],""support"":{""url"":""https://reqres.in/#support-heading"",""text"":""To keep ReqRes free, contributions towards server costs are appreciated!""}}'
References
- Full Stack Deep Learning (LLM Bootcamp)
- Langchain",Augmenting Large Language Models: Expanding Context and Enhancing Relevance
https://nepalprabin.github.io./posts/2020-12-08-self-supervised-learning.html,"Self-supervised Learning
I have been exploring self-supervised learning and been through papers and blogs to understand it. Self-supervised learning is considered the next big thing in deep learning and why not! If there is a way to learn without providing labels, then this enables us to leverage a large amount of unlabeled data for our tasks. I am going to provide my understanding of self-supervised learning and will try to explain some papers about it.
We are familiar with supervised learning, wherein we provide features and labels to train a model, and the model uses those labels to learn from the features. But labeling data is not an easy task, as it requires time and manpower. A large amount of data is generated daily and remains unlabeled. This data may be in the form of text, images, audio, or videos and can be used for different purposes. But there is a catch: this data does not contain labels, and it is difficult to work with. Here comes self-supervised learning to the rescue.
So what is self-supervised learning and why is it needed?
Self-supervised learning is a learning framework that does not use human-labeled datasets to learn a visual representation of the data, also known as representation learning. We are familiar with tasks such as classification, detection, and segmentation, where a model is trained in a supervised manner and later used on unseen data. These tasks are normally trained for specific scenarios; e.g., a model trained on the ImageNet dataset can only recognize its 1000 categories. For categories that are not included in the ImageNet dataset, new annotations need to be made, which is expensive. Self-supervised learning makes this easier, as it requires only unlabeled data to formulate the learning task. To train models in a self-supervised manner with unlabeled data, one needs to frame a supervised learning task (also known as a pretext task). The resulting models can later be used for downstream tasks such as image classification, object detection, and many more.
*Pretext task: these are the tasks that are used for pre-training. **Downstream task: these are the tasks that utilize the pre-trained model or its components to perform tasks such as image recognition or segmentation.
A general pipeline of self-supervised learning (source)
Self-supervised Techniques for Images
Many ideas have been proposed for self-supervised learning on images. A more common methodology or workflow is to train a model in one or multiple pretext tasks with the use of unlabeled data and use that model to perform downstream tasks. Some of the proposed ideas of self-supervised techniques for images are summarized below:
Rotation
To learn representations by predicting image rotations, Gidaris et al. proposed an architecture where features are learned by training ConvNets to recognize the rotation applied to an image before it is fed to the network. This set of geometric transformations defines the classification pretext task that the model has to learn, which can later be used for downstream tasks. The geometric transformation rotates the image by one of 4 different angles (0, 90, 180, and 270 degrees), and the model has to predict which of the 4 transformations was applied. To solve the task, the model has to understand the concept of objects, such as their location, their type, and their pose.
Illustration of the self-supervised task proposed for semantic feature learning. Given four possible geometric transformations, the 0, 90, 180, and 270 degree rotations, a ConvNet model was trained to recognize the rotation that is applied to the image it gets as input. (source)
More details at: Unsupervised Representation Learning By Predicting Image Rotations
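As a quick sketch (my own, using TensorFlow), the rotated views and their pseudo-labels for this pretext task can be generated like this:
import tensorflow as tf

def make_rotation_examples(image):
    # label k corresponds to a rotation of k * 90 degrees
    images = [tf.image.rot90(image, k=k) for k in range(4)]
    labels = list(range(4))
    return images, labels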
Exemplar
In Exemplar-CNN (Dosovitskiy et al., 2015), a network is trained to discriminate between a set of surrogate classes. Each surrogate class is formed by applying random data augmentations such as translation, scaling, rotation, contrast and color shifts. While creating surrogate training data:
- N patches of size 32 x 32 pixels are randomly sampled from different images at varying positions. Since we are interested in patches containing objects or parts of objects, random patches are sampled only from regions containing considerable gradients.
- Each patch is transformed with a variety of image transformations. All the resulting transformed patches are considered to belong to the same surrogate class.
The pretext task is to discriminate between the set of surrogate classes.
Several random transformations applied to one of the patches extracted from the STL unlabeled dataset. The original patch is in the top left corner (source)
More details at: Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks
Jigsaw Puzzle
Another approach to learning visual representations from an unlabeled dataset is to train a ConvNet to solve a Jigsaw puzzle as a pretext task, which can later be used for downstream tasks. In the Jigsaw puzzle task, the model is trained to place 9 shuffled patches back into their original positions. To do this, Noroozi et al. proposed a Context Free Network (CFN), a siamese CNN that uses shared weights. The patches are combined in a fully connected layer.
Learning image representations by solving Jigsaw puzzles.
- The image from which the tiles (marked with green lines) are extracted.
- A puzzle obtained by shuffling the tiles.
- Determining the relative position between the central tile and the top two tiles from the left can be very challenging (source)
From the set of defined puzzle permutations, one permutation is randomly picked and the 9 patches are arranged according to it. The CFN then returns a vector with a probability value for each index. Given 9 tiles, there are 9! = 362,880 possible permutations, which makes the Jigsaw puzzle task very hard. To control the difficulty, the paper proposes to shuffle patches according to a predefined set of permutations and configures the model to predict a probability vector over all the indices in that set.
Context Free Network Architecture (source)
More details at: Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles
Relative Patch Location
This approach by Doersch et al. predicts the position of a second image patch relative to a first patch. For this pretext task, a network is fed two input patches, which are passed through several convolutional layers. The network outputs a probability for each of the eight possible positions. This can be treated as a classification problem with 8 classes, where the second patch is assigned to one of the 8 possible positions relative to the first patch.
The algorithm receives two patches in one of these eight possible spatial arrangements, without any context, and must then classify which configuration was sampled (source)
More details at: Unsupervised Visual Representation Learning by Context Prediction
References:
- The Illustrated Self-Supervised Learning
- Self-supervised Learning
- Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey",Self-supervised Learning
https://nepalprabin.github.io./posts/2020-08-15-deep-convolutional-general-adversarial-networks-dcgans.html,"Deep Convolutional Generative Adversarial Networks (DCGANs)
DCGAN (Deep Convolutional Generative Adversarial Network) uses convolutional layers in its design.
Architectural Details for DCGAN
- Uses a convolutional network without max-pooling. Instead, it uses strided convolutions and transposed convolutions for downsampling and upsampling respectively. To find out how pooling and strided convolution differ, please go through this.
- Removed all fully connected layers
- Used batch normalization to bring stability to learning. It is done by normalizing the input to have zero mean and unit variance. Batch normalization was added to all layers except the generator output layer and the discriminator input layer
- ReLU activation is used in the generator except for the output layer which uses tanh activation function
- LeakyReLU activation is used at all layers in the discriminator
Training Generator in DCGAN
The generator takes a uniform noise distribution \(z\) as input. This input is reshaped, with the help of a fully connected layer, into a three-dimensional volume with a small base (width * height) and large depth. Then, using transposed convolutions, the output of each layer is upsampled. Each transposed convolution layer is followed by batch normalization, which helps stabilize the training of our GAN.
source
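A minimal Keras sketch of such a generator (my own illustration; the latent dimension of 100 and the layer widths are illustrative choices, not the paper's exact configuration):
from tensorflow import keras

def build_generator(latent_dim=100):
    model = keras.Sequential([
        # project and reshape the noise vector into a small spatial volume
        keras.layers.Dense(4 * 4 * 512, input_shape=(latent_dim,)),
        keras.layers.Reshape((4, 4, 512)),
        keras.layers.BatchNormalization(),
        keras.layers.ReLU(),
        # upsample with transposed convolutions, each followed by batch norm and ReLU
        keras.layers.Conv2DTranspose(256, kernel_size=5, strides=2, padding='same'),
        keras.layers.BatchNormalization(),
        keras.layers.ReLU(),
        keras.layers.Conv2DTranspose(128, kernel_size=5, strides=2, padding='same'),
        keras.layers.BatchNormalization(),
        keras.layers.ReLU(),
        # output layer uses tanh and no batch normalization
        keras.layers.Conv2DTranspose(3, kernel_size=5, strides=2, padding='same', activation='tanh'),
    ])
    return model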
Details of Adversarial Training
DCGAN was trained on three datasets: Large-scale Scene Understanding (LSUN), ImageNet-1k and Faces dataset.
- Training images were scaled to the range of tanh activation function [-1, 1] and no further pre-processing was performed
- All models were trained with mini-batch size of 128
- Weights were initialized from a zero-centered normal distribution with a standard deviation of 0.02
- In the case of LeakyReLU, the value of alpha (the slope of the leak) was set to 0.2
- The Adam optimizer was used for updating weights. The learning rate was set to 0.0002 and the momentum term was reduced from 0.9 to 0.5
Dataset Details
- DCGAN model was trained on LSUN bedroom dataset comprising over 3 million training images.
- No data augmentation was used
- A de-duplication process was performed to decrease the likelihood of the generator memorizing input examples. For this, an autoencoder was trained to find and remove near-duplicate images from the training dataset. The de-duplication process removed around 275k images.
Generated bedrooms after five epochs of training (source)
References
- Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
- Henry AI Labs",Deep Convolutional Generative Adversarial Networks (DCGANs)