Tuesday, July 9, 2024

Visit to Prani (The Pet Sanctuary)

Visit to PRANI, a mini zoo located in Bannerghatta, Bengaluru.

Sunday, the 20th August, 2023

It was a cool, pleasant morning. My parents had already got up and prepared breakfast. My younger brother, Shivaan, and I were in the mood for a short holiday trip. To our pleasant surprise, our parents had already decided to visit PRANI, the pet sanctuary nestled a few kilometers away in Bannerghatta.

We therefore had a quick breakfast at our grandparents’ house nearby. Our grandparents also enthusiastically joined us for company. All six of us started at 9.45 a.m.

A search on Google showed that our destination was about one and a half hours’ drive from our home on Panathur Main Road, Bengaluru.

Both of us children were impatient to reach our destination.

We reached PRANI around 11.15 a.m. It is nestled in the midst of green, leafy trees and bushes. My father parked the vehicle comfortably, as not many visitors had arrived by that time, even though it was a Sunday holiday.

When we presented ourselves at the entrance gate, the person at the gate checked whether we had already booked our visit. My father had booked the entry tickets for all of us; we paid Rs.2400/- @ Rs.400/- per head and secured entry. The woman in charge of the entry gate asked for my name and my brother’s. She asked whether I would be leading the group, and I replied in the affirmative. She cautioned me that she would ask me for details of the visit when we came back at the end, and she wished us a joyful and informative visit.

PRANI has been set up to educate children as well as grown-ups about the nature, behaviour, and living and eating habits of animals and birds.

 

We thereafter rushed straight into the first pet house.

It is home to different rodent families: rats, mice, moles, guinea pigs, etc. We were allowed to touch and feel them. Although rats generally live in filthy, dirty environments, the ones at PRANI are kept in an excellent environment and are clean and well kept. Rats normally bite anything and everything they find. You may wonder why. It is a natural instinct for them to bite everything that comes their way! They have upper and lower pairs of ever-growing incisor teeth, which grow rapidly. The irritating sensation during this growth prompts them to keep biting something or other so that their teeth stay sharp and worn down.

Guinea pigs, which belong to the rodent family, are originally natives of South America. They appear very cute, with a robust body, short limbs, a large head and eyes, and short ears. They have a life span of about eight years. Their fur is white, cream, reddish, or a combination of these.

After spending about twenty minutes with the rodents, we moved to the next enclosure, where a mare (a female horse) and two donkeys are kept. The well-grown mare’s name is Maaya. The horse is a one-toed, hoofed mammal; horses have strong, sturdy, compact bodies adapted to running fast. Maaya is quite a well-behaved mare. Like the other visitors, we too fed her green wheat leaves, which she enjoyed.

Next to Maaya were two donkeys, both Krishnagiri donkeys. They were quite healthy, weighing about 100 to 150 kilograms. Donkeys are generally used as working animals, helping their owners carry loads, and they have a life expectancy of about 12 to 15 years. Although most of the visitors milled around the mare, not many went near the donkeys either to feed them or to admire them.

In the next enclosure were a few goats, both male and female, most of them white in colour. They are quite cute and gentle in nature and made friends with us quickly; we patted them and took photos with them. It is said that goats have a life span of about 15 to 18 years and an excellent sense of smell. An interesting fact about goats is that they are cud-chewing mammals.

Next in line were the peacocks. As you know, the peacock is our national bird too. For a variety of reasons, peacocks cannot fly long distances like other birds. Their charming beauty, displayed when the male peacock dances, is beyond words. The shining blue breast and neck and the splendid bronze-green tail are the peacock’s main features. This stunning beauty is a symbol of dominance and is what attracts the females of the species.

A few healthy, beautiful ducks were swimming in a pond of clear water; it appeared they were playing around after a sumptuous breakfast! These waterfowl are normally seen in ponds and other water bodies like lakes and rivulets.

Next, we were taken to the enclosure where the rabbits are kept. There were two types of rabbits: the European (Cinnamon) rabbit and the hare. It was a surprise to me to learn that, contrary to common belief, carrots are not their favourite food; they like to eat green grass and leafy vegetables.

We also saw a few monitor lizards, considered the most intelligent of all lizards. We were told that their natural habitat is coastal forests and river beds. There are more than 70 species of monitor lizards, native to African and Asian countries. Their colours varied from brown to green and grey, with scales on their skin.

There were a few emus walking about on their long legs. Emus are large flightless birds found all over Australia; they are the second largest birds in the world after the ostrich. They have large bodies, strong legs, and a long neck, and although they have wings, these are not suitable for flying. Emus are omnivores, meaning they eat both plants and animals, but their diet is primarily plant material, grass, and seeds. Female emus lay 5 to 15 eggs, which are then incubated for about eight weeks.

We also went inside the aviary, which is quite big, with a great number of birds of various colours and sizes.

The last stop was the aquarium.

The aquarium at PRANI is very small and does not have many fish in it, but the fish it has are very colourful and active. The way they swim is very interesting; their movements are organised and systematic. Watching the movements of fish in an aquarium for long periods is not only interesting and amusing, it also helps calm the mind.

After spending more than two hours among the birds, emus, rodents, etc., we said goodbye to them.

We came back after a fruitful visit to PRANI.

I would recommend a visit to this home of birds and animals so that we keep in touch with our fellow creatures with whom we share our planet.

                                               @@@

By Srihan Rajesh

Grade 3-B 

Wednesday, September 30, 2020

Create Pipelines for AI DevOps



Q1. What are pipelines?

A1.

A circular hose .. which runs for miles and carries water, oil, waste water .. ehh??? Okay, let me re-phrase .. what are DevOps pipelines?? Aah

DevOps pipelines are a flow methodology in which a set of operations is carried out in a serial manner.

A DevOps pipeline is a set of practices that the development (Dev) and operations (Ops) teams implement to build, test, and deploy software faster and easier. One of the primary purposes of a pipeline is to keep the software development process organized and focused.
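To make the "serial manner" concrete, here is a toy sketch (purely illustrative, not from any real DevOps tool) of stages running in a fixed order, where a failure halts everything downstream:

# Toy illustration: pipeline stages run serially; an exception in any
# stage stops the line before the later stages execute.
def build():
    print("build: compile / package the code")

def test():
    print("test: run unit tests")

def deploy():
    print("deploy: ship the artifact")

for stage in (build, test, deploy):
    stage()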

Actually, "pipeline" seems a bit misleading here .. unless pipes are meant to be broken and junctions fitted .. an assembly line would be more apt! But yeah, "AI DevOps Assembly Line" won't sound as good ;)

This also means that, as in software development, we have a continuously running and improving assembly line. At each stage of an assembly line (chassis making, fitting engines, fitting the gearbox, painting, etc.), each task is refined to be the best it can be. The stages are then assembled in a pipelined manner to ensure the best product at the end.

Q2. So how many tasks (like chassis making, fitting engines, fitting the gearbox, painting, etc.) are you planning to do here?

A2. Well, my analogy here would be:

a) The developer writes some ML code and unit-tests it.
b) The ML code then needs a load of data (read: the training set) - the unit test in step a) uses only a small subset.
c) Dockerize the code along with all dependencies and create a container.
d) By dependency I would also include the data set (it could again be imported from Keras).
e) Deploy the container onto Kubeflow.

When you develop and deploy an ML system, the ML workflow typically consists of several stages. Developing an ML system is an iterative process. You need to evaluate the output of various stages of the ML workflow, and apply changes to the model and parameters when necessary to ensure the model keeps producing the results you need.


In our experiment, we will create a model for identifying fashion apparel. The training data set is loaded from Keras (classification.py); this creates a model [mymodel.h5], and another Python program, replicating a production environment, compares its test data against this model and derives an intelligent output.



Q3. Now what on earth is that? Kubeflow?

A3. Kubeflow is a free and open-source machine learning platform designed to enable using machine learning pipelines to orchestrate complicated workflows running on Kubernetes.

Q4. Very well .. are we in an era of Ekta Kapoor .. what next, "Kyunki Saas Bhi Kabhi Bahu Thi"? Why does everything start with "K" .. what is Kubernetes?? Keras??

A4. Kubernetes (commonly stylized as K8s) is an open-source container-orchestration system for automating computer application deployment, scaling, and management.

Keras is an open-source neural-network library written in Python.

Great, now .. "Kuch kuch samajh aata hai" (it is starting to make some sense) :)

It's like this .. say I have a laptop running Ubuntu (Kubernetes is like that laptop, with the hardware replaced by virtual hardware), and then I have a software process called Kubeflow which helps spawn workloads on the Kubernetes (K8s) instance to realize multiple deployments.

Keras just comes in between, to help develop ML code in Python.

This is how my folder structure looks; we are mainly interested in the following files:

Pipeline/pipeline2/component1/tf/classification.py

from __future__ import absolute_import, division, print_function, unicode_literals

# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt

#Printing the Tensorflow version
print(tf.__version__)

# Getting fashion mnist dataset from Keras
# Splitting datasets into Test and Train datasets
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()


#Labelling the datasets
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

train_images.shape
....
....
# Grab an image from the test dataset.
img = test_images[1]

# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))

predictions_single = model.predict(img)

plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)

np.argmax(predictions_single[0])

#Saving model
model.save('mymodel.h5')
predictions[9]
np.argmax(predictions[9])
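The "...." above elides the model definition and training. For completeness, here is a minimal sketch of what that step typically looks like in the standard TensorFlow fashion-MNIST tutorial this code appears to follow (an assumption on my part, not the exact elided code; the tutorial also defines the plot_value_array plotting helper used above):

# Normalize pixel values to the [0, 1] range
train_images = train_images / 255.0
test_images = test_images / 255.0

# A simple dense classifier over the flattened 28x28 images
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=10)

# Wrap the model with a softmax layer to get per-class probabilities
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)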


Pipeline/pipeline2/component1/Dockerfile



FROM tensorflow/tensorflow:nightly-py3-jupyter
RUN apt-get update &&\
apt-get upgrade -y &&\
apt-get install -y git && \
pip install matplotlib

COPY . .

# Replace with your VM IP, where the local GitLab ought to run.
ENV ip "XX.XX.XXX.XXX"
# Replace with the user (else root if you use public keys)  [More on how to create a root user below]
ENV user "root"
# Replace with the user password or root password
ENV passwd "xxxxx"

CMD /usr/bin/git config --global user.name "root" && \
    /usr/bin/git config --global user.email "rajesh.bhaskaran@aricent.com" && \
    /usr/bin/git clone http://${user}:${passwd}@${ip}:30080/${user}/tensorflow.git && \
    /usr/bin/python3 ./tf/classification.py && cd tensorflow/ && cp ../mymodel.h5 . && \
    /usr/bin/git pull --allow-unrelated-histories && \
    /usr/bin/git add mymodel.h5 && \
    /usr/bin/git commit "-m" "Commit 1.0" && \
    /usr/bin/git push http://${user}:${passwd}@${ip}:30080/${user}/tensorflow.git

>>  [More on how to create root user here]

rajeshb@AzureUbuntu:~$ sudo passwd            # This creates and sets a password for the root user
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

This dockerizes the program into a container; when the container runs, it trains and creates the model (the mymodel.h5 file) and pushes it to the GitLab repo.
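As an aside, the docker build / docker push for this component can also be scripted with the Docker SDK for Python (pip install docker) instead of the CLI -- a sketch, assuming the image is tagged omie12/image-class:5.0, which is what component2's Dockerfile pulls FROM:

# Sketch: build and push component1's image via the Docker SDK for Python
import docker

client = docker.from_env()

# Build the image from component1's Dockerfile (path as in the folder
# structure above; the tag matches the base image component2 uses)
image, build_logs = client.images.build(
    path="Pipeline/pipeline2/component1",
    tag="omie12/image-class:5.0",
)

# Push it to the registry so the cluster can pull it
for line in client.images.push("omie12/image-class", tag="5.0",
                               stream=True, decode=True):
    print(line)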

Pipeline/pipeline2/component2/tf/prediction.py

from __future__ import absolute_import, division, print_function, unicode_literals
import os
import tensorflow as tf
from tensorflow import keras

#Load model
new_model = tf.keras.models.load_model('mymodel.h5')
fashion_mnist = keras.datasets.fashion_mnist

# Load Inference data
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = train_images / 255.0

test_images = test_images / 255.0

# Getting model metrics
test_loss, test_acc = new_model.evaluate(test_images,  test_labels, verbose=2)

#Print Test accuracy
print('\nTest accuracy:', test_acc)
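Note that prediction.py only reports aggregate accuracy; to actually "derive an intelligent output" for a single image, as described in the experiment above, a few extra lines appended to prediction.py along these lines would do (a sketch, reusing the class_names list from classification.py):

import numpy as np

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Classify a single test image: the model expects a batch, so add an axis
img = np.expand_dims(test_images[0], 0)
logits = new_model.predict(img)
print('Predicted:', class_names[np.argmax(logits[0])])
print('Actual:   ', class_names[test_labels[0]])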

Pipeline/pipeline2/component2/Dockerfile

FROM omie12/image-class:5.0
RUN pip install -q pyyaml h5py && \
     rm -rf tensorflow

COPY . .

# Settings same as in the above Dockerfile
ENV ip "xx.xx.xx.xxx"
ENV user "xxx"
ENV passwd "xxxxxx"

CMD /usr/bin/git clone http://${user}:${passwd}@${ip}:30080/${user}/tensorflow.git && cd tensorflow && ls && /usr/bin/python3 ../tf/prediction.py

Setting up the other components of this assembly line:

1. GitLab: running it locally (via Docker)

The GitLab Docker images are monolithic images of GitLab running all the necessary services in a single container.

The following command runs a local instance of GitLab:

rajeshb@AzureUbuntu:~$ sudo docker run --detach --name gitlab --hostname localhost --publish 30080:30080 --publish 30022:22 --env GITLAB_OMNIBUS_CONFIG="external_url 'http://localhost:30080'; gitlab_rails['gitlab_shell_ssh_port']=30022;" gitlab/gitlab-ce:9.1.0-ce.0


>> Open the local GitLab at http://<VM-IP>:30080
>> Click on "New Project" to create a project named tensorflow (in our case)





>> Create and commit a README file in the new project.



2. Jupyter Notebook Server

Jupyter is an open-source service for "interactive computing across dozens of programming languages". Spun off from IPython in 2014 by Fernando Pérez, Project Jupyter supports execution environments in several dozen languages. We can use Jupyter to run the Python script that generates the pipeline YAML file.

Jupyter is part of the Kubeflow platform. The two steps below should get it up and running:

rajeshb@AzureUbuntu:~/kubeflow$ pwd
/home/rajeshb/kubeflow
rajeshb@AzureUbuntu:~/kubeflow$ sudo minikube start --vm-driver none
>    Minikube should start successfully !!

rajeshb@AzureUbuntu:~/kubeflow$ kfctl apply -V -f kfctl_k8s_istio.v1.0.2.yaml
> All YAML processes should be deployed successfully !!!
> INFO[0014] Applied the configuration Successfully!  < -- Should be seen

Possible error: Permission denied
Solution: rajeshb@AzureUbuntu:~/kubeflow$ sudo chown -R $USER:$USER ~/.minikube/

This sets the ownership of the hidden .minikube folder to your own user.

Step 1: Create the Pipeline

>    Create a Jupyter Notebook to run the Python code: open the Kubeflow Dashboard and launch a notebook server from it.




#pip install kfp==1.0.1 --user
import kfp
from kubernetes.client.models import V1EnvVar

@kfp.dsl.component
def my_component1():
    ...
    return kfp.dsl.ContainerOp(
        name='<docker image name>',           # component 1 docker name
        image='<docker image name>:<tag>'     # component 1 docker name:tag
    )

@kfp.dsl.component
def my_component2():
    ...
    return kfp.dsl.ContainerOp(
        name='<docker image name>',           # component 2 docker name
        image='<docker image name>:<tag>'     # component 2 docker name:tag
    )

# Defining the pipeline that uses the above components.
@kfp.dsl.pipeline(name='pipeline2', description='mnist tf model')
def model_generation():
    component1 = (my_component1()
        .add_env_variable(V1EnvVar(name='ip', value='xx.xx.xxx.xx'))     # VM IP
        .add_env_variable(V1EnvVar(name='user', value='xxxx'))           # VM user / root
        .add_env_variable(V1EnvVar(name='passwd', value='xxxxxx'))       # VM user / root password
    )
    component1.container.set_image_pull_policy("IfNotPresent")
    component2 = my_component2().after(component1)
    component2.container.set_image_pull_policy("IfNotPresent")

pipeline_func = model_generation
pipeline_filename = pipeline_func.__name__ + '.pipeline2.zip'

# Compile the pipeline and export it
kfp.compiler.Compiler().compile(model_generation, pipeline_filename)




In case the kfp package is missing, uncomment and run the first line of the cell above:
>> pip install kfp==1.0.1 --user

This generates the pipeline YAML file, packaged as a .zip, in the Jupyter home page.
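Alternatively, instead of downloading and uploading it by hand as described next, the same zip can be uploaded programmatically with the KFP SDK -- a sketch; the host URL is my assumption for a cluster exposed via Istio on port 31380, as at the end of the VM setup post:

import kfp

# Hypothetical endpoint: point this at wherever your Pipelines API is exposed
client = kfp.Client(host='http://<Public_IP>:31380/pipeline')
client.upload_pipeline('model_generation.pipeline2.zip',
                       pipeline_name='pipeline2')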


Download it. The model_generation.pipeline2.zip file then needs to be uploaded under Kubeflow Dashboard --> Pipelines.



Choose file -- Select model_generation.pipeline2.zip from the Download folder


>>  Click on Create Pipeline 

      -- Pipeline gets created in the Pipeline Homepage 

>> Click on the created Pipeline 


>> Create Run -- This would execute the Pipeline 

>> Logs can be seen in the web console.

Wednesday, September 16, 2020

Steps to create an Ubuntu VM in Azure

1.    Create a free account on Azure by creating a profile using the below URL

>>  https://azure.microsoft.com/en-in/free/

>>  Click on “Start Free” as below

>>  Create your profile using your mail-ID



Image 1

2.    On the Azure portal, create a new virtual machine

Image 2


3.    Create a new Virtual Machine as below


Image 3

4.    Click on “Select size” to open up Image 5

Image 4

Search for “D4ds_v4” as the VM size

                                                        >>       To get Premium SSD

5.    Choose default configurations under “Create new disk”

Image 9

10. Choose default values and click “Next: Networking”


Image 10

11. Choose default values and click “Next: Management”


Image 11

12. Choose default values and click “Next” through the Advanced and Tags screens

13. Click on “Create” on the final screen

14. Click on “Generate new key pair” → Download private key and create resource


Image ‘a’

15. This downloads a “.pem file” with the name of the OS image


 

16. Open PuTTYgen to load and save this .pem key as a private key (do not click on “Generate”)


Image ‘b’

17. Click “Load”, choose “All Files”, and select the .pem file that was downloaded earlier.

 


Image ‘c’

 



18. Press “Save Private Key” and save the key on the desktop.


 

 

19. Get the Public IP from the Azure Console

20. Create a PuTTY session with the Public IP
21. Add the key under SSH → Auth as below


22. Use the login name that was created in the Azure Console


Software Installations

1.      Docker Installation

 

>>  https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04

 

1)      sudo apt-get update

2)      sudo apt-get remove docker docker-engine docker.io

3)      sudo apt autoremove

4)      sudo apt install docker.io

5)      sudo systemctl start docker

6)      sudo systemctl enable docker

 

rajeshb@AzureUbuntu:~$ docker --version

Docker version 19.03.6, build 369ce74a3c

2.      Kubectl installation

 

>>  https://kubernetes.io/docs/tasks/tools/install-kubectl/

rajeshb@AzureUbuntu:~$ curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 41.0M  100 41.0M    0     0   245M      0 --:--:-- --:--:-- --:--:--  244M

rajeshb@AzureUbuntu:~$ chmod +x ./kubectl

rajeshb@AzureUbuntu:~$ sudo mv ./kubectl /usr/local/bin/kubectl

rajeshb@AzureUbuntu:~$ kubectl version --client

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-09T11:26:42Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

 

3.      MINIKUBE Installation

 

>>  https://kubernetes.io/docs/tasks/tools/install-minikube/

 

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube

 

rajeshb@AzureUbuntu:~$ sudo mkdir -p /usr/local/bin/

rajeshb@AzureUbuntu:~$ sudo install minikube /usr/local/bin/

rajeshb@AzureUbuntu:~$ sudo minikube version

minikube version: v1.13.0

commit: 0c5e9de4ca6f9c55147ae7f90af97eff5befef5f-dirty

rajeshb@AzureUbuntu:~$ sudo minikube start --vm-driver none

* minikube v1.13.0 on Ubuntu 18.04

* Using the none driver based on user configuration

X Exiting due to GUEST_MISSING_CONNTRACK: Sorry, Kubernetes 1.19.0 requires conntrack to be installed in root's path

rajeshb@AzureUbuntu:~$ sudo apt-get install conntrack

rajeshb@AzureUbuntu:~$ sudo minikube start --vm-driver none


rajeshb@AzureUbuntu:~$ sudo chown -R $USER $HOME/.kube $HOME/.minikube

rajeshb@AzureUbuntu:~$ sudo minikube status


 

 

rajeshb@AzureUbuntu:~$ mkdir kubeflow

 

rajeshb@AzureUbuntu:~$ wget https://github.com/kubeflow/kfctl/releases/download/v1.0.2/kfctl_v1.0.2-0-ga476281_linux.tar.gz -O $PWD/kubeflow/kfctl_linux.tar.gz

 

wget https://raw.githubusercontent.com/kubeflow/manifests/v1.0-branch/kfdef/kfctl_k8s_istio.v1.0.2.yaml  -O $PWD/kubeflow/kfctl_k8s_istio.v1.0.2.yaml

rajeshb@AzureUbuntu:~$ cd kubeflow

rajeshb@AzureUbuntu:~/kubeflow$ ls -lrt


rajeshb@AzureUbuntu:~$ vim ~/.bashrc

---

export PATH=$PATH:~/kubeflow/

---

rajeshb@AzureUbuntu:~/kubeflow$ kfctl apply -V -f kfctl_k8s_istio.v1.0.2.yaml

rajeshb@AzureUbuntu:~/kubeflow$ sudo kubectl get pod -n kubeflow


 

Open the Kubeflow dashboard URL: http://<Public_IP>:31380