OpenShift Installation And Configuration

OpenShift

OpenShift is a family of containerization software developed by Red Hat. Its flagship product is the OpenShift Container Platform, an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family's other products provide this platform through different environments: OKD serves as the community-driven upstream (akin to CentOS), OpenShift Online is the platform offered as software as a service, OpenShift Dedicated is the platform offered as a managed service, and OpenShift.io is an online application development environment for the platform.

Stable release: 3.11 (October 2018)

Developed by: Red Hat Software

License: Apache License 2.0

Initial release: May 4, 2011

Operating system: Red Hat Enterprise Linux or Container Linux by CoreOS

Written in: Go, AngularJS

OKD (The Origin Community Distribution of Kubernetes that powers Red Hat OpenShift)

Built around a core of OCI container packaging and Kubernetes container cluster management, OKD is also augmented by application lifecycle management functionality and DevOps tooling. OKD provides a complete open-source container application platform.

OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is the upstream Kubernetes distribution embedded in Red Hat OpenShift.

OpenShift is a platform-as-a-service (PaaS) product built on open-source software including Red Hat Enterprise Linux (RHEL), Kubernetes, Docker, etcd, and Open vSwitch. From the HAProxy routing layer down through the kernel itself, there are opportunities to push the limits of scale and performance at each level in the product stack. This guide is meant to assist customers who are interested in deploying scalable OpenShift-based platform-as-a-service clusters. It includes best practices, tuning options, and recommendations for building reliable, performant systems at scale.

What is PaaS (Platform as a Service)?

Platform as a service (PaaS) is a complete development and deployment environment in the cloud, with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. You purchase the resources you need from a cloud service provider on a pay-as-you-go basis and access them over a secure Internet connection.

PaaS accelerates application delivery

PaaS is a cloud application platform that automates the hosting, configuration, deployment, and administration of application stacks in an elastic cloud environment. It gives app developers self-service access so they can easily deploy applications on demand.

Like IaaS, PaaS includes infrastructure—servers, storage, and networking—but also middleware, development tools, business intelligence (BI) services, database management systems, and more. PaaS is designed to support the complete web application lifecycle: building, testing, deploying, managing, and updating.

PaaS allows you to avoid the expense and complexity of buying and managing software licenses, the underlying application infrastructure and middleware, and development tools and other resources. You manage the applications and services you develop, and the cloud service provider typically manages everything else.

Why OpenShift?

Red Hat OpenShift is designed to provide one thing for developers: ease of use without worry. OpenShift automates the build, deployment, and management of applications so that you can focus on delivering your next big idea.

Traditional and cloud-native. On-premises and in the cloud. No matter what you build, Red Hat® OpenShift® is the leading Kubernetes platform for delivering the extraordinary to your users, faster than you can imagine.

OpenShift provides one of the most prominent services in IT infrastructure, PaaS (Platform as a Service), which lets users focus solely on their applications regardless of the environment and infrastructure, whose manual implementation consumes considerable time and effort.

OpenShift customers report shorter application development cycles and deliver better-quality software with greater adoption among their users.

Build a true hybrid cloud container platform, and deploy OpenShift in the cloud or your own datacenter. Migrate applications seamlessly no matter where you run. 

OpenShift tests hundreds of technologies with Kubernetes and helps teams manage container security from the operating system through the application.

No two developers work in exactly the same way. With Red Hat OpenShift, developers can easily deploy applications using a library of supported technologies, so teams can choose the languages, frameworks, and databases they use to build and deploy their services. Our source-to-image (S2I) feature automatically builds new container images, making it easier to ship code faster. Organizations also have the ability to bring their own container images without using S2I.
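As a rough sketch, the same S2I flow can be driven from the CLI with oc new-app; the repository URL and service name below are placeholders, and a 'php' builder image is assumed to be available:

oc new-app php~https://github.com/<your-org>/<your-php-app>.git   # S2I: build an image from the PHP source
oc expose service <your-php-app>                                  # publish the app through a route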

Architecture

A sample OpenShift implementation architecture

OpenShift Marketecture


Thus, the deployment can be made on any infrastructure.

Types of OpenShift releases

OpenShift Origin

OpenShift Origin, also known since August 2018 as OKD (Origin Community Distribution), is the upstream community project used in OpenShift Online, OpenShift Dedicated, and OpenShift Container Platform. Origin provides an open-source application container platform.

OpenShift Container Platform

Secure Kubernetes platform on your own infrastructure

OpenShift Container Platform (formerly known as OpenShift Enterprise) is Red Hat's on-premises private platform as a service product, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux.

Build, deploy and manage your container-based applications consistently across cloud and on-premise infrastructure

OpenShift Container Platform handles cloud-native and traditional applications on a single platform. Containerize and manage your existing applications, modernize on your own timeline, and work faster with new, cloud-native applications.

Red Hat OpenShift Online

Quickly build, deploy, and scale in the public cloud

On-demand access to OpenShift to manage containerized applications

Delivered as a hosted service and supported by Red Hat

Deploy up to 8 services for free

Red Hat OpenShift Dedicated

Professionally managed, enterprise-grade Kubernetes

Red Hat OpenShift Dedicated is a container application platform hosted on Amazon Web Services (AWS) or Google Cloud Platform and managed by Red Hat. It allows application development teams to quickly build, deploy, and scale traditional and cloud-native applications. OpenShift Dedicated is built on Red Hat Enterprise Linux®, Docker container technology, and Kubernetes for orchestration and management. It securely connects to your datacenter so you can implement a flexible, hybrid cloud IT strategy with minimal infrastructure and operating expenses.

Private, high-availability OpenShift clusters hosted on Amazon Web Services

Delivered as a hosted service and supported by Red Hat

OpenShift.io

Red Hat OpenShift.io is an end-to-end development environment for planning, creating, and deploying hybrid cloud services in less time, with better results. OpenShift.io extends the already powerful developer features of Red Hat OpenShift Online, a cloud-based container platform that enables developers to build, deploy, host, and scale container-based applications. OpenShift.io is open source and incorporates many projects including fabric8, Eclipse Che, OpenJDK, PCP, WildFly Swarm, Eclipse Vert.x, Spring Boot, and of course OpenShift.

Installation of OpenShift Origin locally

In this module, let us install OpenShift Origin locally, deploy a sample application on it, and bring the application up at the resulting endpoint URL.

Prerequisites for installing OpenShift Origin locally

One Master

Base OS: Fedora 21, CentOS 7.3, RHEL 7.3 or RHEL 7.4 with the "Minimal" installation option and the latest packages from the Extras channel, or RHEL Atomic Host 7.4.5 or later.

Minimum 4 vCPUs (more are strongly recommended).

Minimum 8 GB RAM (additional memory is strongly recommended, especially if etcd is co-located on masters).

One Node

Base OS: Fedora 21, CentOS 7.3, RHEL 7.3 or RHEL 7.4 with the "Minimal" installation option and the latest packages from the Extras channel, or RHEL Atomic Host 7.4.5 or later.

Minimum 4 vCPUs (more are strongly recommended).

Minimum 8 GB RAM (additional memory is strongly recommended, especially if etcd is co-located on masters).

Steps to install 

Take two fresh machines, one as the Master and one as the Node. In this module we are using two CentOS 7 machines.

Once the prerequisites are satisfied, perform a yum update on both the Master and the Node.

In Master

[root@localhost ~]# yum update -y

In Node

[root@localhost ~]# yum update -y

Once completed, set the hostname on both machines.

In Master

 [root@localhost ~]# hostnamectl set-hostname openshift.zippyops.com

In Node

[root@localhost ~]# hostnamectl set-hostname osnode1.zippyops.com

A static IP for each machine is preferable; otherwise, the auto-assigned IP (obtainable through ifconfig) can be used, but be careful, as a dynamically assigned IP may change.
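For reference, the current address can be checked with either the classic tool the text mentions or its modern equivalent (ifconfig comes from the net-tools package, which is installed later in this guide):

ifconfig        # classic tool for listing interfaces and IPs
ip addr show    # modern equivalent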

Next, open /etc/hosts and add the host entries on both the Master and the Node:

[root@openshift ~]# vi /etc/hosts

In both Master and Node:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.179 openshift.zippyops.com   master

192.168.1.180 osnode1.zippyops.com     node1

Once completed, run the following command to check whether the installed kernel version is up to date.

In both Master and Node

rpm -q kernel --last | head -n 1 | awk '{print $1}' ; echo kernel-$(uname -r)

Ensure that the latest installed kernel version and the running kernel version are the same.

[root@openshift ~]# rpm -q kernel --last | head -n 1 | awk '{print $1}' ; echo kernel-$(uname -r)

kernel-3.10.0-957.5.1.el7.x86_64

kernel-3.10.0-957.5.1.el7.x86_64

[root@openshift ~]#


If they are not the same, reboot the machine with the reboot command so that both versions match.
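As a small convenience, the same comparison can be scripted so that the machine reboots only when the versions differ; a minimal sketch built from the command above:

latest=$(rpm -q kernel --last | head -n 1 | awk '{print $1}')   # newest installed kernel package
running="kernel-$(uname -r)"                                    # kernel currently booted
[ "$latest" != "$running" ] && reboot                           # reboot only on a mismatch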

Next, install the packages that are needed across the rest of the installation.

In both Master and Node 

[root@openshift ~]# yum install -y wget git nano net-tools docker bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct openssl-devel httpd-tools NetworkManager python-cryptography python2-pip python-devel python-passlib  java-1.8.0-openjdk-headless "@Development Tools"

[root@osnode1 ~]# yum install -y wget git nano net-tools docker bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct openssl-devel httpd-tools NetworkManager python-cryptography python2-pip python-devel python-passlib  java-1.8.0-openjdk-headless "@Development Tools"

Once installed, perform the following commands to add the EPEL repository (leaving it disabled by default) and create an OpenShift repository.

On both Master and Node:

rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo

vim /etc/yum.repos.d/Openshift.repo

[openshift]

name=Centos-Openshift

baseurl=http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin39/

gpgcheck=0

enabled=1
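As a non-interactive alternative to editing with vim, the same repo file can be written in one step with a heredoc:

cat > /etc/yum.repos.d/Openshift.repo <<'EOF'
[openshift]
name=Centos-Openshift
baseurl=http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin39/
gpgcheck=0
enabled=1
EOF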

Once completed, run the following commands to start and enable NetworkManager and Docker.

On both Master and Node:

systemctl start NetworkManager

systemctl enable NetworkManager

systemctl status NetworkManager

● NetworkManager.service - Network Manager

   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)

   Active: active (running) since Fri 2019-03-01 10:56:56 IST; 4h 55min ago

     Docs: man:NetworkManager(8)

 Main PID: 4204 (NetworkManager)

    Tasks: 3

   Memory: 40.0K

   CGroup: /system.slice/NetworkManager.service

           └─4204 /usr/sbin/NetworkManager --no-daemon

Do the same for Docker:

systemctl start docker

systemctl enable docker

systemctl status docker                                                                         

  ● docker.service - Docker Application Container Engine

   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)

  Drop-In: /etc/systemd/system/docker.service.d

           └─custom.conf

   Active: active (running) since Fri 2019-03-01 10:57:56 IST; 5h 2min ago

     Docs: http://docs.docker.com

 Main PID: 4893 (dockerd-current)

    Tasks: 103

   Memory: 56.9M

   CGroup: /system.slice/docker.service

           ├─ 4893 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/li...

We are going to deploy OpenShift with the help of Ansible. For that, we need an Ansible version between 2.6.0 and 2.6.6, as the then-current Ansible 2.7.7 has glitches that cause errors while running the playbooks.

On the Master alone, install the specific Ansible version 2.6.6:

[root@openshift ~]# sudo easy_install pip

[root@openshift ~]# sudo pip install 'ansible==2.6.6'
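Before proceeding, confirm that the pinned version took effect:

[root@openshift ~]# ansible --version   # the first line should read: ansible 2.6.6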

Red Hat publishes the official openshift-ansible installer on GitHub; clone it to your local machine:

[root@openshift ~]# git clone https://github.com/openshift/openshift-ansible.git

As we are going to install the OpenShift 3.9 edition, perform the following command to switch from the master branch to the release-3.9 branch:

cd openshift-ansible && git fetch && git checkout release-3.9

Then verify the active branch:

[root@openshift openshift-ansible]# git branch

  master

* release-3.9

[root@openshift openshift-ansible]#

Ensure that all of these commands are executed on the Master only.

Now connect the Node with the Master through SSH.

On both Master and Node:

[root@openshift openshift-ansible]# ssh-keygen -f ~/.ssh/id_rsa -N ''

After running ssh-keygen, copy the id_rsa.pub key to each host with the following command:

[root@openshift openshift-ansible]# for host in openshift.zippyops.com osnode1.zippyops.com ; do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; done

Once the key is added, run ssh root@hostname to check whether the connection is established without a password:

[root@openshift openshift-ansible]# ssh root@osnode1.zippyops.com

Last login: Fri Mar  1 15:20:30 2019 from 192.168.1.3

[root@osnode1 ~]#

Then perform the same check from the Node:

[root@osnode1 ~]# ssh root@openshift.zippyops.com

The authenticity of host 'openshift.zippyops.com (192.168.1.179)' can't be established.

ECDSA key fingerprint is SHA256:pZy/mGryhC4p1OwJNAztX+Vx39MIPxQlwNnaSSMU6H4.

ECDSA key fingerprint is MD5:87:67:f9:15:57:9b:1d:78:21:fd:2f:72:72:3e:f1:cc.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'openshift.zippyops.com' (ECDSA) to the list of known hosts.

root@openshift.zippyops.com's password:

Last login: Fri Mar  1 15:21:21 2019 from 192.168.1.3

[root@openshift ~]#

The Master and Node are now connected through SSH. Return to the Master and move to the openshift-ansible directory.

The remaining steps are all performed on the Master only.

Now comes the most important part: writing the inventory file, on the basis of which ansible-playbook identifies the hosts.

Move to the openshift-ansible directory you cloned from Git:

[root@openshift ~]# cd openshift-ansible/

Create a file called inventory.ini:

[root@openshift openshift-ansible]# vi inventory.ini

Paste the following contents into the inventory file.

Note: Replace the IP addresses and hostnames with your own Master and Node IP addresses and hostnames.

[OSEv3:children]

masters

nodes

etcd

[masters]

192.168.1.179 openshift_ip=192.168.1.179


[etcd]

192.168.1.179 openshift_ip=192.168.1.179

[nodes]

192.168.1.179 openshift_ip=192.168.1.179 openshift_schedulable=true openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

192.168.1.180 openshift_ip=192.168.1.180 openshift_schedulable=true openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

[OSEv3:vars]

debug_level=4

ansible_ssh_user=root

openshift_ip_check=False

enable_excluders=False

enable_docker_excluder=False

openshift_enable_service_catalog=False

ansible_service_broker_install=False


containerized=True

os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'

openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability


openshift_node_kubelet_args={'pods-per-core': ['10']}

openshift_clock_enabled=true


deployment_type=origin

openshift_deployment_type=origin

openshift_release=v3.9.0

openshift_pkg_version=-3.9.0

openshift_image_tag=v3.9.0

openshift_service_catalog_image_version=v3.9.0

template_service_broker_image_version=v3.9.0

osm_use_cockpit=true

Once completed, set permissions on the file:

[root@openshift openshift-ansible]# chmod 777 inventory.ini
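Before running any playbook, it is worth confirming that Ansible can reach every host in the inventory with a standard ad-hoc ping:

[root@openshift openshift-ansible]# ansible -i inventory.ini all -m ping   # each host should answer with "pong"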


Now, there is a problem in the YAML of one of the playbooks. To solve the error, navigate to the os_firewall role's tasks:

[root@openshift openshift-ansible]# cd roles/

[root@openshift roles]# cd os_firewall

[root@openshift os_firewall]# ls

defaults  README.md  tasks

[root@openshift os_firewall]# cd tasks

[root@openshift tasks]# ls

firewalld.yml  iptables.yml  main.yml

[root@openshift tasks]#

Here, open the iptables.yml file:

[root@openshift tasks]# vi iptables.yml

The file will look like this. The issue is that the two firewalld tasks at the top should be placed at the bottom of the file:

---

- name: Ensure firewalld service is not enabled

  systemd:

    name: firewalld

    state: stopped

    enabled: no

    masked: yes

  register: task_result

  failed_when:

    - task_result is failed

    - ('could not' not in task_result.msg|lower)

- name: Wait 10 seconds after disabling firewalld

  pause:

    seconds: 10

  when: task_result is changed

- name: Install iptables packages

  package:

    name: "{{ item }}"

    state: present

  with_items:

    - iptables

    - iptables-services

  when: not r_os_firewall_is_atomic | bool

  register: result

  until: result is succeeded

- name: Start and enable iptables service

  systemd:

    name: iptables

    state: started

    enabled: yes

    masked: no

    daemon_reload: yes

  register: result

  delegate_to: "{{item}}"

  run_once: true

  with_items: "{{ ansible_play_batch }}"

- name: need to pause here, otherwise, the iptables service starting can sometimes cause ssh to  fail

  pause:

    seconds: 10

  when: result is changed

After rearranging, it should look like this:

---

- name: Install iptables packages

  package:

    name: "{{ item }}"

    state: present

  with_items:

    - iptables

    - iptables-services

  when: not r_os_firewall_is_atomic | bool

  register: result

  until: result is succeeded

- name: Start and enable iptables service

  systemd:

    name: iptables

    state: started

    enabled: yes

    masked: no

    daemon_reload: yes

  register: result

  delegate_to: "{{item}}"

  run_once: true

  with_items: "{{ ansible_play_batch }}"

- name: need to pause here otherwise, the iptables service starting can sometimes cause ssh to fail

  pause:

    seconds: 10

  when: result is changed

- name: Ensure firewalld service is not enabled

  systemd:

    name: firewalld

    state: stopped

    enabled: no

    masked: yes

  register: task_result

  failed_when:

    - task_result is failed

    - ('could not' not in task_result.msg|lower)

- name: Wait 10 seconds after disabling firewalld

  pause:

    seconds: 10

  when: task_result is changed

This is because, if the firewalld service is disabled first, the iptables service cannot be loaded and the playbook pauses partway through. To make the playbook run smoothly, place the firewalld-disabling tasks at the bottom of the YAML file and save it.
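Later, once the playbooks have run, the intended end state can be sanity-checked with systemctl:

systemctl is-active iptables     # expected: active
systemctl is-enabled firewalld   # expected: masked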

Now all the required preparation has been completed.

Go back to the openshift-ansible directory where your inventory file lies, and run the following command to install the prerequisites on both Master and Node:

[root@openshift openshift-ansible]# ansible-playbook -i inventory.ini playbooks/prerequisites.yml

Once the playbook has run successfully, provide the following command to deploy the cluster:

[root@openshift openshift-ansible]# ansible-playbook -i inventory.ini playbooks/deploy_cluster.yml

The cluster is deployed once the playbook runs successfully.
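From the Master, the cluster can be verified with the oc client, which the playbooks install and which is authenticated as the cluster administrator by default on the master:

[root@openshift ~]# oc get nodes                   # both hosts should report STATUS Ready
[root@openshift ~]# oc get pods --all-namespaces   # core pods (router, registry, etc.) should be Running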

Now the user can access the console through:

https://<master-hostname-or-IP>:8443

Example:

https://openshift.zippyops.com:8443 (or) https://192.168.1.179:8443

Opening the URL shows the OpenShift web console, but the user cannot log in yet, because we have not yet created a user.

To create a user, perform the following command:

[root@openshift openshift-ansible]# htpasswd /etc/origin/master/htpasswd zippyops

New password:

Re-type new password:

Adding password for user zippyops

[root@openshift openshift-ansible]#

Here I am creating a user called "zippyops"; the user can pick any desired username and password.

Now that the user has been created, go back to the console and provide the username and password.
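Optionally, the new user can be granted cluster-wide administrator rights so it can see all projects; this assumes the cluster is using the htpasswd file above as its identity provider:

[root@openshift ~]# oc adm policy add-cluster-role-to-user cluster-admin zippyops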

The console then looks like this, and there are several environments in which to deploy our application.

To deploy an application, the user needs their source files in a Git repository that OpenShift can pull from.

Click on Create Project, then provide the name of the project and its display name, and click Create.


The user can now see the project created there; to configure it, click on it.


Now click on Browse Catalog to choose the platform on which to deploy an application.


For now, we are going to deploy a PHP application, so click on the PHP catalog entry.


Now click on Next.



Now, this is our sample PHP application.

Copy its Git repository URL.


Paste it into the Git Repository field and click on Create.


Click on Close.


Click on Overview.



Click on the project, and the user can see that the application has been built and one pod is running.

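The same flow can also be reproduced entirely from the CLI; a sketch, where <repo-url> is the Git URL copied above and the project and application names are placeholders:

oc login https://openshift.zippyops.com:8443 -u zippyops   # log in as the htpasswd user
oc new-project <project-name>                              # create a project
oc new-app php~<repo-url>                                  # S2I build from the PHP source repo
oc logs -f bc/<app-name>                                   # follow the build output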


Now, to access the application through its URL, go to the CLI on the Master and add a host entry for the route:

[root@openshift ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.179 openshift.zippyops.com master

192.168.1.180 osnode1.zippyops.com node1

192.168.1.179 helloworld-demo.osmaster1.zippyops.com

And now, by clicking on the endpoint URL, the user can see the application.


The user can also view the application by opening the allotted IP with port 8080 attached.
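The exact route hostname to put into /etc/hosts can be read back from the cluster with a standard oc command; the project name is whatever you created above:

oc get routes -n <project-name>   # the HOST/PORT column shows the endpoint hostname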



Try these sample repos from Git for the different types of languages that can be deployed on OpenShift.

For .NET, try this repo and customize it according to your choice:

https://github.com/redhat-developer/s2i-dotnetcore-ex.git

Now our application gets built.


The user can view the build progress.

Thus the build is successful and the application is pushed to one of the pods.


Thus, after creating the host entries, the user can view the application through the URL.


A sample Python application

https://github.com/sclorg/django-ex.git
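As an illustration, the same Python sample can be deployed from the CLI; a sketch that assumes the default 'python' builder imagestream is present:

oc new-app python~https://github.com/sclorg/django-ex.git   # S2I build of the Django sample
oc expose service django-ex                                 # create a route for external access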


Thus the Python application is pushed to the pod.


After the host entries, the endpoint URL will display the application



A sample Ruby application


Thus the Ruby application is pushed to the pod.


Once the host entries are in place, the application can be viewed from the endpoint URL.

A sample Perl application

Thus the application is pushed to the pod and is running successfully.


Once the host entries are finished, click on the endpoint URL to view the application.


Sample Node.js application 

Thus the build is initiated


Thus the application is running in a pod.


Once the host entries are finished, click on the endpoint URL to view the application.
