Pas Apicella

Information on VMware Tanzu: build, run and manage apps on any cloud

How to Become a Kubernetes Admin from the Comfort of Your vSphere

Tue, 2020-10-27 17:18

My talk at VMworld 2020 with Olive Power can be found at the link below.

Talk Details

In this session, we will walk through the integration of VMware vSphere and Kubernetes, and how this union of technologies can fundamentally change how virtual infrastructure and operational engineers view the management of Kubernetes platforms. We will demonstrate the capability of vSphere to host Kubernetes clusters internally, allocate capacity to those clusters, and monitor them side by side with virtual machines (VMs). We will talk about how extended vSphere functionality eases the transition of enterprises to running yet another platform (Kubernetes) by treating all managed endpoints—be they VMs, Kubernetes clusters, or pods—as one platform. We want to demonstrate that platforms for running modern applications can be facilitated through the intuitive interface of vSphere and its ecosystem of automation tooling.

https://www.vmworld.com/en/video-library/search.html#text=%22KUB2038%22&year=2020

Categories: Fusion Middleware

java-cfenv : A library for accessing Cloud Foundry Services on the new Tanzu Application Service for Kubernetes

Wed, 2020-09-02 19:19

The Spring Cloud Connectors library has been with us since the launch of Cloud Foundry itself back in 2011. This library creates the required Spring beans from the bound VCAP_SERVICES environment variable of a pushed Cloud Foundry application, for example to connect to databases. The Java buildpack then replaces the bean definitions you had in your application with those created by the connector library through a feature called ‘auto-reconfiguration’.

Auto-reconfiguration is great for getting started. However, it is not so great when you want more control, for example changing the size of the connection pool associated with a DataSource.

With the upcoming Tanzu Application Service for Kubernetes, the original Cloud Foundry buildpacks are replaced with the new Tanzu Buildpacks, which are based on the Cloud Native Buildpacks CNCF Sandbox project. As a result, auto-reconfiguration is no longer included in the Java cloud native buildpacks, which means auto-configuration for the backing services is no longer available.

So is there another option for this? The answer is "Java CFEnv". This library provides a simple API for retrieving credentials from the JSON strings contained inside the VCAP_SERVICES environment variable.

https://github.com/pivotal-cf/java-cfenv



So if you are after exactly how it worked previously, all you need to do is add this Maven dependency to your project as shown below.

  
<dependency>
<groupId>io.pivotal.cfenv</groupId>
<artifactId>java-cfenv-boot</artifactId>
</dependency>

Of course, this new library is much more flexible than that. The class CfEnv is the entry point to the API for accessing Cloud Foundry environment variables, and you are free to use the Spring Expression Language to invoke methods on the bean of type CfEnv to set properties, among other things.
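As a rough sketch of the programmatic API (based on the java-cfenv documentation rather than on the sample application, and using a made-up service name), looking up credentials with CfEnv can look something like this:

import io.pivotal.cfenv.core.CfCredentials;
import io.pivotal.cfenv.core.CfEnv;

public class CfEnvExample {

    public static void main(String[] args) {
        // Parses the VCAP_SERVICES environment variable of the running application
        CfEnv cfEnv = new CfEnv();

        // Look up a bound service instance by name ("pas-mysql" is a hypothetical service name)
        CfCredentials credentials = cfEnv.findCredentialsByName("pas-mysql");

        // Standard credential fields exposed by the API
        String uri = credentials.getUri();
        String username = credentials.getUsername();
        String password = credentials.getPassword();

        System.out.println("Connecting to " + uri + " as " + username);
    }
}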

For more information, read the full blog post linked below.

https://spring.io/blog/2019/02/15/introducing-java-cfenv-a-new-library-for-accessing-cloud-foundry-services

Finally this Spring Boot application is an example of using this new library with an application deployed to the new Tanzu Application Service for Kubernetes.

https://github.com/papicella/spring-book-service


More Information

1. Introducing java-cfenv: A new library for accessing Cloud Foundry Services

https://spring.io/blog/2019/02/15/introducing-java-cfenv-a-new-library-for-accessing-cloud-foundry-services

2. Java CFEnv GitHub Repo

https://github.com/pivotal-cf/java-cfenv#pushing-your-application-to-cloud-foundry

Categories: Fusion Middleware

Configure a MySQL Marketplace service for the new Tanzu Application Service on Kubernetes using Container Services Manager for VMware Tanzu

Thu, 2020-08-06 00:35
The following post shows how to configure a MySQL service in the new Tanzu Application Service BETA version 0.3.0. For instructions on how to install the Container Services Manager for VMware Tanzu (KSM), see the post below.

http://www.clue2solve.io/tanzu/2020/07/14/install-ksm-and-configure-the-cf-marketplace.html
Steps
It's assumed you have already installed KSM into your Kubernetes cluster as shown below. If not, please refer to the documentation to get this done first.


$ kubectl get all -n ksm
NAME READY STATUS RESTARTS AGE
pod/ksm-chartmuseum-78d5d5bfb-2ggdg 1/1 Running 0 15d
pod/ksm-ksm-broker-6db696894c-blvpp 1/1 Running 0 15d
pod/ksm-ksm-broker-6db696894c-mnshg 1/1 Running 0 15d
pod/ksm-ksm-daemon-587b6fd549-cc7sv 1/1 Running 1 15d
pod/ksm-ksm-daemon-587b6fd549-fgqx5 1/1 Running 1 15d
pod/ksm-postgresql-0 1/1 Running 0 15d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ksm-chartmuseum ClusterIP 10.100.200.107 <none> 8080/TCP 15d
service/ksm-ksm-broker LoadBalancer 10.100.200.229 10.195.93.188 80:30086/TCP 15d
service/ksm-ksm-daemon LoadBalancer 10.100.200.222 10.195.93.179 80:31410/TCP 15d
service/ksm-postgresql ClusterIP 10.100.200.213 <none> 5432/TCP 15d
service/ksm-postgresql-headless ClusterIP None <none> 5432/TCP 15d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ksm-chartmuseum 1/1 1 1 15d
deployment.apps/ksm-ksm-broker 2/2 2 2 15d
deployment.apps/ksm-ksm-daemon 2/2 2 2 15d

NAME DESIRED CURRENT READY AGE
replicaset.apps/ksm-chartmuseum-78d5d5bfb 1 1 1 15d
replicaset.apps/ksm-ksm-broker-6db696894c 2 2 2 15d
replicaset.apps/ksm-ksm-broker-8645dfcf98 0 0 0 15d
replicaset.apps/ksm-ksm-daemon-587b6fd549 2 2 2 15d

NAME READY AGE
statefulset.apps/ksm-postgresql 1/1 15d

1. Let's start by getting the broker IP address, which, when KSM is installed using a LoadBalancer service type, can be retrieved as shown below.

$ kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}'
10.195.93.188

2. Upgrade your Helm release by running the following using the IP address from above

$ export BROKER_IP=$(kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
$ helm upgrade ksm ./ksm -n ksm --reuse-values \
            --set cf.brokerUrl="http://$BROKER_IP" \
            --set cf.brokerName=KSM \
            --set cf.apiAddress="https://api.system.run.haas-210.pez.pivotal.io" \
            --set cf.username="admin" \
            --set cf.password="admin-password"

3. Next we configure the ksm CLI. You can download the CLI from here

configure-ksm-cli.sh

export KSM_IP=$(kubectl get service ksm-ksm-daemon -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
export KSM_TARGET=http://$KSM_IP:$(kubectl get svc ksm-ksm-daemon -n ksm -o=jsonpath='{@.spec.ports[0].port}')
export KSM_USER=admin
export KSM_PASSWORD=$(kubectl get secret -n ksm ksm-ksm-daemon -o=jsonpath='{@.data.SECURITY_USER_PASSWORD}' | base64 --decode)

4. Verify ksm CLI is configured correctly

$ ksm version
Client Version [0.10.80]
Server Version [0.10.80]

5. Create a YAML file for the KSM service account and ClusterRoleBinding using the following YAML:

ksm-sa.yml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ksm-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ksm-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: ksm-admin
    namespace: kube-system

Apply as follows

$ kubectl apply -f ksm-sa.yml

6. You need a cluster credential file to register and set the default Kubernetes cluster. That is done as follows

cluster-creds.sh

export kube_config="/Users/papicella/.kube/config"

cluster=`grep current $kube_config|sed "s/ //g"|cut -d ":" -f 2`

echo "Using cluster $cluster"

export server=`grep -B 2 "name: $cluster" $kube_config \
  |grep server|sed "s/ //g"|sed "s/^[^:]*://g"`

export certificate=`grep -B 2 "name: $cluster" $kube_config \
  |grep certificate|sed "s/ //g"|sed "s/.*://"`

export secret_name=$(kubectl get serviceaccount ksm-admin \
   --namespace=kube-system -o jsonpath='{.secrets[0].name}')

export secret_val=$(kubectl --namespace=kube-system get secret $secret_name \
   -o jsonpath='{.data.token}')

export secret_val=$(echo ${secret_val} | base64 --decode)

cat > cluster-creds.yaml << EOF
token: ${secret_val}
server: ${server}
caData: ${certificate}
EOF

echo ""
echo "ready to roll!!!!"
echo ""

Before running this script, it's best to make sure you have targeted the correct K8s cluster. You can run the following command to verify that

$ kubectl config current-context
tas4k8s
 
7. Now that we have a "cluster-creds.yaml" file, we can go ahead and register the Kubernetes cluster with KSM as follows

$ ksm cluster register ksm-svcs ./cluster-creds.yaml
$ ksm cluster set-default ksm-svcs

Verify as follows:

$ ksm cluster list
CLUSTER NAME IP ADDRESS                                      DEFAULT
ksm-svcs    https://tas4k8s.run.haas-210.pez.pivotal.io:8443 true

8. Now we can go ahead and create a Marketplace offering for MySQL. To do that we will use the Bitnami MySQL chart as shown below

$ git clone https://github.com/bitnami/charts.git
$ cd ./charts/bitnami/mysql

Create bind.yaml as follows. This is required so that a service binding from Tanzu Application Service injects the JSON we are expecting at bind time.

$ cat bind.yaml
template: |
  local filterfunc(j) = std.length(std.findSubstr("mysql", j.name)) > 0;
  local s1 = std.filter(filterfunc, $.services);
  {
    hostname: s1[0].status.loadBalancer.ingress[0].ip,
    name: s1[0].name,
    jdbcUrl: "jdbc:mysql://" + self.hostname + "/my_db?user=" + self.username + "&password=" + self.password + "&useSSL=false",
    uri: "mysql://" + self.username + ":" + self.password + "@" + self.hostname + ":" + self.port + "/my_db?reconnect=true",
    password: $.secrets[0].data['mysql-root-password'],
    port: 3306,
    username: "root"
  }

$ helm package .
$ cd ..
$ ksm offer save ./mysql ./mysql/mysql-6.14.7.tgz

Verify MySQL is now part of the offer list as follows
  
$ ksm offer list
MARKETPLACE NAME INCLUDED CHARTS VERSION PLANS
rabbitmq rabbitmq 6.18.1 [persistent ephemeral]
mysql mysql 6.14.7 [default]

9. Now we need to log in as an ADMIN user

Verify you are logged in as the admin user using the CF CLI:

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           admin
org:            system
space:          development

10. At this point you can see the KSM service broker registered with TAS4K8s as follows

$ cf service-brokers
Getting service brokers as admin...

name   url
KSM    http://10.195.93.188

11. Enable access to the MySQL service as follows

$ cf enable-service-access mysql

Verify it's enabled:

$ cf service-access
Getting service access as admin...
broker: KSM
   service    plan         access   orgs
   mysql      default      all
   rabbitmq   ephemeral    all
   rabbitmq   persistent   all

12. At this point it's best to log out of admin and log back in as a user that is not admin

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           pas
org:            apples-org
space:          development

13. Create a MySQL service as follows. I am passing in some JSON to indicate that my K8s cluster supports a LoadBalancer service type, so use that as part of the creation of the service.

$ cf create-service mysql default pas-mysql -c '{"service":{"type":"LoadBalancer"}}'

14. Check that the service has been created correctly; it will take a few minutes

$ cf services
Getting services in org apples-org / space development as pas...

name        service    plan        bound apps          last operation     broker   upgrade available
pas-mysql   mysql      default     my-springboot-app   create succeeded   KSM      no

15. Your service is created in its own K8s namespace, but that may not be the case at some point.
$ kubectl get all -n ksm-2e526124-11a3-4d38-966c-b3ffd45471d7
NAME READY STATUS RESTARTS AGE
pod/k-wqo5mubw-mysql-master-0 1/1 Running 0 15d
pod/k-wqo5mubw-mysql-slave-0 1/1 Running 0 15d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/k-wqo5mubw-mysql LoadBalancer 10.100.200.12 10.195.93.192 3306:30563/TCP 15d
service/k-wqo5mubw-mysql-slave LoadBalancer 10.100.200.130 10.195.93.191 3306:31982/TCP 15d

NAME READY AGE
statefulset.apps/k-wqo5mubw-mysql-master 1/1 15d
statefulset.apps/k-wqo5mubw-mysql-slave 1/1 15d

16. At this point we can test the new MySQL service we created using a Spring Boot application.

The following GitHub repo can be used for that. Ignore the steps to create a service, as you have already done that.
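As a minimal, hypothetical sketch (not taken from the repo referenced above), assuming the application includes spring-boot-starter-jdbc, a MySQL driver, and the java-cfenv-boot dependency covered in an earlier post so that spring.datasource is populated from the binding's jdbcUrl credential, a smoke-test endpoint could look something like this:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MySQLPingController {

    private final JdbcTemplate jdbcTemplate;

    @Autowired
    public MySQLPingController(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Simple smoke test: runs SELECT 1 against the bound pas-mysql service
    @GetMapping("/ping")
    public String ping() {
        Integer result = jdbcTemplate.queryForObject("SELECT 1", Integer.class);
        return "MySQL responded with: " + result;
    }
}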




Finally, to define service plans, see the link below

More Information
Container Services Manager(KSM)

Tanzu Application Service for Kubernetes

Categories: Fusion Middleware

Using CNCF Sandbox Project Strimzi for Kafka Clusters on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

Sun, 2020-08-02 22:45
Strimzi, a CNCF sandbox project, provides a way to run an Apache Kafka cluster on Kubernetes in various deployment configurations. In this post we will take a look at how to get this running on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) and consume the Kafka cluster from a Spring Boot application.

If you have a K8s cluster, that's all you need to follow along with this example. I am using VMware Tanzu Kubernetes Grid Integrated Edition (TKGI), but you can use any K8s cluster you have, such as GKE, AKS, EKS, etc.

Steps

1. Installing Strimzi is pretty straightforward, so we can do that as follows. I am using the namespace "kafka", which needs to be created prior to running this command.

kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

2. Verify that the operator was installed correctly and we have a running POD as shown below
  
$ kubectl get pods -n kafka
NAME READY STATUS RESTARTS AGE
strimzi-cluster-operator-6c9d899778-4mdtg 1/1 Running 0 6d22h

3. Next let's ensure we have a default storage class for the cluster as shown below.

$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   47d

4. Now at this point we are ready to create a Kafka cluster. For this example we will create a three-node cluster defined in YAML as follows.

kafka-persistent-MULTI_NODE.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: apples-kafka-cluster
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      external:
        type: loadbalancer
        tls: false
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.5"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

A few things to note:
  • We have enabled access to the cluster using the type LoadBalancer, which means your K8s cluster needs to support that type
  • We need to create dynamic persistence claims in the cluster, so ensure #3 above is in place
  • We have disabled TLS given this is a demo
5. Create the Kafka cluster as shown below ensuring we target the namespace "kafka"

$ kubectl apply -f kafka-persistent-MULTI_NODE.yaml -n kafka

6. Now we can view the status/creation of our cluster one of two ways as shown below. You will need to wait a few minutes for everything to start up.

Option 1:
  
$ kubectl get Kafka -n kafka
NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS
apples-kafka-cluster 3 3

Option 2:
  
$ kubectl get all -n kafka
NAME READY STATUS RESTARTS AGE
pod/apples-kafka-cluster-entity-operator-58685b8fbd-r4wxc 3/3 Running 0 6d21h
pod/apples-kafka-cluster-kafka-0 2/2 Running 0 6d21h
pod/apples-kafka-cluster-kafka-1 2/2 Running 0 6d21h
pod/apples-kafka-cluster-kafka-2 2/2 Running 0 6d21h
pod/apples-kafka-cluster-zookeeper-0 1/1 Running 0 6d21h
pod/apples-kafka-cluster-zookeeper-1 1/1 Running 0 6d21h
pod/apples-kafka-cluster-zookeeper-2 1/1 Running 0 6d21h
pod/strimzi-cluster-operator-6c9d899778-4mdtg 1/1 Running 0 6d23h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/apples-kafka-cluster-kafka-0 LoadBalancer 10.100.200.90 10.195.93.200 9094:30362/TCP 6d21h
service/apples-kafka-cluster-kafka-1 LoadBalancer 10.100.200.179 10.195.93.197 9094:32022/TCP 6d21h
service/apples-kafka-cluster-kafka-2 LoadBalancer 10.100.200.155 10.195.93.201 9094:32277/TCP 6d21h
service/apples-kafka-cluster-kafka-bootstrap ClusterIP 10.100.200.77 <none> 9091/TCP,9092/TCP,9093/TCP 6d21h
service/apples-kafka-cluster-kafka-brokers ClusterIP None <none> 9091/TCP,9092/TCP,9093/TCP 6d21h
service/apples-kafka-cluster-kafka-external-bootstrap LoadBalancer 10.100.200.58 10.195.93.196 9094:30735/TCP 6d21h
service/apples-kafka-cluster-zookeeper-client ClusterIP 10.100.200.22 <none> 2181/TCP 6d21h
service/apples-kafka-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 6d21h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/apples-kafka-cluster-entity-operator 1/1 1 1 6d21h
deployment.apps/strimzi-cluster-operator 1/1 1 1 6d23h

NAME DESIRED CURRENT READY AGE
replicaset.apps/apples-kafka-cluster-entity-operator-58685b8fbd 1 1 1 6d21h
replicaset.apps/strimzi-cluster-operator-6c9d899778 1 1 1 6d23h

NAME READY AGE
statefulset.apps/apples-kafka-cluster-kafka 3/3 6d21h
statefulset.apps/apples-kafka-cluster-zookeeper 3/3 6d21h

7. Our entry point into the cluster is a service of type LoadBalancer, which we asked for as per our Kafka cluster YAML config. To find the IP address we can run a command as follows using the cluster name from above.

$ kubectl get service -n kafka apples-kafka-cluster-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
10.195.93.196

Note: Make a note of this IP address as we will need it shortly

8. Let's create a Kafka topic using YAML as follows. In this YAML we ensure we are using the namespace "kafka".

create-kafka-topic.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: apples-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: apples-kafka-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824


9. Create a Kafka topic as shown below.

$ kubectl apply -f create-kafka-topic.yaml

10. We can view the Kafka topics as shown below.
  
$ kubectl get KafkaTopic -n kafka
NAME PARTITIONS REPLICATION FACTOR
apples-topic 1 1

11. Now at this point we are ready to send some messages to our topic "apples-topic" as well as consume messages. To do that we are going to use Spring Boot applications, in fact two of them, which exist on GitHub.


Download or clone those onto your file system. 

12. With both downloaded, you will need to set spring.kafka.bootstrap-servers to the IP address we retrieved in step 7 above. That needs to be done in both downloaded/cloned repos. The file to edit in both repos is as follows.

File: src/main/resources/application.yml 

Example:

spring:
  kafka:
    bootstrap-servers: IP-ADDRESS:9094

Note: Make sure you do this for both downloaded repo application.yml files

13. Now let's run the producer and consumer Spring Boot applications using a command as follows in separate terminal windows. One will use port 8080 while the other uses port 8081.

$ ./mvnw spring-boot:run

Producer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-producer$ ./mvnw spring-boot:run

...
2020-08-03 11:41:46.742  INFO 34025 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-08-03 11:41:46.754  INFO 34025 --- [           main] a.a.t.k.DemoKafkaProducerApplication     : Started DemoKafkaProducerApplication in 1.775 seconds (JVM running for 2.102)

Consumer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-consumer$ ./mvnw spring-boot:run

...
2020-08-03 11:43:53.423  INFO 34056 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8081 (http) with context path ''
2020-08-03 11:43:53.440  INFO 34056 --- [           main] a.a.t.k.DemoKafkaConsumerApplication     : Started DemoKafkaConsumerApplication in 1.666 seconds (JVM running for 1.936)

14. Start by opening up the Producer UI by navigating to http://localhost:8080/



15. For now, don't add any messages; just open up the Consumer UI by navigating to http://localhost:8081/



Note: This application will automatically refresh the page every 2 seconds to show which messages have been sent to the Kafka Topic

16. Return to the Producer UI http://localhost:8080/ and add two messages using whatever text you like as shown below.


17. Return to the Consumer UI http://localhost:8081/ to verify the two messages sent to the Kafka topic have been consumed



18. Both these Spring Boot applications are using "Spring for Apache Kafka".


Both Spring Boot applications use an application.yml to bootstrap access to the Kafka cluster.

The Producer Spring Boot application is using a KafkaTemplate to send messages to our Kafka topic as shown below.
  
@Controller
@Slf4j
public class TopicMessageController {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public TopicMessageController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    final private String topicName = "apples-topic";

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessageAddSuccess", "N");
        return "home";
    }

    @PostMapping("/addentry")
    public String addNewTopicMessage (@RequestParam(value="message") String message, Model model){

        kafkaTemplate.send(topicName, message);

        log.info("Sent single message: " + message);
        model.addAttribute("message", message);
        model.addAttribute("topicMessageAddSuccess", "Y");

        return "home";
    }
}

The Consumer Spring Boot application is configured with a KafkaListener as shown below
  
@Controller
@Slf4j
public class TopicConsumerController {

    private static ArrayList<String> topicMessages = new ArrayList<String>();

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessages", topicMessages);
        model.addAttribute("topicMessagesCount", topicMessages.size());

        return "home";
    }

    @KafkaListener(topics = "apples-topic")
    public void listen(String message) {
        log.info("Received Message: " + message);
        topicMessages.add(message);
    }
}

In this post we did not set up any client authentication against the cluster for the producer or consumer, given this was just a demo.





More Information

Spring for Apache Kafka

CNCF Sandbox projects

Strimzi
Categories: Fusion Middleware

Stumbled upon this today : Lens | The Kubernetes IDE

Thu, 2020-07-16 21:57
Lens is the only IDE you’ll ever need to take control of your Kubernetes clusters. It is a standalone application for MacOS, Windows and Linux operating systems. It is open source and free.

I installed it today and was impressed. Below are some screenshots of the new Tanzu Application Service running on my Kubernetes cluster using the Lens IDE. Simply point it to your kube config for the cluster you wish to examine.

On Mac OSX it's installed as follows

$ brew cask install lens






More Information

https://github.com/lensapp/lens


Categories: Fusion Middleware

Spring Boot Data Elasticsearch using Elastic Cloud on Kubernetes (ECK) on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

Mon, 2020-07-13 22:50
VMware Tanzu Kubernetes Grid Integrated Edition (formerly known as VMware Enterprise PKS) is a Kubernetes-based container solution with advanced networking, a private container registry, and life cycle management.

In this post I show how to get Elastic Cloud on Kubernetes (ECK) up and running on VMware Tanzu Kubernetes Grid Integrated Edition and how to access it from a Spring Boot application using Spring Data Elasticsearch.

With ECK, users now have a seamless way of deploying, managing, and operating the Elastic Stack on Kubernetes.

If you have a K8s cluster that's all you need to follow along.

Steps

1. Let's install ECK on our cluster; we do that as follows

Note: There is a 1.1 version which is the latest, but I am installing a slightly older one here

$ kubectl apply -f https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml

2. Make sure the operator is up and running as shown below
  
$ kubectl get all -n elastic-system
NAME READY STATUS RESTARTS AGE
pod/elastic-operator-0 1/1 Running 0 26d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elastic-webhook-server ClusterIP 10.100.200.55 <none> 443/TCP 26d

NAME READY AGE
statefulset.apps/elastic-operator 1/1 26d

3. We can also see a CRD for Elasticsearch as shown below.

elasticsearches.elasticsearch.k8s.elastic.co
  
$ kubectl get crd
NAME CREATED AT
apmservers.apm.k8s.elastic.co 2020-06-17T00:37:32Z
clusterlogsinks.pksapi.io 2020-06-16T23:04:43Z
clustermetricsinks.pksapi.io 2020-06-16T23:04:44Z
elasticsearches.elasticsearch.k8s.elastic.co 2020-06-17T00:37:33Z
kibanas.kibana.k8s.elastic.co 2020-06-17T00:37:34Z
loadbalancers.vmware.com 2020-06-16T22:51:52Z
logsinks.pksapi.io 2020-06-16T23:04:43Z
metricsinks.pksapi.io 2020-06-16T23:04:44Z
nsxerrors.nsx.vmware.com 2020-06-16T22:51:52Z
nsxlbmonitors.vmware.com 2020-06-16T22:51:52Z
nsxlocks.nsx.vmware.com 2020-06-16T22:51:51Z

4. We are now ready to create our first Elasticsearch cluster. To do that, create a YAML file as shown below

create-elastic-cluster-from-operator.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.0
  http:
    service:
      spec:
        type: LoadBalancer # default is ClusterIP
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: default
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false

From the YML a few things to note:

  • We are creating two pods for our Elasticsearch cluster
  • We are using a K8s LoadBalancer to expose access to the cluster through HTTP
  • We are using version 7.7.0 but this is not the latest Elasticsearch version
  • We have disabled the use of TLS given this is just a demo
5. Apply that as shown below.

$ kubectl apply -f create-elastic-cluster-from-operator.yaml

6. After about a minute we should have our Elasticsearch cluster running. The following commands show that
  
$ kubectl get elasticsearch
NAME HEALTH NODES VERSION PHASE AGE
quickstart green 2 7.7.0 Ready 47h

$ kubectl get all -n default
NAME READY STATUS RESTARTS AGE
pod/quickstart-es-default-0 1/1 Running 0 47h
pod/quickstart-es-default-1 1/1 Running 0 47h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.200.1 <none> 443/TCP 27d
service/quickstart-es-default ClusterIP None <none> <none> 47h
service/quickstart-es-http LoadBalancer 10.100.200.92 10.195.93.137 9200:30590/TCP 47h

NAME READY AGE
statefulset.apps/quickstart-es-default 2/2 47h

7. Let's deploy a Kibana instance. To do that create a YML as shown below

create-kibana.yaml

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-sample
spec:
  version: 7.7.0
  count: 1
  elasticsearchRef:
    name: quickstart
    namespace: default
  http:
    service:
      spec:
        type: LoadBalancer # default is ClusterIP

8. Apply that as shown below.

$ kubectl apply -f create-kibana.yaml

9. To verify everything is up and running we can run a command as follows
  
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/kibana-sample-kb-f8fcb88d5-jdzh5 1/1 Running 0 2d
pod/quickstart-es-default-0 1/1 Running 0 2d
pod/quickstart-es-default-1 1/1 Running 0 2d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kibana-sample-kb-http LoadBalancer 10.100.200.46 10.195.93.174 5601:32459/TCP 2d
service/kubernetes ClusterIP 10.100.200.1 <none> 443/TCP 27d
service/quickstart-es-default ClusterIP None <none> <none> 2d
service/quickstart-es-http LoadBalancer 10.100.200.92 10.195.93.137 9200:30590/TCP 2d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kibana-sample-kb 1/1 1 1 2d

NAME DESIRED CURRENT READY AGE
replicaset.apps/kibana-sample-kb-f8fcb88d5 1 1 1 2d

NAME READY AGE
statefulset.apps/quickstart-es-default 2/2 2d

10. To access our cluster we will need to obtain the following, which we can do using a script as follows. This was tested on Mac OSX

What do we need?

  • Elasticsearch password
  • IP address of the LoadBalancer service we created


access.sh

export PASSWORD=`kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'`
export IP=`kubectl get svc quickstart-es-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`

echo ""
echo $IP
echo ""

curl -u "elastic:$PASSWORD" "http://$IP:9200"

echo ""

curl -u "elastic:$PASSWORD" "http://$IP:9200/_cat/health?v"

Output:

10.195.93.137

{
  "name" : "quickstart-es-default-1",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "Bbpb7Pu7SmaQaCmEY2Er8g",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

.....

11. Ideally I would load some data into the Elasticsearch cluster BUT let's do that as part of a sample application using "Spring Data Elasticsearch". Clone the demo project as shown below.

$ git clone https://github.com/papicella/boot-elastic-demo.git
Cloning into 'boot-elastic-demo'...
remote: Enumerating objects: 36, done.
remote: Counting objects: 100% (36/36), done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 36 (delta 1), reused 36 (delta 1), pack-reused 0
Unpacking objects: 100% (36/36), done.

12. Edit "./src/main/resources/application.yml" with your details for the Elasticsearch cluster above.

spring:
  elasticsearch:
    rest:
      username: elastic
      password: {PASSWORD}
      uris: http://{IP}:9200

13. Package as follows

$ ./mvnw -DskipTests package

14. Run as follows

$ ./mvnw spring-boot:run

....
2020-07-14 11:10:11.947  INFO 76260 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-07-14 11:10:11.954  INFO 76260 --- [           main] c.e.e.demo.BootElasticDemoApplication    : Started BootElasticDemoApplication in 2.495 seconds (JVM running for 2.778)
....

15. Access application using "http://localhost:8080/"




16. If we look at our code we will see the data was loaded into the Elasticsearch cluster using a Java class called "LoadData.java". Ideally data should already exist in the cluster, but for demo purposes we load some data as part of the Spring Boot application and clear the data prior to each application run given it's just a demo.

2020-07-14 11:12:33.109  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='OjThSnMBLjyTRl7lZsDL', make='holden', model='commodore', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:33.584  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='OzThSnMBLjyTRl7laMCo', make='holden', model='astra', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.189  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PDThSnMBLjyTRl7lasCC', make='nissan', model='skyline', bodystyles=[BodyStyle{type='4-door'}]}
2020-07-14 11:12:34.744  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PTThSnMBLjyTRl7lbMDe', make='nissan', model='pathfinder', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:35.227  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='PjThSnMBLjyTRl7lb8AL', make='ford', model='falcon', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:36.737  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QDThSnMBLjyTRl7lcMDu', make='ford', model='territory', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.266  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QTThSnMBLjyTRl7ldsDU', make='toyota', model='camry', bodystyles=[BodyStyle{type='4-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:37.777  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QjThSnMBLjyTRl7leMDk', make='toyota', model='corolla', bodystyles=[BodyStyle{type='2-door'}, BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.285  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='QzThSnMBLjyTRl7lesDj', make='kia', model='sorento', bodystyles=[BodyStyle{type='5-door'}]}
2020-07-14 11:12:38.800  INFO 76277 --- [           main] com.example.elastic.demo.LoadData        : Pre loading Car{id='RDThSnMBLjyTRl7lfMDg', make='kia', model='sportage', bodystyles=[BodyStyle{type='4-door'}]}

LoadData.java
  
package com.example.elastic.demo;

import com.example.elastic.demo.indices.BodyStyle;
import com.example.elastic.demo.indices.Car;
import com.example.elastic.demo.repo.CarRepository;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import lombok.extern.slf4j.Slf4j;

import static java.util.Arrays.asList;

@Configuration
@Slf4j
public class LoadData {

    @Bean
    public CommandLineRunner initElasticsearchData(CarRepository carRepository) {
        return args -> {
            carRepository.deleteAll();
            log.info("Pre loading " + carRepository.save(new Car("holden", "commodore", asList(new BodyStyle("2-door"), new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("holden", "astra", asList(new BodyStyle("2-door"), new BodyStyle("4-door")))));
            log.info("Pre loading " + carRepository.save(new Car("nissan", "skyline", asList(new BodyStyle("4-door")))));
            log.info("Pre loading " + carRepository.save(new Car("nissan", "pathfinder", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("ford", "falcon", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("ford", "territory", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("toyota", "camry", asList(new BodyStyle("4-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("toyota", "corolla", asList(new BodyStyle("2-door"), new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("kia", "sorento", asList(new BodyStyle("5-door")))));
            log.info("Pre loading " + carRepository.save(new Car("kia", "sportage", asList(new BodyStyle("4-door")))));
        };
    }
}

17. Our CarRepository interface is defined as follows

CarRepository.java
  
package com.example.elastic.demo.repo;

import com.example.elastic.demo.indices.Car;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface CarRepository extends ElasticsearchRepository<Car, String> {

    Page<Car> findByMakeContaining(String make, Pageable page);

}
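As a minimal hypothetical sketch (not code from the demo repo) of how a derived query method like findByMakeContaining could be called, a controller might look like this:

import java.util.List;

import org.springframework.data.domain.PageRequest;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import com.example.elastic.demo.indices.Car;
import com.example.elastic.demo.repo.CarRepository;

@RestController
public class CarSearchController {

    private final CarRepository carRepository;

    public CarSearchController(CarRepository carRepository) {
        this.carRepository = carRepository;
    }

    // Returns the first page of cars whose make contains the given text,
    // e.g. GET /search/hold would match the "holden" documents loaded earlier
    @GetMapping("/search/{make}")
    public List<Car> searchByMake(@PathVariable String make) {
        return carRepository.findByMakeContaining(make, PageRequest.of(0, 10)).getContent();
    }
}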

18. So let's also view this data using "curl" and Kibana as shown below.

curl -X GET -u "elastic:{PASSWORD}" "http://{IP}:9200/vehicle/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} },
  "sort": [
    { "_id": "asc" }
  ]
}
'

Output:

{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "OjThSnMBLjyTRl7lZsDL",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "holden",
          "model" : "commodore",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "OjThSnMBLjyTRl7lZsDL"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "OzThSnMBLjyTRl7laMCo",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "holden",
          "model" : "astra",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "OzThSnMBLjyTRl7laMCo"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PDThSnMBLjyTRl7lasCC",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "nissan",
          "model" : "skyline",
          "bodystyles" : [
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "PDThSnMBLjyTRl7lasCC"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PTThSnMBLjyTRl7lbMDe",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "nissan",
          "model" : "pathfinder",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "PTThSnMBLjyTRl7lbMDe"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "PjThSnMBLjyTRl7lb8AL",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "ford",
          "model" : "falcon",
          "bodystyles" : [
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "PjThSnMBLjyTRl7lb8AL"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QDThSnMBLjyTRl7lcMDu",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "ford",
          "model" : "territory",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QDThSnMBLjyTRl7lcMDu"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QTThSnMBLjyTRl7ldsDU",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "toyota",
          "model" : "camry",
          "bodystyles" : [
            {
              "type" : "4-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QTThSnMBLjyTRl7ldsDU"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QjThSnMBLjyTRl7leMDk",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "toyota",
          "model" : "corolla",
          "bodystyles" : [
            {
              "type" : "2-door"
            },
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QjThSnMBLjyTRl7leMDk"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "QzThSnMBLjyTRl7lesDj",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "kia",
          "model" : "sorento",
          "bodystyles" : [
            {
              "type" : "5-door"
            }
          ]
        },
        "sort" : [
          "QzThSnMBLjyTRl7lesDj"
        ]
      },
      {
        "_index" : "vehicle",
        "_type" : "_doc",
        "_id" : "RDThSnMBLjyTRl7lfMDg",
        "_score" : null,
        "_source" : {
          "_class" : "com.example.elastic.demo.indices.Car",
          "make" : "kia",
          "model" : "sportage",
          "bodystyles" : [
            {
              "type" : "4-door"
            }
          ]
        },
        "sort" : [
          "RDThSnMBLjyTRl7lfMDg"
        ]
      }
    ]
  }
}

Kibana

Obtain the Kibana HTTP IP as shown below and log in using the username "elastic" and the password we obtained previously.

$ kubectl get svc kibana-sample-kb-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
10.195.93.174




Finally, maybe you want to deploy the application to Kubernetes. To do that, take a look at the Cloud Native Buildpacks CNCF project and/or Tanzu Build Service to turn your code into a container image stored in a registry.



More Information

Spring Data Elasticsearch
https://spring.io/projects/spring-data-elasticsearch

VMware Tanzu Kubernetes Grid Integrated Edition Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid-Integrated-Edition/index.html
Categories: Fusion Middleware

Multi-Factor Authentication (MFA) using OKTA with Spring Boot and Tanzu Application Service

Thu, 2020-07-09 23:22
Recently I was asked to build a quick demo showing how to use MFA with OKTA and a Spring Boot application running on Tanzu Application Service. Here is the demo application plus how to set up and run this yourself.

Steps

1. Clone the existing repo as shown below

$ git clone https://github.com/papicella/mfa-boot-fsi
Cloning into 'mfa-boot-fsi'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 2), reused 47 (delta 2), pack-reused 0
Unpacking objects: 100% (47/47), done.



2. Create a free account at https://developer.okta.com/

Once created, log in to the dev account. Your account URL will look something like the following

https://dev-{ID}-admin.okta.com



3. You will need your default authorization server settings. From the top menu in the developer.okta.com dashboard, go to API -> Authorization Servers and click on the default server


You will need this data shortly. The image above is an example; those details won't work for your own setup.

4. From the top menu, go to Applications and click the Add Application button. Click on the Web button and click Next. Name your app whatever you like. I named mine "pas-okta-springapp". Otherwise the default settings are fine. Click Done.

From this screenshot you can see that the defaults refer to localhost, which for DEV purposes is fine.


You will need the Client ID and Client secret from the final screen so make a note of these

5. Edit the "./mfa-boot-fsi/src/main/resources/application-DEV.yml" to include the details as per #3 and #4 above.

You will need to edit

  • issuer
  • client-id
  • client-secret


application-DEV.yaml

spring:
  security:
    oauth2:
      client:
        provider:
          okta:
            user-name-attribute: email

okta:
  oauth2:
    issuer: https://dev-213269.okta.com/oauth2/default
    redirect-uri: /authorization-code/callback
    scopes:
      - profile
      - email
      - openid
    client-id: ....
    client-secret: ....

6. In order to pick up this application-DEV.yaml we have to set the spring profile correctly. That can be done using a JVM property as follows.

-Dspring.profiles.active=DEV

In my example I use IntelliJ IDEA so I set it on the run configurations dialog as follows



7. Finally let's set up MFA, which we do by switching to the classic UI as shown below



8. Click on Security -> Multifactor and set up another multifactor policy. In the screenshot below I select "Email Policy" and make sure it is "Required" along with the default policy



9. Now run the application making sure you set the spring active profile to DEV.

...
2020-07-10 13:34:57.528  INFO 55990 --- [  restartedMain] pas.apa.apj.mfa.demo.DemoApplication     : The following profiles are active: DEV
...

10. Navigate to http://localhost:8080/



11. Click on the "Login" button

Verify you are taken to the default OKTA login page


12. Once logged in the second factor should then ask for a verification code to be sent to your email. Press the "Send me the code" button




13. Once you enter the code sent to your email you will be granted access to the application endpoints
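The demo application's endpoints aren't listed here, but as a rough hypothetical sketch of what a protected endpoint can look like once the Okta Spring Boot starter has authenticated the user (the class name and mapping below are made up):

import org.springframework.security.core.annotation.AuthenticationPrincipal;
import org.springframework.security.oauth2.core.oidc.user.OidcUser;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SecuredController {

    // Only reachable after the Okta login (and the email MFA step) has completed;
    // the OidcUser principal carries the claims returned by the default authorization server
    @GetMapping("/profile")
    public String profile(@AuthenticationPrincipal OidcUser user) {
        return "Welcome " + user.getEmail() + ", you have been authenticated with MFA";
    }
}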







14. Finally to deploy the application to Tanzu Application Service perform these steps below

- Create a manifest.yaml as follows

---
applications:
- name: pas-okta-boot-app 
  memory: 1024M
  buildpack: https://github.com/cloudfoundry/java-buildpack.git#v4.16
  instances: 2
  path: ./target/demo-0.0.1-SNAPSHOT.jar
  env:
    JBP_CONFIG_OPEN_JDK_JRE: '{ jre: { version: 11.+}}'

- Package the application as follows

$ ./mvnw -DskipTests package

- In the DEV OKTA console create a second application which will be for the deployed application on Tanzu Application Service and refers to its FQDN rather than localhost as shown below



- Edit "application.yml" to ensure you set the following correctly for the new "Application" we created above.

You will need to edit

  • issuer
  • client-id
  • client-secret
- Push the application using "cf push -f manifest.yaml"

$ cf apps
Getting apps in org papicella-org / space apple as papicella@pivotal.io...
OK

name                requested state   instances   memory   disk   urls
pas-okta-boot-app   started           1/1         1G       1G     pas-okta-boot-app.cfapps.io


That's It!!!!

Categories: Fusion Middleware

GitHub Actions to deploy Spring Boot application to Tanzu Application Service for Kubernetes

Wed, 2020-06-17 21:28
In this demo I show how to deploy a simple Spring Boot application using GitHub Actions onto Tanzu Application Service for Kubernetes (TAS4K8s).

Steps

Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below.
  
$ kapp list
Target cluster 'https://35.189.13.31' (nodes: gke-tanzu-gke-lab-f67-np-f67b23a0f590-abbca04e-5sqc, 8+)

Apps in namespace 'default'

Name Namespaces Lcs Lca
certmanager-cluster-issuer (cluster) true 8d
externaldns (cluster),external-dns true 8d
harbor-cert harbor true 8d
tas (cluster),cf-blobstore,cf-db,cf-system, false 8d
cf-workloads,cf-workloads-staging,istio-system,kpack,
metacontroller
tas4k8s-cert cf-system true 8d

Lcs: Last Change Successful
Lca: Last Change Age

5 apps

Succeeded

The demo exists on GitHub at the following URL. To follow along, simply use your own GitHub repository and make the changes detailed below. The example below is for a Spring Boot application, so your YAML file for the action would differ for non-Java applications, but there are many starter templates to choose from for other programming languages.

https://github.com/papicella/github-boot-demo



GitHub Actions help you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks, called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub

1. Create a folder at the root of your project source code as follows

$ mkdir ".github/workflows"

2. In ".github/workflows" folder, add a .yml or .yaml file for your workflow. For example, ".github/workflows/maven.yml"

3. Use the "Workflow syntax for GitHub Actions" reference documentation to choose events to trigger an action, add actions, and customize your workflow. In this example the YML "maven.yml" looks as follows.

maven.yml
  
name: Java CI with Maven and CD with CF CLI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 11.0.5
      uses: actions/setup-java@v1
      with:
        java-version: 11.0.5
    - name: Build with Maven
      run: mvn -B package --file pom.xml
    - name: push to TAS4K8s
      env:
        CF_USERNAME: ${{ secrets.CF_USERNAME }}
        CF_PASSWORD: ${{ secrets.CF_PASSWORD }}
      run: |
        curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
        ./cf api https://api.tas.lab.pasapples.me --skip-ssl-validation
        ./cf auth $CF_USERNAME $CF_PASSWORD
        ./cf target -o apples-org -s development
        ./cf push -f manifest.yaml

A few things to note about the workflow syntax for the GitHub Action above:

  • We are using a Maven action sample which will fire on a push or pull request on the master branch
  • We are using JDK 11 rather than Java 8
  • Three steps exist here
    • Setup JDK
    • Maven Build/Package
    • CF CLI push to TAS4K8s using the built JAR artifact from the Maven build
  • We download the CF CLI into the Ubuntu image
  • We have masked the username and password using Secrets

4. Next in the project root add a manifest YAML for deployment to TAS4K8s

- Add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application

---
applications:
  - name: github-TAS4K8s-boot-demo
    memory: 1024M
    instances: 1
    path: ./target/demo-0.0.1-SNAPSHOT.jar

5. Now we need to add Secrets to the GitHub repo which are referenced in our "maven.yml" file. In our case they are as follows.
  • CF_USERNAME 
  • CF_PASSWORD
In your GitHub repository click on the "Settings" tab, then on the left-hand navigation bar click on "Secrets" and define the username and password for your TAS4K8s instance as shown below



6. At this point that is all we need to test our GitHub Action. Here in IntelliJ IDEA I issue a commit/push to trigger the GitHub action



7. If all went well using "Actions" tab in your GitHub repo will show you the status and logs as follows






8. Finally our application will be deployed to TAS4K8s as shown below and we can invoke it using HTTPie or CURL for example
  
$ cf apps
Getting apps in org apples-org / space development as pas...
OK

name requested state instances memory disk urls
github-TAS4K8s-boot-demo started 1/1 1G 1G github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
my-springboot-app started 1/1 1G 1G my-springboot-app.apps.tas.lab.pasapples.me
test-node-app started 1/1 1G 1G test-node-app.apps.tas.lab.pasapples.me

$ cf app github-TAS4K8s-boot-demo
Showing health and status for app github-TAS4K8s-boot-demo in org apples-org / space development as pas...

name: github-TAS4K8s-boot-demo
requested state: started
isolation segment: placeholder
routes: github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
last uploaded: Thu 18 Jun 12:03:19 AEST 2020
stack:
buildpacks:

type: web
instances: 1/1
memory usage: 1024M
state since cpu memory disk details
#0 running 2020-06-18T02:03:32Z 0.2% 136.5M of 1G 0 of 1G

$ http http://github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Thu, 18 Jun 2020 02:07:39 GMT
server: istio-envoy
x-envoy-upstream-service-time: 141

Thu Jun 18 02:07:39 GMT 2020



More Information

Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/

GitHub Actions
https://github.com/features/actions

GitHub Marketplace - Actions
https://github.com/marketplace?type=actions
Categories: Fusion Middleware

Deploying a Spring Boot application to Tanzu Application Service for Kubernetes using GitLab

Mon, 2020-06-15 20:44
In this demo I show how to deploy a simple Spring Boot application using a GitLab pipeline onto Tanzu Application Service for Kubernetes (TAS4K8s).

Steps

Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below
  
$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)

Apps in namespace 'default'

Name Namespaces Lcs Lca
cf (cluster),build-service,cf-blobstore,cf-db, true 10d
cf-system,cf-workloads,cf-workloads-staging,
istio-system,kpack,metacontroller

Lcs: Last Change Successful
Lca: Last Change Age

1 apps

Succeeded

Ensure you have GitLab running. In this example it's installed on a Kubernetes cluster but it doesn't have to be. All that matters here is that GitLab can access the API endpoint of your TAS4K8s install
  
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
gitlab gitlab 2 2020-05-15 13:22:15.470219 +1000 AEST deployed gitlab-3.3.4 12.10.5

1. First let's create a basic Spring Boot application with a simple RESTful endpoint as shown below. It's best to use the Spring Initializr to create this application. I simply used the web and lombok dependencies as shown below.

Note: Make sure you select java version 11.

Spring Initializer Web Interface


Using built in Spring Initializer in IntelliJ IDEA.


Here is my simple RESTful controller, which simply outputs today's date.
  
package com.example.demo;

import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Date;

@RestController
@Slf4j
public class FrontEnd {

    @GetMapping("/")
    public String index () {
        log.info("An INFO Message");
        return new Date().toString();
    }
}

2. Create an empty project in GitLab using the name "gitlab-TAS4K8s-boot-demo"



3. At this point we add our project files from step #1 above into the empty GitLab project repository. We do that as follows.

$ cd "existing project folder from step #1"
$ git init
$ git remote add origin http://gitlab.ci.run.haas-236.pez.pivotal.io/root/gitlab-tas4k8s-boot-demo.git
$ git add .
$ git commit -m "Initial commit"
$ git push -u origin master

Once done we now have our GitLab project repository with the files we created as part of the project setup


4. It's always worth running the code locally just to make sure it's working so if you like you can do that as follows

RUN:

$ ./mvnw spring-boot:run

CURL:

$ curl http://localhost:8080/
Tue Jun 16 10:46:26 AEST 2020

HTTPie:

papicella@papicella:~$
papicella@papicella:~$
papicella@papicella:~$ http :8080/
HTTP/1.1 200
Connection: keep-alive
Content-Length: 29
Content-Type: text/plain;charset=UTF-8
Date: Tue, 16 Jun 2020 00:46:40 GMT
Keep-Alive: timeout=60

Tue Jun 16 10:46:40 AEST 2020

5. Our GitLab project has no pipelines defined, so let's create one as follows in the project root directory using the default pipeline name ".gitlab-ci.yml"

image: openjdk:11-jdk

stages:
  - build
  - deploy

build:
  stage: build
  script: ./mvnw package
  artifacts:
    paths:
      - target/demo-0.0.1-SNAPSHOT.jar

production:
  stage: deploy
  script:
  - curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
  - ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation
  - ./cf auth $CF_USERNAME $CF_PASSWORD
  - ./cf target -o apples-org -s development
  - ./cf push -f manifest.yaml
  only:
  - master


Note: We have not defined any tests in our pipeline which we should do but we haven't written any in this example.

6. For this pipeline to work we will need to do the following

- Add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application

---
applications:
  - name: gitlab-TAS4K8s-boot-demo
    memory: 1024M
    instances: 1
    path: ./target/demo-0.0.1-SNAPSHOT.jar

- Alter the API endpoint to match your TAS4K8s endpoint

- ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation

- Alter the target to use your ORG and SPACE within TAS4K8s.

- ./cf target -o apples-org -s development

The following command shows what your CF CLI is currently targeting, so you can make sure you edit the pipeline with the correct details
  
$ cf target
api endpoint: https://api.system.run.haas-236.pez.pivotal.io
api version: 2.150.0
user: pas
org: apples-org
space: development

7. For the ".gitlab-ci.yml" to work we need to define two ENV variables for our username and password. These are our TAS4K8s login credentials and are named as follows

  • CF_USERNAME 
  • CF_PASSWORD

To do that we need to navigate to "Project Settings -> CI/CD -> Variables" and fill in the appropriate details as shown below
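
If you prefer to script this rather than click through the UI, the GitLab project variables API can create these for you. The sketch below is an example only; the project ID, the access token and the password are placeholders you would need to replace with your own values.

$ export GITLAB_HOST=gitlab.ci.run.haas-236.pez.pivotal.io
$ export PROJECT_ID=<your-project-id>

$ curl --request POST --header "PRIVATE-TOKEN: <your-access-token>" \
    --form "key=CF_USERNAME" --form "value=pas" \
    "http://$GITLAB_HOST/api/v4/projects/$PROJECT_ID/variables"

$ curl --request POST --header "PRIVATE-TOKEN: <your-access-token>" \
    --form "key=CF_PASSWORD" --form "value=<your-password>" --form "masked=true" \
    "http://$GITLAB_HOST/api/v4/projects/$PROJECT_ID/variables"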



8. Now let's add the two new files using git, add a commit message and push the changes

$ git add .gitlab-ci.yml
$ git add manifest.yaml
$ git commit -m "add pipeline configuration"
$ git push -u origin master

9. Navigate to GitLab UI "CI/CD -> Pipelines" and we should see our pipeline starting to run








10. If everything went well!!!



11. Finally our application will be deployed to TAS4K8s as shown below
  
$ cf apps
Getting apps in org apples-org / space development as pas...
OK

name requested state instances memory disk urls
gitlab-TAS4K8s-boot-demo started 1/1 1G 1G gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
gitlab-tas4k8s-demo started 1/1 1G 1G gitlab-tas4k8s-demo.apps.system.run.haas-236.pez.pivotal.io
test-node-app started 1/1 1G 1G test-node-app.apps.system.run.haas-236.pez.pivotal.io

$ cf app gitlab-TAS4K8s-boot-demo
Showing health and status for app gitlab-TAS4K8s-boot-demo in org apples-org / space development as pas...

name: gitlab-TAS4K8s-boot-demo
requested state: started
isolation segment: placeholder
routes: gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
last uploaded: Tue 16 Jun 11:29:03 AEST 2020
stack:
buildpacks:

type: web
instances: 1/1
memory usage: 1024M
state since cpu memory disk details
#0 running 2020-06-16T01:29:16Z 0.1% 118.2M of 1G 0 of 1G

12. Access it as follows.

$ http http://gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Tue, 16 Jun 2020 01:35:28 GMT
server: istio-envoy
x-envoy-upstream-service-time: 198

Tue Jun 16 01:35:28 GMT 2020

Of course if you wanted to create an API-style service using OpenAPI, you could use the source code at the repo below rather than the simple demo shown here.

https://github.com/papicella/spring-book-service



More Information

Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/

GitLab
https://about.gitlab.com/
Categories: Fusion Middleware

Installing a UI for Tanzu Application Service for Kubernetes

Thu, 2020-06-04 23:18
Having installed Tanzu Application Service for Kubernetes a few times, a UI is something I must have. In this post I show how to get Stratos deployed and running on Tanzu Application Service for Kubernetes (TAS4K8s) beta 0.2.0.

Steps

Note: It's assumed you have TAS4K8s deployed and running as per the output of "kapp" 

$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)

Apps in namespace 'default'

Name  Namespaces                                    Lcs   Lca
cf    (cluster),build-service,cf-blobstore,cf-db,   true  2h
      cf-system,cf-workloads,cf-workloads-staging,
      istio-system,kpack,metacontroller

Lcs: Last Change Successful
Lca: Last Change Age

1 apps

Succeeded

1. First let's create a namespace to install Stratos into.

$ kubectl create namespace console
namespace/console created

2. Using helm 3 install Stratos as shown below.

$ helm install my-console --namespace=console stratos/console --set console.service.type=LoadBalancer
NAME: my-console
LAST DEPLOYED: Fri Jun  5 13:18:22 2020
NAMESPACE: console
STATUS: deployed
REVISION: 1
TEST SUITE: None
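
Note: the chart reference "stratos/console" above assumes the Stratos Helm repository has already been added and updated. If it hasn't, that would look something like the following, with the repository URL as per the Stratos documentation.

$ helm repo add stratos https://cloudfoundry.github.io/stratos
$ helm repo update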

3. You can verify it installed correctly in a few ways as shown below

- Check using "helm ls -A"
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-console console 1 2020-06-05 13:18:22.785689 +1000 AEST deployed console-3.2.1 3.2.1
- Check everything in the namespace "console" is up and running
$ kubectl get all -n console
NAME READY STATUS RESTARTS AGE
pod/stratos-0 2/2 Running 0 34m
pod/stratos-config-init-1-mxqbw 0/1 Completed 0 34m
pod/stratos-db-7fc9b7b6b7-sp4lf 1/1 Running 0 34m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-console-mariadb ClusterIP 10.100.200.65 <none> 3306/TCP 34m
service/my-console-ui-ext LoadBalancer 10.100.200.216 10.195.75.164 443:32286/TCP 34m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/stratos-db 1/1 1 1 34m

NAME DESIRED CURRENT READY AGE
replicaset.apps/stratos-db-7fc9b7b6b7 1 1 1 34m

NAME READY AGE
statefulset.apps/stratos 1/1 34m

NAME COMPLETIONS DURATION AGE
job.batch/stratos-config-init-1 1/1 28s 34m
4. To invoke the UI run a script as follows.

Script:

export IP=`kubectl -n console get service my-console-ui-ext -ojsonpath='{.status.loadBalancer.ingress[0].ip}'`

echo ""
echo "Stratos URL: https://$IP:443"
echo ""

Output:

$ ./get-stratos-url.sh

Stratos URL: https://10.195.75.164:443

5. Invoking the URL above will take you to a screen as follows where you would select the "Local Admin" account



6. Set a password and click "Finish" button


7. At this point we need to get the API endpoint for our TAS4K8s install. The easiest way to get that is to run the following command while logged in with the CF CLI

$ cf api
api endpoint:   https://api.system.run.haas-236.pez.pivotal.io
api version:    2.150.0

8. Click on the "Register an Endpoint" + button as shown below


9. Select "Cloud Foundry" as the type you wish to register.

10. Enter details as shown below and click on "Register" button.
 


11. At this point you should connect to Cloud Foundry using your admin credentials for the TAS4K8s instance as shown below.


12. Once connected you're good to go and can start deploying some applications.




Categories: Fusion Middleware

Targeting specific namespaces with kubectl

Mon, 2020-06-01 00:45
A note for myself, given the kubectl CLI does not allow multiple namespaces to be specified in a single command

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get pod;'

OR (get all) if you want to see all resources

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get all;'
  
$ eval 'kubectl --namespace='{cf-system,kpack,istio-system}' get pod;'
NAME READY STATUS RESTARTS AGE
ccdb-migrate-995n7 0/2 Completed 1 3d23h
cf-api-clock-7595b76c78-94trp 2/2 Running 2 3d23h
cf-api-deployment-updater-758f646489-k5498 2/2 Running 2 3d23h
cf-api-kpack-watcher-6fb8f7b4bf-xh2mg 2/2 Running 0 3d23h
cf-api-server-5dc58fb9d-8d2nc 5/5 Running 5 3d23h
cf-api-server-5dc58fb9d-ghwkn 5/5 Running 4 3d23h
cf-api-worker-7fffdbcdc7-fqpnc 2/2 Running 2 3d23h
cfroutesync-75dff99567-kc8qt 2/2 Running 0 3d23h
eirini-5cddc6d89b-57dgc 2/2 Running 0 3d23h
fluentd-4fsp8 2/2 Running 2 3d23h
fluentd-5vfnv 2/2 Running 1 3d23h
fluentd-gq2kr 2/2 Running 2 3d23h
fluentd-hnjgm 2/2 Running 2 3d23h
fluentd-j6d5n 2/2 Running 1 3d23h
fluentd-wbzcj 2/2 Running 2 3d23h
log-cache-7fd48cd767-fj9k8 5/5 Running 5 3d23h
metric-proxy-695797b958-j7tns 2/2 Running 0 3d23h
uaa-67bd4bfb7d-v72v6 2/2 Running 2 3d23h
NAME READY STATUS RESTARTS AGE
kpack-controller-595b8c5fd-x4kgf 1/1 Running 0 3d23h
kpack-webhook-6fdffdf676-g8v9q 1/1 Running 0 3d23h
NAME READY STATUS RESTARTS AGE
istio-citadel-589c85d7dc-677fz 1/1 Running 0 3d23h
istio-galley-6c7b88477-fk9km 2/2 Running 0 3d23h
istio-ingressgateway-25g8s 2/2 Running 0 3d23h
istio-ingressgateway-49txj 2/2 Running 0 3d23h
istio-ingressgateway-9qsqj 2/2 Running 0 3d23h
istio-ingressgateway-dlbcr 2/2 Running 0 3d23h
istio-ingressgateway-jdn42 2/2 Running 0 3d23h
istio-ingressgateway-jnx2m 2/2 Running 0 3d23h
istio-pilot-767fc6d466-8bzt8 2/2 Running 0 3d23h
istio-policy-66f4f99b44-qhw92 2/2 Running 1 3d23h
istio-sidecar-injector-6985796b87-2hvxw 1/1 Running 0 3d23h
istio-telemetry-d6599c76f-ps6xd 2/2 Running 1 3d23h
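
If the brace-expansion trick above is hard to remember, a plain shell loop over the same namespaces gives an equivalent result, for example:

$ for ns in cf-system kpack istio-system; do echo "### namespace: $ns"; kubectl get pod -n $ns; done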
Categories: Fusion Middleware

Paketo Buildpacks - Cloud Native Buildpacks providing language runtime support for applications on Kubernetes or Cloud Foundry

Thu, 2020-05-07 05:10
Paketo Buildpacks are modular Buildpacks, written in Go. Paketo Buildpacks provide language runtime support for applications. They leverage the Cloud Native Buildpacks framework to make image builds easy, performant, and secure.

Paketo Buildpacks implement the Cloud Native Buildpacks specification, an emerging standard for building app container images. You can use Paketo Buildpacks with tools such as the CNB pack CLI, kpack, Tekton, and Skaffold, in addition to a number of cloud platforms.

Here is how simple they are to use.

Steps

1. First, to get started you need a few things installed. The most important are the Pack CLI and a Docker daemon up and running, to allow you to locally create OCI compliant images from your source code

Prerequisites:

    Pack CLI
    Docker

2. Verify pack is installed as follows

$ pack version
0.10.0+git-06d9983.build-259

3. Now in this example I am going to use the source code of a Spring Boot application of mine. The GitHub URL for that is as follows, so you can clone it if you want to follow along with this demo.

https://github.com/papicella/msa-apifirst

4. Build my OCI compliant image as follows.

$ pack build msa-apifirst-paketo -p ./msa-apifirst --builder gcr.io/paketo-buildpacks/builder:base
base: Pulling from paketo-buildpacks/builder
Digest: sha256:1bb775a178ed4c54246ab71f323d2a5af0e4b70c83b0dc84f974694b0221d636
Status: Image is up to date for gcr.io/paketo-buildpacks/builder:base
base-cnb: Pulling from paketo-buildpacks/run
Digest: sha256:d70bf0fe11d84277997c4a7da94b2867a90d6c0f55add4e19b7c565d5087206f
Status: Image is up to date for gcr.io/paketo-buildpacks/run:base-cnb
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1
[detector] paketo-buildpacks/executable-jar    1.2.2
[detector] paketo-buildpacks/apache-tomcat     1.1.2
[detector] paketo-buildpacks/dist-zip          1.2.2
[detector] paketo-buildpacks/spring-boot       1.5.2
===> ANALYZING
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:openssl-security-provider" from app image
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:security-providers-configurer" from app image

...

[builder] Paketo Maven Buildpack 1.2.1
[builder]     Set $BP_MAVEN_SETTINGS to configure the contents of a settings.xml file. Default .
[builder]     Set $BP_MAVEN_BUILD_ARGUMENTS to configure the arguments passed to the build system. Default -Dmaven.test.skip=true package.
[builder]     Set $BP_MAVEN_BUILT_MODULE to configure the module to find application artifact in. Default .
[builder]     Set $BP_MAVEN_BUILT_ARTIFACT to configure the built application artifact. Default target/*.[jw]ar.
[builder]     Creating cache directory /home/cnb/.m2
[builder]   Compiled Application: Reusing cached layer
[builder]   Removing source code
[builder]
[builder] Paketo Executable JAR Buildpack 1.2.2
[builder]   Process types:
[builder]     executable-jar: java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     task:           java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     web:            java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]
[builder] Paketo Spring Boot Buildpack 1.5.2
[builder]   Image labels:
[builder]     org.opencontainers.image.title
[builder]     org.opencontainers.image.version
[builder]     org.springframework.boot.spring-configuration-metadata.json
[builder]     org.springframework.boot.version
===> EXPORTING
[exporter] Reusing layer 'launcher'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Reusing layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Reusing 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (726b340b596b):
[exporter]       index.docker.io/library/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:application'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:cache'
[exporter] Reusing cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image msa-apifirst-paketo

5. Now let's run our application locally as shown below

$ docker run --rm -p 8080:8080 msa-apifirst-paketo
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=113348K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx423227K (Head Room: 0%, Loaded Class Count: 17598, Thread Count: 250, Total Memory: 1073741824)
Adding Security Providers to JVM

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.1.RELEASE)

2020-05-07 09:48:04.153  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Starting MsaApifirstApplication on 486f85c54667 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)
2020-05-07 09:48:04.160  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : No active profile set, falling back to default profiles: default

...

2020-05-07 09:48:15.515  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Started MsaApifirstApplication in 12.156 seconds (JVM running for 12.975)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.680  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=1, name=pas, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.682  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=2, name=lucia, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.684  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=3, name=lucas, status=inactive)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.688  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=4, name=siena, status=inactive)

6. Access the API endpoint using curl or HTTPie as shown below

$ http :8080/customers/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Thu, 07 May 2020 09:49:05 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://localhost:8080/customers/1"
        },
        "self": {
            "href": "http://localhost:8080/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}

It also has a Swagger UI endpoint as follows

http://localhost:8080/swagger-ui.html

7. Now you will see, as per below, that you have a locally built OCI compliant image

$ docker images | grep msa-apifirst-paketo
msa-apifirst-paketo                       latest              726b340b596b        40 years ago        286MB

8. Now you can push this OCI compliant image to a Container Registry. Here I am using Dockerhub

$ pack build pasapples/msa-apifirst-paketo:latest --publish --path ./msa-apifirst
cflinuxfs3: Pulling from cloudfoundry/cnb
Digest: sha256:30af1eb2c8a6f38f42d7305acb721493cd58b7f203705dc03a3f4b21f8439ce0
Status: Image is up to date for cloudfoundry/cnb:cflinuxfs3
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1

...

===> EXPORTING
[exporter] Adding layer 'launcher'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Adding layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Adding 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (sha256:097c7f67ac3dfc4e83d53c6b3e61ada8dd3d2c1baab2eb860945eba46814dba5):
[exporter]       index.docker.io/pasapples/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Adding cache layer 'paketo-buildpacks/maven:application'
[exporter] Adding cache layer 'paketo-buildpacks/maven:cache'
[exporter] Adding cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image pasapples/msa-apifirst-paketo:latest

Dockerhub showing pushed OCI compliant image


9. If you wanted to deploy your application to Kubernetes you could do that as follows.

$ kubectl create deployment msa-apifirst-paketo --image=pasapples/msa-apifirst-paketo
$ kubectl expose deployment msa-apifirst-paketo --type=LoadBalancer --port=8080
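
Once the Service has an external IP assigned you can hit the same endpoint we tested locally. This is a rough sketch only; substitute whatever external IP your cluster's load balancer hands out.

$ kubectl get service msa-apifirst-paketo
$ http http://<EXTERNAL-IP>:8080/customers/1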

10. Finally you can select from 3 different builders as per below. We used the "base" builder in our example above
  • gcr.io/paketo-buildpacks/builder:full-cf
  • gcr.io/paketo-buildpacks/builder:base
  • gcr.io/paketo-buildpacks/builder:tiny
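
Switching builders is just a matter of changing the --builder flag on the pack build command. As a sketch only, rebuilding with the full-cf builder from the list above would look like this.

$ pack build msa-apifirst-paketo -p ./msa-apifirst --builder gcr.io/paketo-buildpacks/builder:full-cf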

More Information

Paketo Buildpacks
https://paketo.io/
Categories: Fusion Middleware

Creating my first Tanzu Kubernetes Grid 1.0 workload cluster on AWS

Tue, 2020-05-05 04:15
With Tanzu Kubernetes Grid you can run the same K8s across data center, public cloud and edge for a consistent, secure experience for all development teams. To find out more, here is a step-by-step guide to getting this working on AWS, which is one of the first two supported IaaS platforms, the other being vSphere.

Steps

Before we get started we need to download a few bits and pieces all described here.

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-set-up-tkg.html

Once you have done that, make sure you have the tkg CLI as follows

$ tkg version
Client:
Version: v1.0.0
Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e

You will also need the following
  • kubectl is installed.
  • Docker is installed and running, if you are installing Tanzu Kubernetes Grid on Linux.
  • Docker Desktop is installed and running, if you are installing Tanzu Kubernetes Grid on Mac OS.
  • System time is synchronized with a Network Time Protocol (NTP) server
Once that is done follow this link for AWS pre-reqs and other downloads required

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws.html

1. Start by setting some AWS env variables for your account. Ensure you select a region supported by TKG; in my case I am using a US region

export AWS_ACCESS_KEY_ID=YYYY
export AWS_SECRET_ACCESS_KEY=ZZZZ
export AWS_REGION=us-east-1

2. Run the following clusterawsadm command to create a CloudFormation stack.

$ ./clusterawsadm alpha bootstrap create-stack
Attempting to create CloudFormation stack cluster-api-provider-aws-sigs-k8s-io

Following resources are in the stack:

Resource                  |Type                                                                                |Status
AWS::IAM::Group           |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE
AWS::IAM::InstanceProfile |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::InstanceProfile |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::InstanceProfile |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/control-plane.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/nodes.cluster-api-provider-aws.sigs.k8s.io         |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/controllers.cluster-api-provider-aws.sigs.k8s.io   |CREATE_COMPLETE
AWS::IAM::Role            |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::Role            |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::Role            |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::User            |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE

On AWS console you should see the stack created as follows


3. Ensure SSH key pair exists in your region as shown below

$ aws ec2 describe-key-pairs --key-name us-east-key
{
    "KeyPairs": [
        {
            "KeyFingerprint": "71:44:e3:f9:0e:93:1f:e7:1e:c4:ba:58:e8:65:92:3e:dc:e6:27:42",
            "KeyName": "us-east-key"
        }
    ]
}

4. Set Your AWS Credentials as Environment Variables for Use by Cluster API

$ export AWS_CREDENTIALS=$(aws iam create-access-key --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io --output json)

$ export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDENTIALS | jq .AccessKey.AccessKeyId -r)

$ export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDENTIALS | jq .AccessKey.SecretAccessKey -r)

$ export AWS_B64ENCODED_CREDENTIALS=$(./clusterawsadm alpha bootstrap encode-aws-credentials)

5. Set the correct AMI for your region.

List here: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/rn/VMware-Tanzu-Kubernetes-Grid-10-Release-Notes.html#amis

$ export AWS_AMI_ID=ami-0cdd7837e1fdd81f8

6. Deploy the Management Cluster to Amazon EC2 with the Installer Interface

$ tkg init --ui

Follow the docs link below to fill in the desired details; most of the defaults should work

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws-ui.html

Once complete:

$ ./tkg init --ui
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T091728980865562.log

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
Validating configuration...
web socket connection established
sending pending 2 logs to UI
Using infrastructure provider aws:v0.5.2
Generating cluster configuration...
Setting up bootstrapper...
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Start creating management cluster...
Installing providers on management cluster...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Waiting for the management cluster to get ready for move...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster pasaws-tkg-man-cluster as 'pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster'.

Management cluster created!


You can now create your first workload cluster by running the following:

  tkg create cluster [name] --kubernetes-version=[version] --plan=[plan]


In the AWS console EC2 instances page you will see a few VMs that represent the management cluster as shown below


7. Show the management cluster as follows

$ tkg get management-cluster
+--------------------------+-----------------------------------------------------+
| MANAGEMENT CLUSTER NAME  | CONTEXT NAME                                        |
+--------------------------+-----------------------------------------------------+
| pasaws-tkg-man-cluster * | pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster |
+--------------------------+-----------------------------------------------------+

8. You can connect to the management cluster as follows to look at what is running

$ kubectl config use-context pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster
Switched to context "pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster".

9. Deploy a Dev cluster with Multiple Worker Nodes as shown below. This should take about 10 minutes or so.

$ tkg create cluster apples-aws-tkg --plan=dev --worker-machine-count 2
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T101702293042678.log
Creating workload cluster 'apples-aws-tkg'...

Context set for workload cluster apples-aws-tkg as apples-aws-tkg-admin@apples-aws-tkg

Waiting for cluster nodes to be available...

Workload cluster 'apples-aws-tkg' created

In the AWS console EC2 instances page you will see a few more VMs that represent our new TKG workload cluster


10. View what workload clusters are under management and have been created

$ tkg get clusters
+----------------+-------------+
| NAME           | STATUS      |
+----------------+-------------+
| apples-aws-tkg | Provisioned |
+----------------+-------------+

11. To connect to the workload cluster we just created, use a set of commands as follows

$ tkg get credentials apples-aws-tkg
Credentials of workload cluster apples-aws-tkg have been saved
You can now access the cluster by switching the context to apples-aws-tkg-admin@apples-aws-tkg under /Users/papicella/.kube/config

$ kubectl config use-context apples-aws-tkg-admin@apples-aws-tkg
Switched to context "apples-aws-tkg-admin@apples-aws-tkg".

$ kubectl cluster-info
Kubernetes master is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443
KubeDNS is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The following link will also be helpful
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-tanzu-k8s-clusters-connect.html

12. View your cluster nodes as shown below
  
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-0-12.ec2.internal Ready <none> 6h24m v1.17.3+vmware.2
ip-10-0-0-143.ec2.internal Ready master 6h25m v1.17.3+vmware.2
ip-10-0-0-63.ec2.internal Ready <none> 6h24m v1.17.3+vmware.2

Now you're ready to deploy workloads into your TKG workload cluster and/or create as many clusters as you need; a simple smoke test is sketched below. For more information use the links that follow.
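
As a quick check that the new workload cluster is functioning, you could deploy and expose something trivial like nginx. This is only a sketch; it assumes the public nginx image is reachable from your nodes and that no additional pod security configuration is required in your cluster.

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --type=LoadBalancer --port=80
$ kubectl get service nginx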


More Information

VMware Tanzu Kubernetes Grid
https://tanzu.vmware.com/kubernetes-grid

VMware Tanzu Kubernetes Grid 1.0 Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-index.html


Categories: Fusion Middleware

Running Oracle 18c on a vSphere 7 using a Tanzu Kubernetes Grid Cluster

Sun, 2020-05-03 20:53
Previously I blogged about how to run a stateful MySQL pod on vSphere 7 with Kubernetes. In this blog post we will do the same with an Oracle Database single instance.

Creating a Single instance stateful MySQL pod on vSphere 7 with Kubernetes
http://theblasfrompas.blogspot.com/2020/04/creating-single-instance-stateful-mysql.html

For this blog we will use a single instance Oracle database, namely Oracle Database 18c (18.4.0) Express Edition (XE), but we could use any of the following if we wanted to. For a demo, Oracle XE is all I need.
  • Oracle Database 19c (19.3.0) Enterprise Edition and Standard Edition 2
  • Oracle Database 18c (18.4.0) Express Edition (XE)
  • Oracle Database 18c (18.3.0) Enterprise Edition and Standard Edition 2
  • Oracle Database 12c Release 2 (12.2.0.2) Enterprise Edition and Standard Edition 2
  • Oracle Database 12c Release 1 (12.1.0.2) Enterprise Edition and Standard Edition 2
  • Oracle Database 11g Release 2 (11.2.0.2) Express Edition (XE)
Steps

1. First head to the following GitHub URL which contains sample Docker build files to facilitate installation, configuration, and environment setup for DevOps users. Clone it as shown below

$ git clone https://github.com/oracle/docker-images.git

2. Change to the directory as follows.

$ cd oracle/docker-images/OracleDatabase/SingleInstance/dockerfiles

3. Now ensure you have a local Docker daemon running; in my case I am using Docker Desktop for Mac OSX. With that running, let's build our docker image locally as shown below for the database [Oracle Database 18c (18.4.0) Express Edition (XE)]

$ ./buildDockerImage.sh -v 18.4.0 -x

....

.

  Oracle Database Docker Image for 'xe' version 18.4.0 is ready to be extended:

    --> oracle/database:18.4.0-xe

  Build completed in 1421 seconds.

4. View the image locally using "docker images"
  
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
oracle/database 18.4.0-xe 3ec5d050b739 5 minutes ago 5.86GB
oraclelinux 7-slim f23503228fa1 2 weeks ago 120MB

5. We are not really interested in running Oracle locally, so let's push the built image to a Container Registry. In this case I am using Dockerhub

$ docker tag oracle/database:18.4.0-xe pasapples/oracle18.4.0-xe
$ docker push pasapples/oracle18.4.0-xe
The push refers to repository [docker.io/pasapples/oracle18.4.0-xe]
5bf989482a54: Pushed
899f9c386f90: Pushed
bc198e3a2f79: Mounted from library/oraclelinux
latest: digest: sha256:0dbbb906b20e8b052a5d11744a25e75edff07231980b7e110f45387e4956600a size: 951

Once done, here is the image on Dockerhub



6. At this point we are ready to deploy our Oracle Database 18c (18.4.0) Express Edition (XE). To do that we will use a Tanzu Kubernetes Grid cluster on vSphere 7. For an example of how that was created visit the blog post below.

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"
http://theblasfrompas.blogspot.com/2020/04/a-first-look-running-kubenetes-cluster.html

We will be using a cluster called "tkg-cluster-1" as shown in vSphere client image below.


7. Ensure we have switched to the correct context here as shown below.

$ kubectl config use-context tkg-cluster-1
Switched to context "tkg-cluster-1".

8. Now let's create a PVC for our Oracle database. Ensure you use a storage class name you have previously set up; in my case that's "pacific-gold-storage-policy". You don't really need 80G for a demo with Oracle XE, but given I had 2TB of storage I set it to be quite high.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracle-pv-claim
  annotations:
    pv.beta.kubernetes.io/gid: "54321"
spec:
  storageClassName: pacific-gold-storage-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi

$ kubectl create -f oracle-pvc.yaml
persistentvolumeclaim/oracle-pv-claim created

$ kubectl describe pvc oracle-pv-claim
Name:          oracle-pv-claim
Namespace:     default
StorageClass:  pacific-gold-storage-policy
Status:        Bound
Volume:        pvc-385ee541-5f7b-4a10-95de-f8b35a24306f
Labels:       
Annotations:   pv.beta.kubernetes.io/gid: 54321
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      80Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:   
Events:
  Type    Reason                Age   From                                                                                                 Message
  ----    ------                ----  ----                                                                                                 -------
  Normal  ExternalProvisioning  49s   persistentvolume-controller                                                                          waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
  Normal  Provisioning          49s   csi.vsphere.vmware.com_vsphere-csi-controller-8446748d4d-qbjhn_acc32eab-845a-11ea-a597-baf3d8b74e48  External provisioner is provisioning volume for claim "default/oracle-pv-claim"

9. Now we are ready to create a Deployment YAML as shown below. A few things to note here as per the YAML below
  1. I am hard coding the password, but normally I would use a k8s Secret to do this (see the sketch just after this list)
  2. I needed to create an init-container which fixed a file system permission issue for me
  3. I am running as the root user as per "runAsUser: 0"; for some reason the installation would not start without root privileges
  4. I am using the PVC we created above "oracle-pv-claim"
  5. I want to expose port 1521 (database listener port) and 5500 (enterprise manager port) internally only for now as per the Service definition. 
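
As a sketch only of what point 1 could look like, the password could be held in a Kubernetes Secret (the secret name "oracle-db-secret" below is hypothetical) and the ORACLE_PWD env entry in the Deployment would then reference it via valueFrom/secretKeyRef rather than a literal value.

$ kubectl create secret generic oracle-db-secret --from-literal=ORACLE_PWD=welcome1
$ kubectl get secret oracle-db-secret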
Deployment YAML:


apiVersion: v1
kind: Service
metadata:
  name: oracle
spec:
  ports:
  - port: 1521
    name: dblistport
  - port: 5500
    name: emport
  selector:
    app: oracle
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: oracle
spec:
  selector:
    matchLabels:
      app: oracle
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracle
    spec:
      containers:
      - image: pasapples/oracle18.4.0-xe
        name: oracle
        env:
          # Use secret in real usage
        - name: ORACLE_PWD
          value: welcome1
        - name: ORACLE_CHARACTERSET
          value: AL32UTF8
        ports:
        - containerPort: 1521
          name: dblistport
        - containerPort: 5500
          name: emport
        volumeMounts:
        - name: oracle-persistent-storage
          mountPath: /opt/oracle/oradata
        securityContext:
          runAsUser: 0
          runAsGroup: 54321
      initContainers:
      - name: fix-volume-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 54321:54321 /opt/oracle/oradata && chmod 777 /opt/oracle/oradata
        volumeMounts:
        - name: oracle-persistent-storage
          mountPath: /opt/oracle/oradata
      volumes:
      - name: oracle-persistent-storage
        persistentVolumeClaim:
          claimName: oracle-pv-claim

10. Apply the YAML as shown below

$ kubectl create -f oracle-deployment.yaml
service/oracle created
deployment.apps/oracle created

11. Wait for the oracle pod to be in a running state as shown below; this should happen fairly quickly
  
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-574b87c764-2zrp2 1/1 Running 0 11d
nginx-574b87c764-p8d45 1/1 Running 0 11d
oracle-77f6f7d567-sfd67 1/1 Running 0 36s

12. You can now monitor the pod as it starts to create the database instance for us using the "kubectl logs" command as shown below

$ kubectl logs oracle-77f6f7d567-sfd67 -f
ORACLE PASSWORD FOR SYS AND SYSTEM: welcome1
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password:
**********
Enter SYSTEM user password:
********
Enter PDBADMIN User Password:
**********
Prepare for db operation
7% complete
Copying database files
....

13. This will take some time but eventually it will have created / started the database instance for us as shown below

$ kubectl logs oracle-77f6f7d567-sfd67 -f
ORACLE PASSWORD FOR SYS AND SYSTEM: welcome1
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password:
**********
Enter SYSTEM user password:
********
Enter PDBADMIN User Password:
**********
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.

Connect to Oracle Database using one of the connect strings:
     Pluggable database: oracle-77f6f7d567-sfd67/XEPDB1
     Multitenant container database: oracle-77f6f7d567-sfd67
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE
The Oracle base remains unchanged with value /opt/oracle
#########################
DATABASE IS READY TO USE!
#########################
The following output is now a tail of the alert.log:
Pluggable database XEPDB1 opened read write
Completed: alter pluggable database XEPDB1 open
2020-05-04T00:59:32.719571+00:00
XEPDB1(3):CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  '/opt/oracle/oradata/XE/XEPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL  SEGMENT SPACE MANAGEMENT  AUTO
XEPDB1(3):Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  '/opt/oracle/oradata/XE/XEPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL  SEGMENT SPACE MANAGEMENT  AUTO
XEPDB1(3):ALTER DATABASE DEFAULT TABLESPACE "USERS"
XEPDB1(3):Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
2020-05-04T00:59:37.043341+00:00
ALTER PLUGGABLE DATABASE XEPDB1 SAVE STATE
Completed: ALTER PLUGGABLE DATABASE XEPDB1 SAVE STATE

14. The easiest way to test out our database instance is to "exec" into the pod and use SQL*Plus as shown below

- Create a script as follows

export POD_NAME=`kubectl get pod -l app=oracle -o jsonpath="{.items[0].metadata.name}"`
kubectl exec -it $POD_NAME -- /bin/bash

- Execute the script to exec into the pod

$ ./exec-oracle-pod.sh
bash-4.2#

15. Now let's connect in one of two ways, given we also have a Pluggable database instance running
  
bash-4.2# sqlplus system/welcome1@XE

SQL*Plus: Release 18.0.0.0.0 - Production on Mon May 4 01:02:38 2020
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.


Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL> exit
Disconnected from Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
bash-4.2# sqlplus system/welcome1@XEPDB1

SQL*Plus: Release 18.0.0.0.0 - Production on Mon May 4 01:03:20 2020
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Last Successful login time: Mon May 04 2020 01:02:38 +00:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL>

16. Now let's connect externally to the database. To do that I could create a port forward of the Oracle database listener port as shown below. I have set up the Oracle instant client using the following URL https://www.oracle.com/database/technologies/instant-client/macos-intel-x86-downloads.html

$ kubectl port-forward --namespace default oracle-77f6f7d567-sfd67 1521
Forwarding from 127.0.0.1:1521 -> 1521
Forwarding from [::1]:1521 -> 1521

Now log in using SQL*Plus directly from my Mac OSX terminal window

  
$ sqlplus system/welcome1@//localhost:1521/XEPDB1

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 4 11:43:05 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Mon May 04 2020 11:39:46 +10:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL>


17. We could also use Oracle Enterprise Manager, which we would do as follows. We could create a k8s Service of type LoadBalancer as well, but for now let's just do a simple port forward as per above

$ kubectl port-forward --namespace default oracle-77f6f7d567-sfd67 5500
Forwarding from 127.0.0.1:5500 -> 5500
Forwarding from [::1]:5500 -> 5500
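
If you did want Enterprise Manager reachable without a port forward, a sketch of the LoadBalancer Service alternative mentioned above would look like the following (the service name "oracle-em-external" is hypothetical).

$ kubectl expose deployment oracle --name=oracle-em-external --type=LoadBalancer --port=5500 --target-port=5500
$ kubectl get service oracle-em-external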

18. Access Oracle Enterprise Manager as follows, ensuring you have Flash installed in your browser. I logged in using the "SYS" user as "SYSDBA"

https://localhost:5500/em

Once logged in:






And that's it, you have Oracle 18c / Oracle Enterprise Manager running on vSphere 7 with Kubernetes and can now start deploying applications that use that Oracle instance as required.


More Information

Deploy a Stateful Application
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-D875DED3-41A1-484F-A1CD-13810D674420.html


Categories: Fusion Middleware

Creating a Single instance stateful MySQL pod on vSphere 7 with Kubernetes

Mon, 2020-04-27 20:32
In the vSphere environment, the persistent volume objects are backed by virtual disks that reside on datastores. Datastores are represented by storage policies. After the vSphere administrator creates a storage policy, for example gold, and assigns it to a namespace in a Supervisor Cluster, the storage policy appears as a matching Kubernetes storage class in the Supervisor Namespace and any available Tanzu Kubernetes clusters.

In the example below we will show how to get a single instance stateful MySQL application pod running on vSphere 7 with Kubernetes. For an introduction to vSphere 7 with Kubernetes see the blog link below.

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"
http://theblasfrompas.blogspot.com/2020/04/a-first-look-running-kubenetes-cluster.html

Steps 

1. If you followed the Blog above you will have a Namespace as shown in the image below. The namespace we are using is called "ns1"



2. Click on "ns1" and ensure you have added storage using the "Storage" card



3. Now let's connect to our supervisor cluster and switch to the Namespace "ns1"

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS 
--vsphere-username VCENTER-SSO-USER

Example:

$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local

Password:
Logged in successfully.

You have access to the following contexts:
   ns1
   wcp.haas-yyy.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context `

4. At this point we need to switch to the Namespace we created at step 2 which is "ns1".

$ kubectl config use-context ns1
Switched to context "ns1".

5. Use one of the following commands to verify that the storage class is the one which we added to the Namespace as per #2 above, in this case "pacific-gold-storage-policy".
  
$ kubectl get storageclass
NAME PROVISIONER AGE
pacific-gold-storage-policy csi.vsphere.vmware.com 5d20h

$ kubectl describe namespace ns1
Name: ns1
Labels: vSphereClusterID=domain-c8
Annotations: ncp/extpoolid: domain-c8:1d3e6bfb-af68-4494-a9bf-c8560a7a6aef-ippool-10-193-191-129-10-193-191-190
ncp/snat_ip: 10.193.191.141
ncp/subnet-0: 10.244.0.240/28
ncp/subnet-1: 10.244.1.16/28
vmware-system-resource-pool: resgroup-67
vmware-system-vm-folder: group-v68
Status: Active

Resource Quotas
Name: ns1-storagequota
Resource Used Hard
-------- --- ---
pacific-gold-storage-policy.storageclass.storage.k8s.io/requests.storage 20Gi 9223372036854775807

No resource limits.

As a DevOps engineer, you can use the storage class in your persistent volume claim specifications. You can then deploy an application that uses storage from the persistent volume claim.

6. At this point we can create a Persistent Volume Claim using YAML as follows. In the example below we reference storage class name ""pacific-gold-storage-policy".

Note: We are using a Supervisor Cluster Namespace here for our Stateful MySQL application but the storage class name will also appear in any Tanzu Kubernetes clusters you have created.

Example:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: pacific-gold-storage-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

$ kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/mysql-pv-claim created

7. Let's view the PVC we just created
  
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound pvc-a60f2787-ccf4-4142-8bf5-14082ae33403 20Gi RWO pacific-gold-storage-policy 39s

8. Now let's create a Deployment that will mount this PVC we created above using the name "mysql-pv-claim"

Example:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

$ kubectl apply -f mysql-deployment.yaml
service/mysql created
deployment.apps/mysql created

9. Let's verify we have a running Deployment with a MySQL POD as shown below
  
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mysql-c85f7f79c-gskkr 1/1 Running 0 78s
pod/nginx 1/1 Running 0 3d21h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mysql ClusterIP None <none> 3306/TCP 79s
service/tkg-cluster-1-60657ac113b7b5a0ebaab LoadBalancer 10.96.0.253 10.193.191.68 80:32078/TCP 5d19h
service/tkg-cluster-1-control-plane-service LoadBalancer 10.96.0.222 10.193.191.66 6443:30659/TCP 5d19h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mysql 1/1 1 1 79s

NAME DESIRED CURRENT READY AGE
replicaset.apps/mysql-c85f7f79c 1 1 1 79s

10. If we return to vSphere client we will see our MySQL Stateful deployment as shown below


11. We can also view the PVC we have created in vSphere client as well



12. Finally let's connect to the MySQL database, which is done as follows by running a temporary MySQL client pod

$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.47 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+---------------------+
| Database            |
+---------------------+
| information_schema  |
| #mysql50#lost+found |
| mysql               |
| performance_schema  |
+---------------------+
4 rows in set (0.02 sec)

mysql>


More Information

Deploy a Stateful Application
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-D875DED3-41A1-484F-A1CD-13810D674420.html

Display Storage Classes in a Supervisor Namespace or Tanzu Kubernetes Cluster
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52.html#GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52
Categories: Fusion Middleware

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"

Wed, 2020-04-22 19:40
VMware recently announced the general availability of vSphere 7. Among many new features is the integration of Kubernetes into vSphere. In this blog post we will see what is required to create our first Kubernetes Guest cluster and deploy the simplest of workloads.



Steps

1. Log into the vCenter client and select "Menu -> Workload Management" and click on "Enable"

Full details on how to enable and setup the Supervisor Cluster can be found at the following docs

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-21ABC792-0A23-40EF-8D37-0367B483585E.html

Make sure you enable Harbor as the Registry using this link below

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-AE24CF79-3C74-4CCD-B7C7-757AD082D86A.html

A pre-requisite for Workload Management is to have NSX-T 3.0 installed / enabled. https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html

Once all done the "Workload Management" page will look like this. This can take around 30 minutes to complete



2. As a vSphere administrator, you can create namespaces on a Supervisor Cluster and configure them with resource quotas and storage, as well as set permissions for DevOps engineer users. Once you configure a namespace, you can provide it to DevOps engineers, who run vSphere Pods and Kubernetes clusters created through the VMware Tanzu™ Kubernetes Grid™ Service.

To do this follow this link below

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-1544C9FE-0B23-434E-B823-C59EFC2F7309.html

Note: Make a note of this Namespace as we are going to need to connect to it shortly. In the examples below we have a namespace called "ns1"

3. With a vSphere namespace created we can now download the required CLI

Note: You can get the files from the Namespace summary page as shown below under the heading "Link to CLI Tools"



Once downloaded, put the contents of the .zip file in your OS's executable search path. You can optionally verify the CLI is available as per the sketch below.
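
As a rough sanity check that the binaries are on your path you could run something like the following; the exact output will vary with the version you downloaded.

$ kubectl version --client
$ kubectl vsphere --help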

4. Now we are ready to login. To do that we will use a command as follows

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS 
--vsphere-username VCENTER-SSO-USER

Example:

$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local

Password:
Logged in successfully.

You have access to the following contexts:
   ns1
   wcp.haas-253.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context `

Full instructions are at the following URL

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-F5114388-1838-4B3B-8A8D-4AE17F33526A.html

5. At this point we need to switch to the Namespace we created at step 2 which is "ns1"

$ kubectl config use-context ns1
Switched to context "ns1".

6. Get a list of the available content images and the Kubernetes version that the image provides

Command: kubectl get virtualmachineimages
  
$ kubectl get virtualmachineimages
NAME AGE
ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd 35m

Version Information can be retrieved as follows:
  
$ kubectl describe virtualmachineimage ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Name: ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Namespace:
Labels: <none>
Annotations: vmware-system.compatibilityoffering:
[{"requires": {"k8s.io/configmap": [{"predicate": {"operation": "anyOf", "arguments": [{"operation": "not", "arguments": [{"operation": "i...
vmware-system.guest.kubernetes.addons.calico:
{"type": "inline", "value": "---\n# Source: calico/templates/calico-config.yaml\n# This ConfigMap is used to configure a self-hosted Calic...
vmware-system.guest.kubernetes.addons.pvcsi:
{"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n name: {{ .PVCSINamespace }}\n---\nkind: ServiceAccount\napiVers...
vmware-system.guest.kubernetes.addons.vmware-guest-cluster:
{"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n name: vmware-system-cloud-provider\n---\napiVersion: v1\nkind: ...
vmware-system.guest.kubernetes.distribution.image.version:
{"kubernetes": {"version": "1.16.8+vmware.1", "imageRepository": "vmware.io"}, "compatibility-7.0.0.10100": {"isCompatible": "true"}, "dis...
API Version: vmoperator.vmware.com/v1alpha1
Kind: VirtualMachineImage
Metadata:
Creation Timestamp: 2020-04-22T04:52:42Z
Generation: 1
Resource Version: 28324
Self Link: /apis/vmoperator.vmware.com/v1alpha1/virtualmachineimages/ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
UID: 9b2a8248-d315-4b50-806f-f135459801a8
Spec:
Image Source Type: Content Library
Type: ovf
Events: <none>


7. Create a YAML file with the required configuration parameters to define the cluster

A few things to note:
  1. Make sure your storageClass name matches the storage class name you used during setup
  2. Make sure your distribution version matches a name from the output of step 6
Example:

apiVersion: run.tanzu.vmware.com/v1alpha1               #TKG API endpoint
kind: TanzuKubernetesCluster                            #required parameter
metadata:
  name: tkg-cluster-1                                   #cluster name, user defined
  namespace: ns1                                        #supervisor namespace
spec:
  distribution:
    version: v1.16                                      #resolved kubernetes version
  topology:
    controlPlane:
      count: 1                                          #number of control plane nodes
      class: best-effort-small                          #vmclass for control plane nodes
      storageClass: pacific-gold-storage-policy         #storageclass for control plane
    workers:
      count: 3                                          #number of worker nodes
      class: best-effort-small                          #vmclass for worker nodes
      storageClass: pacific-gold-storage-policy         #storageclass for worker nodes

More information on what goes into your YAML is defined here

Configuration Parameters for Provisioning Tanzu Kubernetes Clusters
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-4E68C7F2-C948-489A-A909-C7A1F3DC545F.html

8. Provision the Tanzu Kubernetes cluster using the following kubectl command against the manifest file above

Command: kubectl apply -f CLUSTER-NAME.yaml
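For example, if the manifest above was saved as tkg-cluster-1.yaml (the file name here is just an assumption), the command would be:

$ kubectl apply -f tkg-cluster-1.yaml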

While the cluster is being created you can check its status as follows

Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
  
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1 1 3 v1.16.8+vmware.1-tkg.3.60d2ffd 15m running

NAME PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1 provisioned

NAME PROVIDERID PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7 vsphere://420c7807-d2f2-0461-8232-ec33e07632fa running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c provisioning

NAME AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7 14m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp 6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm 6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c 6m4s

9. Run the following command and make sure the Tanzu Kubernetes cluster is running; this may take some time.

Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
  
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1 1 3 v1.16.8+vmware.1-tkg.3.60d2ffd 18m running

NAME PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1 provisioned

NAME PROVIDERID PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7 vsphere://420c7807-d2f2-0461-8232-ec33e07632fa running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp vsphere://420ca6ec-9793-7f23-2cd9-67b46c4cc49d provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm vsphere://420c9dd0-4fee-deb1-5673-dabc52b822ca provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c vsphere://420cf11f-24e4-83dd-be10-7c87e5486f1c provisioned

NAME AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7 18m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp 9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm 9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c 9m59s

10. For a more concise view of the Tanzu Kubernetes Clusters you have, along with their status, this command is useful

Command: kubectl get tanzukubernetescluster
  
$ kubectl get tanzukubernetescluster
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cluster-1 1 3 v1.16.8+vmware.1-tkg.3.60d2ffd 20m running

11. Now let's log in to the Tanzu Kubernetes Cluster using its name as follows

kubectl vsphere login --tanzu-kubernetes-cluster-name TKG-CLUSTER-NAME --vsphere-username VCENTER-SSO-USER --server SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS --insecure-skip-tls-verify

Example:

$ kubectl vsphere login --tanzu-kubernetes-cluster-name tkg-cluster-1 --vsphere-username administrator@vsphere.local --server wcp.haas-yyy.pez.pivotal.io --insecure-skip-tls-verify

Password:

Logged in successfully.

You have access to the following contexts:
   ns1
   tkg-cluster-1
   wcp.haas-yyy.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

12. Let's switch to the correct context here, which is our newly created Kubernetes cluster

$ kubectl config use-context tkg-cluster-1
Switched to context "tkg-cluster-1".

13. If your applications fail to run with the error “container has runAsNonRoot and the image will run as root”, add the RBAC cluster roles from here:

https://github.com/dstamen/Kubernetes/blob/master/demo-applications/allow-runasnonroot-clusterrole.yaml

PSP (Pod Security Policy) admission is enabled by default in Tanzu Kubernetes Clusters, so a policy binding needs to be applied before deploying workloads onto the cluster, as shown in the link above.
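As a rough sketch of what such a binding can look like, the following grants the default privileged policy to all authenticated users (fine for a lab, too broad for production). The ClusterRole name psp:vmware-system-privileged is the documented default privileged policy for Tanzu Kubernetes clusters, but verify it in your environment before applying:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: all-authenticated-psp-privileged      # example name, choose your own
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:vmware-system-privileged          # default privileged PSP ClusterRole on TKG clusters
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated                  # binds every authenticated user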

14. Now let's deploy a simple nginx deployment using the following YAML file

apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
    - port: 80
  selector:
    app: nginx
  type: LoadBalancer

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

15. Apply the YAML config to create the Deployment

$ kubectl create -f nginx-deployment.yaml
service/nginx created
deployment.apps/nginx created

16. Verify everything was deployed successfully as shown below
  
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-574b87c764-2zrp2 1/1 Running 0 74s
pod/nginx-574b87c764-p8d45 1/1 Running 0 74s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
service/nginx LoadBalancer 10.111.0.106 10.193.191.68 80:31921/TCP 75s
service/supervisor ClusterIP None <none> 6443/TCP 29m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 2/2 2 2 75s

NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-574b87c764 2 2 2 75s

To access NGINX use the external IP address of the service "service/nginx" on port 80
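For example, with the external IP shown in the output above you could issue a quick check with curl:

$ curl -I http://10.193.191.68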



17. Finally let's return to the vSphere client and see where the Tanzu Kubernetes Cluster we created exists. It will be inside the vSphere namespace "ns1", which is where we drove our install of the Tanzu Kubernetes Cluster from.





More Information

Introducing vSphere 7: Modern Applications & Kubernetes
https://blogs.vmware.com/vsphere/2020/03/vsphere-7-kubernetes-tanzu.html

How to Get vSphere with Kubernetes
https://blogs.vmware.com/vsphere/2020/04/how-to-get-vsphere-with-kubernetes.html

vSphere with Kubernetes 101 Whitepaper
https://blogs.vmware.com/vsphere/2020/03/vsphere-with-kubernetes-101.html



Categories: Fusion Middleware

Ever wondered if Cloud Foundry can run on Kubernetes?

Wed, 2020-04-15 23:36
Well, yep, it's possible now and available to be tested as per the repo below. In this post we will cover some requirements for installing cf-for-k8s, how to install it, and what we can do with it as it stands today.

https://github.com/cloudfoundry/cf-for-k8s

Before we get started it's important to note the following, taken directly from the GitHub repo itself.

"This is a highly experimental project to deploy the new CF Kubernetes-centric components on Kubernetes. It is not meant for use in production and is subject to change in the future"

Steps

1. First we need a k8s cluster. I am using k8s on vSphere using VMware Enterprise PKS but you can use GKE or any other cluster that supports the minimum requirements.

To deploy cf-for-k8s as is, the cluster should:
  • be running version 1.14.x, 1.15.x, or 1.16.x
  • have a minimum of 5 nodes
  • have a minimum of 3 CPU, 7.5GB memory per node
2. There are also some IaaS requirements as shown below.



  • Supports LoadBalancer services
  • Defines a default StorageClass 


    3. Finally, the requirements for pushing source-code based apps to Cloud Foundry mean we need an OCI compliant registry. I am using GCR but Docker Hub also works.

    Under the hood, cf-for-k8s uses Cloud Native Buildpacks to detect and build the app source code into an OCI compliant image and pushes the app image to the registry. Though cf-for-k8s has been tested with Google Container Registry and Dockerhub.com, it should work with any external OCI compliant registry.

    So if you, like me, are using GCR and following along, you will need to create an IAM service account with storage privileges for GCR. Assuming you want to create a new IAM service account on GCP, follow these steps, ensuring you set your GCP project id as shown below

    $ export GCP_PROJECT_ID={project-id-in-gcp}

    $ gcloud iam service-accounts create push-image

    $ gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member serviceAccount:push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com \
        --role roles/storage.admin

    $ gcloud iam service-accounts keys create \

      --iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
      gcr-storage-admin.json

    4. So to install cf-for-k8s we simply follow the detailed steps below.

    https://github.com/cloudfoundry/cf-for-k8s/blob/master/docs/deploy.md

    Note: We are using GCR, so the generate-values script we run looks as follows, which injects our GCR IAM account key into the YAML file (assuming we performed the step above)

    $ ./hack/generate-values.sh -d DOMAIN -g ./gcr-push-storage-admin.json > /tmp/cf-values.yml

    5. So in about 8 minutes or so you should have Cloud Foundry running on your Kubernetes cluster. Let's run a series of commands to verify that.

    - Here we see a set of Cloud Foundry namespaces named "cf-{name}"
      
    $ kubectl get ns
    NAME STATUS AGE
    cf-blobstore Active 8d
    cf-db Active 8d
    cf-system Active 8d
    cf-workloads Active 8d
    cf-workloads-staging Active 8d
    console Active 122m
    default Active 47d
    istio-system Active 8d
    kpack Active 8d
    kube-node-lease Active 47d
    kube-public Active 47d
    kube-system Active 47d
    metacontroller Active 8d
    pks-system Active 47d
    vmware-system-tmc Active 12d

    - Let's check the Cloud Foundry system is up and running by inspecting the status of the PODS as shown below
      
    $ kubectl get pods -n cf-system
    NAME READY STATUS RESTARTS AGE
    capi-api-server-6d89f44d5b-krsck 5/5 Running 2 8d
    capi-api-server-6d89f44d5b-pwv4b 5/5 Running 2 8d
    capi-clock-6c9f6bfd7-nmjrd 2/2 Running 0 8d
    capi-deployment-updater-79b4dc76-g2x6s 2/2 Running 0 8d
    capi-kpack-watcher-6c67984798-2x5n2 2/2 Running 0 8d
    capi-worker-7f8d499494-cd8fx 2/2 Running 0 8d
    cfroutesync-6fb9749-cbv6w 2/2 Running 0 8d
    eirini-6959464957-25ttx 2/2 Running 0 8d
    fluentd-4l9ml 2/2 Running 3 8d
    fluentd-mf8x6 2/2 Running 3 8d
    fluentd-smss9 2/2 Running 3 8d
    fluentd-vfzhl 2/2 Running 3 8d
    fluentd-vpn4c 2/2 Running 3 8d
    log-cache-559846dbc6-p85tk 5/5 Running 5 8d
    metric-proxy-76595fd7c-x9x5s 2/2 Running 0 8d
    uaa-79d77dbb77-gxss8 2/2 Running 2 8d

    - Let's view the ingress gateway resources in the "istio-system" namespace
      
    $ kubectl get all -n istio-system
    NAME READY STATUS RESTARTS AGE
    pod/istio-citadel-bc7957fc4-nn8kx 1/1 Running 0 8d
    pod/istio-galley-6478b6947d-6dl9h 2/2 Running 0 8d
    pod/istio-ingressgateway-fcgvg 2/2 Running 0 8d
    pod/istio-ingressgateway-jzkpj 2/2 Running 0 8d
    pod/istio-ingressgateway-ptjzz 2/2 Running 0 8d
    pod/istio-ingressgateway-rtwk4 2/2 Running 0 8d
    pod/istio-ingressgateway-tvz8p 2/2 Running 0 8d
    pod/istio-pilot-67955bdf6f-nrhzp 2/2 Running 0 8d
    pod/istio-policy-6b786c6f65-m7tj5 2/2 Running 3 8d
    pod/istio-sidecar-injector-5669cc5894-tq55v 1/1 Running 0 8d
    pod/istio-telemetry-77b745cd6b-wn2dx 2/2 Running 3 8d

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/istio-citadel ClusterIP 10.100.200.216 <none> 8060/TCP,15014/TCP 8d
    service/istio-galley ClusterIP 10.100.200.214 <none> 443/TCP,15014/TCP,9901/TCP,15019/TCP 8d
    service/istio-ingressgateway LoadBalancer 10.100.200.105 10.195.93.142 15020:31515/TCP,80:31666/TCP,443:30812/TCP,15029:31219/TCP,15030:31566/TCP,15031:30615/TCP,15032:30206/TCP,15443:32555/TCP 8d
    service/istio-pilot ClusterIP 10.100.200.182 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 8d
    service/istio-policy ClusterIP 10.100.200.98 <none> 9091/TCP,15004/TCP,15014/TCP 8d
    service/istio-sidecar-injector ClusterIP 10.100.200.160 <none> 443/TCP 8d
    service/istio-telemetry ClusterIP 10.100.200.5 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 8d

    NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
    daemonset.apps/istio-ingressgateway 5 5 5 5 5 <none> 8d

    NAME READY UP-TO-DATE AVAILABLE AGE
    deployment.apps/istio-citadel 1/1 1 1 8d
    deployment.apps/istio-galley 1/1 1 1 8d
    deployment.apps/istio-pilot 1/1 1 1 8d
    deployment.apps/istio-policy 1/1 1 1 8d
    deployment.apps/istio-sidecar-injector 1/1 1 1 8d
    deployment.apps/istio-telemetry 1/1 1 1 8d

    NAME DESIRED CURRENT READY AGE
    replicaset.apps/istio-citadel-bc7957fc4 1 1 1 8d
    replicaset.apps/istio-galley-6478b6947d 1 1 1 8d
    replicaset.apps/istio-pilot-67955bdf6f 1 1 1 8d
    replicaset.apps/istio-policy-6b786c6f65 1 1 1 8d
    replicaset.apps/istio-sidecar-injector-5669cc5894 1 1 1 8d
    replicaset.apps/istio-telemetry-77b745cd6b 1 1 1 8d

    NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
    horizontalpodautoscaler.autoscaling/istio-pilot Deployment/istio-pilot 0%/80% 1 5 1 8d
    horizontalpodautoscaler.autoscaling/istio-policy Deployment/istio-policy 2%/80% 1 5 1 8d
    horizontalpodautoscaler.autoscaling/istio-telemetry Deployment/istio-telemetry 7%/80% 1 5 1 8d

    You can use kapp to verify your install as follows:

    $ kapp list
    Target cluster 'https://cfk8s.mydomain:8443' (nodes: 46431ba8-2048-41ea-a5c9-84c3a3716f6e, 4+)

    Apps in namespace 'default'

    Name  Label                                 Namespaces                                                                                                  Lcs   Lca
    cf    kapp.k14s.io/app=1586305498771951000  (cluster),cf-blobstore,cf-db,cf-system,cf-workloads,cf-workloads-staging,istio-system,kpack,metacontroller  true  8d

    Lcs: Last Change Successful
    Lca: Last Change Age

    1 apps

    Succeeded

    6. Now that Cloud Foundry is running we need to configure DNS on your IaaS provider so that the wildcard subdomain of your system domain, and the wildcard subdomain of all app domains, point to the external IP of the Istio Ingress Gateway service. You can retrieve the external IP of this service by running a command as follows

    $ kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

    Note: The DNS A record wildcard entry would look as follows, ensuring you use the DOMAIN you passed to the generate-values script

    DNS entry should be mapped to : *.{DOMAIN}
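    As a sketch, using the ingress gateway external IP from the istio-system output earlier (10.195.93.142), the zone record would look something like this:

    *.{DOMAIN}.    IN    A    10.195.93.142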

    7. Once done we can use dig to verify we have set up our DNS wildcard entry correctly. We are looking for an ANSWER section which maps to the IP address we got from step 6 above.

    $ dig api.mydomain

    ; <<>> DiG 9.10.6 <<>> api.mydomain
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58127
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;api.mydomain. IN A

    ;; ANSWER SECTION:
    api.mydomain. 60 IN A 10.0.0.1

    ;; Query time: 216 msec
    ;; SERVER: 10.10.6.6#53(10.10.6.7)
    ;; WHEN: Thu Apr 16 11:46:59 AEST 2020
    ;; MSG SIZE  rcvd: 83

    8. So now we are ready to log in using the Cloud Foundry CLI. Make sure you're using the latest version as shown below

    $ cf version
    cf version 6.50.0+4f0c3a2ce.2020-03-03

    Note: You can install Cloud Foundry CLI as follows

    https://github.com/cloudfoundry/cli

    9. Ok so we are ready to target the API endpoint and log in. As you may have guessed the API endpoint is "api.{DOMAIN}", so go ahead and do that as shown below. If this fails it means you have to re-visit steps 6 and 7 above.

    $ cf api https://api.mydomain --skip-ssl-validation
    Setting api endpoint to https://api.mydomain...
    OK

    api endpoint:   https://api.mydomain
    api version:    2.148.0

    10. So now we need the admin password to log in using UAA. This was generated for us when we ran the generate script above and produced our install YAML. You can run a simple command as follows against the YAML file to get the password.

    $ head cf-values.yml
    #@data/values
    ---
    system_domain: "mydomain"
    app_domains:
    #@overlay/append
    - "mydomain"
    cf_admin_password: 5nxm5bnl23jf5f0aivbs

    cf_blobstore:
      secret_key: 04gihynpr0x4dpptc5a5
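    If you only want the password itself, a simple grep also works (assuming the file was written to /tmp/cf-values.yml as in step 4):

    $ grep cf_admin_password /tmp/cf-values.yml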

    11. So to log in I use a script as follows which will create a space for me, which I then target so I can push applications into it.

    cf auth admin 5nxm5bnl23jf5f0aivbs
    cf target -o system
    cf create-space development
    cf target -s development

    The output when we run this script, or just type each command one at a time, will look as follows.

    API endpoint: https://api.mydomain
    Authenticating...
    OK

    Use 'cf target' to view or set your target org and space.
    api endpoint:   https://api.mydomain
    api version:    2.148.0
    user:           admin
    org:            system
    space:          development
    Creating space development in org system as admin...
    OK

    Space development already exists

    api endpoint:   https://api.mydomain
    api version:    2.148.0
    user:           admin
    org:            system
    space:          development

    12. If we type in "cf apps" we will see we have no applications deployed which is expected.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    No apps found

    13. So let's deploy our first application. In this example we will use a NodeJS Cloud Foundry application which exists at the following GitHub repo. We will deploy it using its source code only. To do that we will clone it onto our file system as shown below.

    https://github.com/cloudfoundry-samples/cf-sample-app-nodejs

    $ git clone https://github.com/cloudfoundry-samples/cf-sample-app-nodejs

    14. Edit cf-sample-app-nodejs/manifest.yml to look as follows by removing the random-route entry

    ---
    applications:
    - name: cf-nodejs
      memory: 512M
      instances: 1

    15. Now to push the Node app we are going to use two terminal windows. One to actually push the app and the other to view the logs.


    16. Now in the first terminal window issue this command, ensuring the cloned app from above exists in the directory you're in, as shown by the path it's referencing

    $ cf push test-node-app -p ./cf-sample-app-nodejs

    17. In the second terminal window issue this command.

    $ cf logs test-node-app

    18. You should see log output while the application is being pushed.



    19. Wait for the "cf push" to complete as shown below

    ....

    Waiting for app to start...

    name:                test-node-app
    requested state:     started
    isolation segment:   placeholder
    routes:              test-node-app.system.run.haas-210.pez.pivotal.io
    last uploaded:       Thu 16 Apr 13:04:59 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory    disk      details
    #0   running   2020-04-16T03:05:13Z   0.0%   0 of 1G   0 of 1G


    Verify we have deployed our Node app and it has a fully qualified URL for us to access it as shown below.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    test-node-app   started           1/1         1G       1G     test-node-app.mydomain

    ** Browser **



    Ok so what actually happened on our k8s cluster to get this application deployed? A series of steps was performed, which is why "cf push" blocks until they have all completed. At a high level these are the 3 main steps (the sketch after this list shows how to inspect the build side):
    1. Capi uploads the code, puts it in internal blob store
    2. kpack builds the image and stores in the registry you defined at install time (GCR for us)
    3. Eirini schedules the pod
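    Since kpack records its work as Kubernetes custom resources, a quick way to inspect the build side of a push is something like the following (resource names may differ between kpack versions, so treat this as a sketch):

    $ kubectl get images.build.pivotal.io -n cf-workloads
    $ kubectl get builds.build.pivotal.io -n cf-workloads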

    GCR "cf-workloads" folder


    kpack is where a lot of the magic actually occurs. kpack is based on the CNCF sandbox project known as Cloud Native Buildpacks and can create OCI compliant images from source code and/or artifacts automatically for you. CNB/kpack doesn't stop there; to find out more I suggest going to the following links.

    https://tanzu.vmware.com/content/blog/introducing-kpack-a-kubernetes-native-container-build-service

    https://buildpacks.io/

    Buildpacks provide a higher-level abstraction for building apps compared to Dockerfiles.

    Specifically, buildpacks:
    • Provide a balance of control that reduces the operational burden on developers and supports enterprise operators who manage apps at scale.
    • Ensure that apps meet security and compliance requirements without developer intervention.
    • Provide automated delivery of both OS-level and application-level dependency upgrades, efficiently handling day-2 app operations that are often difficult to manage with Dockerfiles.
    • Rely on compatibility guarantees to safely apply patches without rebuilding artifacts and without unintentionally changing application behavior.
    20. Let's run a series of kubectl commands to see what was created. All of our apps get deployed to the namespace "cf-workloads".

    - What POD's are running in cf-workloads
      
    $ kubectl get pods -n cf-workloads
    NAME READY STATUS RESTARTS AGE
    test-node-app-development-c346b24349-0 2/2 Running 0 26m

    - You will notice we have a POD running with 2 containers, BUT we also have a Service which is used internally to route to the one or more PODs using ClusterIP as shown below
      
    $ kubectl get svc -n cf-workloads
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    s-1999c874-e300-45e1-b5ff-1a69b7649dd6 ClusterIP 10.100.200.26 <none> 8080/TCP 27m

    - Each POD has two containers named as follows.

    opi : This is your actual container instance running your code
    istio-proxy: This, as the name suggests, is a proxy sidecar container which, among other things, routes requests to the opi container when required
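    You can confirm those container names with a jsonpath query against one of the pods, for example:

    $ kubectl get pod test-node-app-development-c346b24349-0 -n cf-workloads -o jsonpath='{.spec.containers[*].name}'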

    21. Ok so let's scale our application to run 2 instances. To do that we simply use Cloud Foundry CLI as follows

    $ cf scale test-node-app -i 2
    Scaling app test-node-app in org system / space development as admin...
    OK

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    test-node-app   started           2/2         1G       1G     test-node-app.mydomain

    And using kubectl as expected we end up with another POD created for the second instance
      
    $ kubectl get pods -n cf-workloads
    NAME READY STATUS RESTARTS AGE
    test-node-app-development-c346b24349-0 2/2 Running 0 44m
    test-node-app-development-c346b24349-1 2/2 Running 0 112s

    If we dig a bit deeper we will see that a StatefulSet backs the application deployment as shown below
      
    $ kubectl get all -n cf-workloads
    NAME READY STATUS RESTARTS AGE
    pod/test-node-app-development-c346b24349-0 2/2 Running 0 53m
    pod/test-node-app-development-c346b24349-1 2/2 Running 0 10m

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/s-1999c874-e300-45e1-b5ff-1a69b7649dd6 ClusterIP 10.100.200.26 <none> 8080/TCP 53m

    NAME READY AGE
    statefulset.apps/test-node-app-development-c346b24349 2/2 53m

    Ok so as you may have guessed we can deploy many different types of apps because kpack supports multiple languages including Java, Go, Python etc.

    22. Let's deploy a Go application as follows.

    $ git clone https://github.com/swisscom/cf-sample-app-go

    $ cf push my-go-app -m 64M -p ./cf-sample-app-go
    Pushing app my-go-app to org system / space development as admin...
    Getting app info...
    Creating app with these attributes...
    + name:       my-go-app
      path:       /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/cf-sample-app-go
    + memory:     64M
      routes:
    +   my-go-app.mydomain

    Creating app my-go-app...
    Mapping routes...
    Comparing local files to remote cache...
    Packaging files to upload...
    Uploading files...
     1.43 KiB / 1.43 KiB [====================================================================================] 100.00% 1s

    Waiting for API to complete processing files...

    Staging app and tracing logs...

    Waiting for app to start...

    name:                my-go-app
    requested state:     started
    isolation segment:   placeholder
    routes:              my-go-app.mydomain
    last uploaded:       Thu 16 Apr 14:06:25 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   64M
         state     since                  cpu    memory     disk      details
    #0   running   2020-04-16T04:06:43Z   0.0%   0 of 64M   0 of 1G

    We can invoke the application using "curl" or something more modern like "HTTPie"

    $ http http://my-go-app.mydomain
    HTTP/1.1 200 OK
    content-length: 59
    content-type: text/plain; charset=utf-8
    date: Thu, 16 Apr 2020 04:09:46 GMT
    server: istio-envoy
    x-envoy-upstream-service-time: 6

    Congratulations! Welcome to the Swisscom Application Cloud!

    If we tailed the logs using "cf logs my-go-app" we would have seen that kpack intelligently determines this is a Go app and uses the Go buildpack to compile the code and produce a container image.

    ...
    2020-04-16T14:05:27.52+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Warning: Image "gcr.io/fe-papicella/cf-workloads/f0072cfa-0e7e-41da-9bf7-d34b2997fb94" not found
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Compiler Buildpack 0.0.83
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go 1.13.7: Contributing to layer
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Downloading from https://buildpacks.cloudfoundry.org/dependencies/go/go-1.13.7-bionic-5bb47c26.tgz
    2020-04-16T14:05:35.13+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Verifying checksum
    2020-04-16T14:05:35.63+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Expanding to /layers/org.cloudfoundry.go-compiler/go
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Mod Buildpack 0.0.84
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Setting environment variables
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
    2020-04-16T14:05:41.68+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT github.com/swisscom/cf-sample-app-go
    2020-04-16T14:05:41.69+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
    ...

    Using "cf apps" we now have two applications deployed as shown below.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    my-go-app       started           1/1         64M      1G     my-go-app.mydomain
    test-node-app   started           2/2         1G       1G     test-node-app.mydomain

    23. Finally, kpack and the buildpacks ecosystem can also deploy already-built artifacts. The Java buildpack, for example, is capable of not only building from source but can also use a fat Spring Boot JAR file, as shown below. In this example we have packaged the artifact we wish to deploy as "PivotalMySQLWeb-1.0.0-SNAPSHOT.jar".

    $ cf push piv-mysql-web -p PivotalMySQLWeb-1.0.0-SNAPSHOT.jar -i 1 -m 1g
    Pushing app piv-mysql-web to org system / space development as admin...
    Getting app info...
    Creating app with these attributes...
    + name:        piv-mysql-web
      path:        /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/PivotalMySQLWeb-1.0.0-SNAPSHOT.jar
    + instances:   1
    + memory:      1G
      routes:
    +   piv-mysql-web.mydomain

    Creating app piv-mysql-web...
    Mapping routes...
    Comparing local files to remote cache...
    Packaging files to upload...
    Uploading files...
     1.03 MiB / 1.03 MiB [====================================================================================] 100.00% 2s

    Waiting for API to complete processing files...

    Staging app and tracing logs...

    Waiting for app to start...

    name:                piv-mysql-web
    requested state:     started
    isolation segment:   placeholder
    routes:              piv-mysql-web.mydomain
    last uploaded:       Thu 16 Apr 14:17:22 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory    disk      details
    #0   running   2020-04-16T04:17:43Z   0.0%   0 of 1G   0 of 1G


    Of course the usual commands you expect from the CF CLI still exist. Here are some examples.

    $ cf app piv-mysql-web
    Showing health and status for app piv-mysql-web in org system / space development as admin...

    name:                piv-mysql-web
    requested state:     started
    isolation segment:   placeholder
    routes:              piv-mysql-web.mydomain
    last uploaded:       Thu 16 Apr 14:17:22 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory         disk      details
    #0   running   2020-04-16T04:17:43Z   0.1%   195.8M of 1G   0 of 1G

    $ cf env piv-mysql-web
    Getting env variables for app piv-mysql-web in org system / space development as admin...
    OK

    System-Provided:

    {
     "VCAP_APPLICATION": {
      "application_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
      "application_name": "piv-mysql-web",
      "application_uris": [
       "piv-mysql-web.mydomain"
      ],
      "application_version": "750d9530-e756-4b74-ac86-75b61c60fe2d",
      "cf_api": "https://api. mydomain",
      "limits": {
       "disk": 1024,
       "fds": 16384,
       "mem": 1024
      },
      "name": "piv-mysql-web",
      "organization_id": "8ae94610-513c-435b-884f-86daf81229c8",
      "organization_name": "system",
      "process_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
      "process_type": "web",
      "space_id": "7f3d78ae-34d4-42e4-8ab8-b34e46e8ad1f",
      "space_name": "development",
      "uris": [
       "piv-mysql-web. mydomain"
      ],
      "users": null,
      "version": "750d9530-e756-4b74-ac86-75b61c60fe2d"
     }
    }

    No user-defined env variables have been set

    No running env variables have been set

    No staging env variables have been set

    So what about some sort of UI? That brings us to step 24.

    24. Let's start by installing helm using a script as follows

    #!/usr/bin/env bash

    echo "install helm"
    # installs helm with bash commands for easier command line integration
    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
    # add a service account within a namespace to segregate tiller
    kubectl --namespace kube-system create sa tiller
    # create a cluster role binding for tiller
    kubectl create clusterrolebinding tiller \
        --clusterrole cluster-admin \
        --serviceaccount=kube-system:tiller

    echo "initialize helm"
    # initialized helm within the tiller service account
    helm init --service-account tiller
    # updates the repos for Helm repo integration
    helm repo update

    echo "verify helm"
    # verify that helm is installed in the cluster
    kubectl get deploy,svc tiller-deploy -n kube-system

    Once installed you can verify helm is working by using "helm ls" which should come back with no output as you haven't installed anything with helm yet.

    25. Run the following to install Stratos, an open source Web UI for Cloud Foundry

    For more information on Stratos visit this URL - https://github.com/cloudfoundry/stratos
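    Note: The helm install below assumes the Stratos chart repository has already been added. If it hasn't, register it first (repository URL as per the Stratos documentation):

    $ helm repo add stratos https://cloudfoundry.github.io/stratos
    $ helm repo update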

    $ helm install stratos/console --namespace=console --name my-console --set console.service.type=LoadBalancer
    NAME:   my-console
    LAST DEPLOYED: Thu Apr 16 09:48:19 2020
    NAMESPACE: console
    STATUS: DEPLOYED

    RESOURCES:
    ==> v1/Deployment
    NAME        READY  UP-TO-DATE  AVAILABLE  AGE
    stratos-db  0/1    1           0          2s

    ==> v1/Job
    NAME                   COMPLETIONS  DURATION  AGE
    stratos-config-init-1  0/1          2s        2s

    ==> v1/PersistentVolumeClaim
    NAME                              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
    console-mariadb                   Bound   pvc-4ff20e21-1852-445f-854f-894bc42227ce  1Gi       RWO           fast          2s
    my-console-encryption-key-volume  Bound   pvc-095bb7ed-7be9-4d93-b63a-a8af569361b6  20Mi      RWO           fast          2s

    ==> v1/Pod(related)
    NAME                         READY  STATUS             RESTARTS  AGE
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s

    ==> v1/Role
    NAME              AGE
    config-init-role  2s

    ==> v1/RoleBinding
    NAME                              AGE
    config-init-secrets-role-binding  2s

    ==> v1/Secret
    NAME                  TYPE    DATA  AGE
    my-console-db-secret  Opaque  5     2s
    my-console-secret     Opaque  5     2s

    ==> v1/Service
    NAME                TYPE          CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
    my-console-mariadb  ClusterIP     10.100.200.162  <none>         3306/TCP       2s
    my-console-ui-ext   LoadBalancer  10.100.200.171  10.195.93.143  443:31524/TCP  2s

    ==> v1/ServiceAccount
    NAME         SECRETS  AGE
    config-init  1        2s

    ==> v1/StatefulSet
    NAME     READY  AGE
    stratos  0/1    2s

    26. You can verify it installed in a few ways as shown below.

    - Use helm with "helm ls"
      
    $ helm ls
    NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    my-console 1 Thu Apr 16 09:48:19 2020 DEPLOYED console-3.0.0 3.0.0 console

    - Verify everything is running using "kubectl get all -n console"
      
    $ k get all -n console
    NAME READY STATUS RESTARTS AGE
    pod/stratos-0 0/2 ContainerCreating 0 40s
    pod/stratos-config-init-1-2t47x 0/1 Completed 0 40s
    pod/stratos-db-69ddf7f5f7-gb8xm 0/1 Running 0 40s

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/my-console-mariadb ClusterIP 10.100.200.162 <none> 3306/TCP 40s
    service/my-console-ui-ext LoadBalancer 10.100.200.171 10.195.1.1 443:31524/TCP 40s

    NAME READY UP-TO-DATE AVAILABLE AGE
    deployment.apps/stratos-db 0/1 1 0 41s

    NAME DESIRED CURRENT READY AGE
    replicaset.apps/stratos-db-69ddf7f5f7 1 1 0 41s

    NAME READY AGE
    statefulset.apps/stratos 0/1 41s

    NAME COMPLETIONS DURATION AGE
    job.batch/stratos-config-init-1 1/1 27s 42s

    27. Now to open up the UI web app we just need the external IP from "service/my-console-ui-ext" as per the output above.

    Navigate to https://{external-ip}:443
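    As with the Istio ingress gateway earlier, the external IP can be pulled straight out of the service with a jsonpath query, for example:

    $ kubectl get svc -n console my-console-ui-ext -o jsonpath='{.status.loadBalancer.ingress[*].ip}'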

    28. Create a local user to log in, using the password you set and the username "admin".

    Note: The password is just to get into the UI. It can be anything you want it to be.



    29. Now we need to click on "Endpoints" and register a Cloud Foundry endpoint using the same login details we used with the Cloud Foundry API earlier at step 11.

    Note: The API endpoint is what you used at step 9 and make sure to skip SSL validation

    Once connected there are our deployed applications.



    Summary 

    In this post we explored what running Cloud Foundry on Kubernetes looks like. For those familiar with Cloud Foundry or Tanzu Application Service (formerly known as Pivotal Application Service), from a development perspective everything is the same, using the familiar CF CLI commands. What changes is the footprint: running Cloud Foundry is much less complicated, and it runs on Kubernetes itself, meaning even more places to run Cloud Foundry than ever before, plus the ability to leverage community-based projects on Kubernetes, further simplifying Cloud Foundry.

    For more information see the links below.

    More Information

    GitHub Repo
    https://github.com/cloudfoundry/cf-for-k8s

    VMware Tanzu Application Service for Kubernetes (Beta)
    https://network.pivotal.io/products/tas-for-kubernetes/
    Categories: Fusion Middleware

    Thank you kubie exactly what I needed

    Sun, 2020-04-05 22:59
    On average I deal with at least 5 different Kubernetes clusters so today when I saw / heard of kubie I had to install it.

    kubie is an alternative to kubectx, kubens and the k on prompt modification script. It offers context switching, namespace switching and prompt modification in a way that makes each shell independent from the others.

    Installing kubie right now involves downloading the release from the link below. Homebrew support is pending.

    https://github.com/sbstp/kubie/releases

    Once added to your path it's as simple as this

    1. Check kubie is in your path

    $ which kubie
    /usr/local/bin/kubie

    2. Run "kubie ctx" as follows and select the "apples" k8s context

    papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$ kubie ctx



    [apples|default] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$

    3. Switch to a new namespace as shown below and watch how the PS1 prompt changes to indicate the k8s context and the new namespace we have set as a result of the command below

    $ kubectl config set-context --current --namespace=vmware-system-tmc
    Context "apples" modified.

    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$
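    Note: kubie also offers namespace switching of its own, so something like the following should achieve the same prompt change (check "kubie --help" for the exact syntax in your version):

    $ kubie ns vmware-system-tmc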

    4. Finally kubie exec is a subcommand that allows you to run commands inside of a context, a bit like kubectl exec allows you to run a command inside a pod. Here are some examples below
      
    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$ kubie exec apples vmware-system-tmc kubectl get pods
    NAME READY STATUS RESTARTS AGE
    agent-updater-75f88b44f6-9f9jj 1/1 Running 0 2d23h
    agentupdater-workload-1586145240-kmwln 1/1 Running 0 3s
    cluster-health-extension-76d9b549b5-dlhms 1/1 Running 0 2d23h
    data-protection-59c88488bd-9wxk2 1/1 Running 0 2d23h
    extension-manager-8d69d95fd-sgksw 1/1 Running 0 2d23h
    extension-updater-77fdc4574d-fkcwb 1/1 Running 0 2d23h
    inspection-extension-64857d4d95-nl76f 1/1 Running 0 2d23h
    intent-agent-6794bb7995-jmcxg 1/1 Running 0 2d23h
    policy-sync-extension-7c968c9dcd-x4jvl 1/1 Running 0 2d23h
    policy-webhook-779c6f6c6-ppbn6 1/1 Running 0 2d23h
    policy-webhook-779c6f6c6-r82h4 1/1 Running 1 2d23h
    sync-agent-d67f95889-qbxtb 1/1 Running 6 2d23h
    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$ kubie exec apples default kubectl get pods
    NAME READY STATUS RESTARTS AGE
    pbs-demo-image-build-1-mnh6v-build-pod 0/1 Completed 0 2d23h
    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$

    More Information

    Blog Page:
    https://blog.sbstp.ca/introducing-kubie/

    GitHub Page:
    https://github.com/sbstp/kubie
    Categories: Fusion Middleware

    VMware Enterprise PKS 1.7 has just been released

    Thu, 2020-04-02 22:51
    VMware Enterprise PKS 1.7 was just released. For details please review the release notes using the link below.

    https://docs.pivotal.io/pks/1-7/release-notes.html



    More Information

    https://docs.pivotal.io/pks/1-7/index.html


    Categories: Fusion Middleware

    kpack 0.0.6 and Docker Hub secret annotation change for Docker Hub

    Mon, 2020-03-02 16:53
    I decided to try out the 0.0.6 release of kpack and noticed a small change to how you define your registry credentials when using Docker Hub. If you don't make this change, kpack will fail to push to Docker Hub as your registry, with errors as follows when trying to export the image.

    [export] *** Images (sha256:1335a241ab0428043a89626c99ddac8dfb2719b79743652e535898600439e80f):
    [export]       pasapples/pbs-demo-image:latest - UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]
    [export]       index.docker.io/pasapples/pbs-demo-image:b1.20200301.232548 - UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]
    [export] ERROR: failed to export: failed to write image to the following tags: [pasapples/pbs-demo-image:latest: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]],[index.docker.io/pasapples/pbs-demo-image:b1.20200301.232548: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]]

    Previously in kpack 0.0.5 you defined your Docker Hub registry credentials as follows:

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: dockerhub
      annotations:
        build.pivotal.io/docker: index.docker.io
    type: kubernetes.io/basic-auth
    stringData:
      username: dockerhub-user
      password: ...

    Now with kpack 0.0.6 you need to define the "annotations" using an https URL with "/v1" appended to the end, as shown below.

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: dockerhub
      annotations:
        build.pivotal.io/docker: https://index.docker.io/v1/
    type: kubernetes.io/basic-auth
    stringData:
      username: dockerhub-user
      password: ...
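    Note: The secret alone is not enough; kpack consumes it through the service account referenced by your Image resource. A minimal sketch, with placeholder names, looks like this:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kpack-service-account     # placeholder name
    secrets:
    - name: dockerhub                 # the registry secret defined above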

    More Information

    https://github.com/pivotal/kpack
    Categories: Fusion Middleware
