The goal of this project is to test/validate different approaches to build a runtime using:
- Java Buildpacks Client
- Pack client
- Tekton Pipeline
This project currently verifies the scenarios described below.

The following table details the versions of the different tools/images used within the GitHub e2e workflows or examples of this project.
| Tool/Image | Version | Tested | Note |
|---|---|---|---|
| Lifecycle | | Yes | - |
| Platform | | Yes | - |
| Pack client | | Yes | - |
| Java Buildpacks library | | Yes | Supports platform 0.4 to 0.13! |
| Quarkus Buildpacks | | Yes | - |
| Paketo Build base ubi8 image | | Yes | Base image to be used to build |
| Paketo Run base ubi8 image | | Yes | Image to be used to run an application |
| Paketo UBI8 builder | | Yes | Includes the Node.js and Quarkus Java buildpacks. |
| Paketo Node.js Extension for ubi | | Yes | - |
| Paketo Java Extension for ubi | | Yes | - |
Prerequisites:

- Podman Desktop installed and running
- IDPlatform running on kind (>= 0.10)
- Tekton CLI (>= 0.40)
- An `.env` file with the different variables to be used, sourced in your shell (see the sample below)
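A minimal `.env` sketch; only `PROJECT_PATH` and `IMAGE_REF` are required by the Java sample further below, and the other names are assumptions matching the registry that idpbuilder creates:

```bash
# .env — adjust the values to your environment, then run: source .env
export REGISTRY_HOST=gitea.cnoe.localtest.me:8443        # registry created by idpbuilder
export PROJECT_PATH=<JAVA_PROJECT_PATH>                  # path of the Java project to build
export IMAGE_REF=$REGISTRY_HOST/giteaadmin/quarkus-hello:1.0
```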
> ℹ️ For local tests, we suggest to create a kubernetes cluster using the idpbuilder tool:

```bash
idpbuilder create --dev-password --name buildpack
```

As the registry that we will access uses the HTTPS protocol and a self-signed certificate, it is needed to declare it as an insecure registry (when you use podman) in order to avoid a TLS error, as the certificate has been signed by an unknown authority:

```bash
echo 'printf "\n[[registry]]\nlocation = \"gitea.cnoe.localtest.me:8443\"\ninsecure = true\n" >> /etc/containers/registries.conf' | podman machine ssh --username root --
podman machine stop; podman machine start
```

Create a Java maven project using as reference the following sample: `.github/samples/build-me`.
Review the DSL of the `BuildMe` class to see how the client is configured to build a Java project:
```java
...
int exitCode = BuildConfig.builder()
    .withBuilderImage(new ImageReference("paketobuildpacks/builder-ubi8-base:latest"))
    .withOutputImage(new ImageReference(APPLICATION_IMAGE_REF))
    .withNewPlatformConfig()
        .withEnvironment(envMap)
    .endPlatformConfig()
    .withNewDockerConfig()
        .withAuthConfigs(authInfo)
        .withUseDaemon(false)
    .endDockerConfig()
    .withNewLogConfig()
        .withLogger(new SystemLogger())
        .withLogLevel("debug")
    .and()
    .addNewFileContentApplication(new File(PROJECT_PATH))
    .build()
    .getExitCode();
```

Set the mandatory environment variables:

```bash
export PROJECT_PATH=<JAVA_PROJECT_PATH>
# <IMAGE_REF> can be <IMAGE_NAME> without registry or a full registry reference with host, port (optional), path & tag
export IMAGE_REF=<IMAGE_REF>
```
> ℹ️ Review the documentation of the Java Buildpacks Client for more information.
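One way to run the sample is through the exec-maven-plugin (a sketch; the fully qualified class name is an assumption, so adjust `-Dexec.mainClass` to the actual package of `BuildMe`):

```bash
# from the .github/samples/build-me project, after sourcing the .env file
mvn -q compile exec:java -Dexec.mainClass=BuildMe
```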
First, create a Quarkus Hello example using the following maven command executed in a terminal.
```bash
mvn io.quarkus.platform:quarkus-maven-plugin:3.21.3:create \
    -DprojectGroupId=dev.snowdrop \
    -DprojectArtifactId=quarkus-hello \
    -DprojectVersion=1.0 \
    -DplatformVersion=3.22.1 \
    -Dextensions='resteasy,kubernetes,buildpack'
```

Test the project locally:

```bash
cd quarkus-hello
mvn compile quarkus:dev
```

In a separate terminal, curl the HTTP endpoint:

```bash
curl http://localhost:8080/hello
```

To build the container image, do the build using the ubi8 builder image `paketobuildpacks/builder-ubi8-base` and pass the `BP_*` env variable(s) needed to properly configure the Quarkus Buildpacks build process:
```bash
mvn clean package \
    -Dquarkus.container-image.image=gitea.cnoe.localtest.me:8443/giteaadmin/quarkus-hello:1.0 \
    -Dquarkus.container-image.build=true \
    -Dquarkus.container-image.push=true \
    -Dquarkus.buildpack.jvm-builder-image=paketobuildpacks/builder-ubi8-base:0.1.42 \
    -Dquarkus.buildpack.builder-env.BP_JVM_VERSION=21 \
    -Dquarkus.buildpack.registry-user."gitea.cnoe.localtest.me:8443"=giteaAdmin \
    -Dquarkus.buildpack.registry-password."gitea.cnoe.localtest.me:8443"=developer
```
> ℹ️ To get the debug messages and configure the logger, add the following properties:
>
> ```bash
> -Dquarkus.buildpack.log-level=debug \
> -Dorg.jboss.logging.provider=slf4j \
> -Dorg.slf4j.simpleLogger.log.io.quarkus.container.image.buildpack.deployment=DEBUG \
> -Dorg.slf4j.simpleLogger.log.dev.snowdrop.buildpack.docker=DEBUG
> ```
Next, start the container and curl the endpoint:

```bash
podman run -i --rm -p 8080:8080 gitea.cnoe.localtest.me:8443/giteaadmin/quarkus-hello:1.0
curl http://localhost:8080/hello
```

To validate this scenario on top of the existing quarkus-hello project, we will use the pack client.
```bash
podman rmi $REGISTRY_HOST/giteaadmin/quarkus-hello:1.0
pack build $REGISTRY_HOST/giteaadmin/quarkus-hello:1.0 \
    --builder paketobuildpacks/builder-ubi8-base:latest \
    --volume $HOME/.m2:/home/cnb/.m2:rw \
    -e BP_JVM_VERSION=21
```
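Optionally, you can inspect the builder beforehand to list the buildpacks and lifecycle version it ships:

```bash
pack builder inspect paketobuildpacks/builder-ubi8-base:latest
```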
Next, start the container and curl the endpoint:

```bash
podman run -i --rm -p 8080:8080 gitea.cnoe.localtest.me:8443/giteaadmin/quarkus-hello:1.0
curl http://localhost:8080/hello
```

To use Tekton, it is needed to have a k8s cluster (>= 1.28) and a container registry. To install Tekton, as well as its dashboard, we will rely on the idplatform cluster we created using the idpbuilder tool:
```bash
idpbuilder create --dev-password --name buildpack \
    -p https://github.com/ch007m/my-idp-packages//tekton
```

When the platform is ready, you should be able to access the Tekton UI at the following address: https://tekton-ui.cnoe.localtest.me:8443/. You can verify that Tekton has been properly installed using the Argo CD console: https://argocd.cnoe.localtest.me:8443/
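You can also verify from the terminal (assuming Tekton lands in its default `tekton-pipelines` namespace):

```bash
kubectl get pods -n tekton-pipelines
```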
Now deploy the different resources that we need to build an application:

```bash
kubectl delete -f https://raw.githubusercontent.com/tektoncd/catalog/refs/heads/main/task/buildpacks-phases/0.3/buildpacks-phases.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/refs/heads/main/task/buildpacks-phases/0.3/buildpacks-phases.yaml
```

Create a dockercfg secret using the gitea registry credentials, and link it to the ServiceAccount that Tekton will use:

```bash
kubectl apply -f k8s/tekton/secret-dockercfg.yml
kubectl apply -f k8s/tekton/sa-with-reg-creds.yml
```

Create a PVC for the build workspace:
```bash
kubectl apply -f k8s/tekton/ws-pvc.yml
```
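For reference, a minimal sketch of what `ws-pvc.yml` can contain; the claim name must match the `claimName: ws-pvc` used by the PipelineRun below, while size and storage class are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ws-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # size is an assumption; adjust to the build's needs
```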
Next, set the following variables:

```bash
IMAGE_NAME=<CONTAINER_REGISTRY>/<ORG>/quarkus-hello
BUILDER_IMAGE=<PAKETO_BUILDER_IMAGE_OR_YOUR_OWN_BUILDER_IMAGE>
```

It is time to create a PipelineRun to build the Quarkus application:
```bash
IMAGE_NAME=my-gitea-http.gitea.svc.cluster.local:3000/giteaadmin/quarkus-hello
CNB_BUILDER_IMAGE=paketobuildpacks/builder-ubi8-base:0.1.1
CNB_INSECURE_REGISTRIES=my-gitea-http.gitea.svc.cluster.local:3000
CNB_LOG_LEVEL=debug

echo "apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: buildpacks
spec:
  workspaces:
    - name: source-ws
  tasks:
    - name: fetch-repository
      taskRef:
        resolver: http
        params:
          - name: url
            value: https://raw.githubusercontent.com/tektoncd/catalog/refs/heads/main/task/git-clone/0.10/git-clone.yaml
      workspaces:
        - name: output
          workspace: source-ws
      params:
        - name: url
          value: https://github.com/quarkusio/quarkus-quickstarts.git
        - name: deleteExisting
          value: 'true'
    - name: buildpacks-phases
      taskRef:
        name: buildpacks-phases
      runAfter:
        - fetch-repository
      workspaces:
        - name: source
          workspace: source-ws
      params:
        - name: APP_IMAGE
          value: $IMAGE_NAME
        - name: SOURCE_SUBPATH
          value: getting-started
        - name: CNB_BUILDER_IMAGE
          value: $CNB_BUILDER_IMAGE
        - name: CNB_INSECURE_REGISTRIES
          value: $CNB_INSECURE_REGISTRIES
        - name: CNB_LOG_LEVEL
          value: $CNB_LOG_LEVEL
        - name: CNB_ENV_VARS
          value:
            - BP_JVM_VERSION=21
---
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: buildpacks
spec:
  taskRunTemplate:
    serviceAccountName: sa-with-creds
  pipelineRef:
    name: buildpacks
  workspaces:
    - name: source-ws
      subPath: source
      persistentVolumeClaim:
        claimName: ws-pvc" | kubectl apply -f -
```

Follow the execution of the pipeline using the dashboard, https://tekton-ui.cnoe.localtest.me:8443/#/namespaces/default/pipelineruns, or using the client: `tkn pipelinerun logs -f`.
When the PipelineRun finishes and no error has been reported, launch the container:

```bash
podman run -i --rm -p 8080:8080 gitea.cnoe.localtest.me:8443/giteaadmin/quarkus-hello
```

See the project documentation for more information: https://github.com/shipwright-io/build

To use Shipwright, it is needed to have a k8s cluster, a container registry, and Tekton installed (>= v0.62).
Next, deploy the 0.15.x release of Shipwright:

```bash
kubectl create -f https://github.com/shipwright-io/build/releases/download/v0.15.6/release.yaml
```

Apply the following hack to create a self-signed certificate on the cluster; otherwise the Shipwright webhook will fail to start:

```bash
curl --silent --location https://raw.githubusercontent.com/shipwright-io/build/v0.15.6/hack/setup-webhook-cert.sh | bash
```

Next, install the Buildpacks BuildStrategy using the following commands:
```bash
kubectl delete -f k8s/shipwright/clusterbuildstrategy.yml
kubectl apply -f k8s/shipwright/clusterbuildstrategy.yml
```

Create a Build CR using as source the Quarkus Getting Started repository:

```bash
kubectl delete -f k8s/shipwright/build.yml
kubectl apply -f k8s/shipwright/build.yml
```
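For reference, a minimal sketch of what such a `build.yml` can look like, assuming the Shipwright `v1beta1` API; the actual file in `k8s/shipwright` may differ (e.g. extra parameters for the certificate ConfigMap):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-quarkus-build      # matches the name shown by 'kubectl get build' below
spec:
  source:
    type: Git
    git:
      url: https://github.com/quarkusio/quarkus-quickstarts.git
    contextDir: getting-started      # assumption: same source as the Tekton scenario
  strategy:
    name: buildpacks                 # the ClusterBuildStrategy installed above
    kind: ClusterBuildStrategy
  output:
    image: gitea.cnoe.localtest.me:8443/giteaadmin/quarkus-hello
```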
To check the Build resource you just created, execute the following command:

```bash
kubectl get build
NAME                      REGISTERED   REASON      BUILDSTRATEGYKIND      BUILDSTRATEGYNAME   CREATIONTIME
buildpack-quarkus-build   True         Succeeded   ClusterBuildStrategy   buildpacks          43s
```

Create a ConfigMap containing the self-signed certificate of the registry:
```bash
kubectl get secret -n default idpbuilder-cert -ojson | jq -r '.data."ca.crt"' | base64 -d > ca.cert
kubectl delete configmap certificate-registry
kubectl create configmap certificate-registry \
    --from-file=ca.cert
```

Create a secret containing the registry credentials, as it is used by the Shipwright ServiceAccount:
```bash
kubectl delete secret dockercfg
kubectl apply -f k8s/shipwright/secret-dockercfg.yml
```
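For reference, a `kubernetes.io/dockerconfigjson` secret of the kind `secret-dockercfg.yml` defines can be sketched as follows; the name and payload are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dockercfg   # assumption: must match the secret referenced by the Shipwright ServiceAccount
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded docker config.json; an equivalent can be generated with:
  #   kubectl create secret docker-registry dockercfg \
  #     --docker-server=gitea.cnoe.localtest.me:8443 \
  #     --docker-username=giteaAdmin --docker-password=developer \
  #     --dry-run=client -o yaml
  .dockerconfigjson: <BASE64_DOCKER_CONFIG_JSON>
```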
To trigger a BuildRun, run:

```bash
kubectl delete buildrun -lbuild.shipwright.io/name=buildpack-quarkus-build
kubectl delete -f k8s/shipwright/pvc.yml
kubectl delete -f k8s/shipwright/sa.yml

kubectl create -f k8s/shipwright/sa.yml
kubectl create -f k8s/shipwright/pvc.yml
kubectl create -f k8s/shipwright/buildrun.yml
```

Wait until your BuildRun is completed, and then you can view it as follows:
```bash
kubectl get buildrun -lbuild.shipwright.io/name=buildpack-quarkus-build
NAME                               SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
buildpack-quarkus-buildrun-fbs84   True        Succeeded   103s        25s
```

When the task is finished and no error is reported, launch the container:

```bash
podman run -i --rm -p 8080:8080 gitea.cnoe.localtest.me:8443/giteaadmin/quarkus-hello
```

Enjoy!
The instructions described hereafter should help to resolve issues when the lifecycle accesses the gitea registry inside a pod.

```bash
BUILDER=paketobuildpacks/builder-ubi8-base:0.1.1
```
echo "Test 1 using patched lifecycle, insecure registry define & CNB_REGISTRY_AUTH - OK"
podman run -it \
-e CNB_PLATFORM_API=0.13 \
-e CNB_REGISTRY_AUTH='{"https://gitea.cnoe.localtest.me:8443": "Basic Z2l0ZWFBZG1pbjpkZXZlbG9wZXI="}' \
--network=host \
$BUILDER \
/cnb/lifecycle/analyzer \
-log-level=debug \
-layers=/layers \
-run-image=paketobuildpacks/run-ubi8-base:0.0.114 \
-uid=1002 \
-gid=1000 \
-insecure-registry=gitea.cnoe.localtest.me:8443 \
gitea.cnoe.localtest.me:8443/giteaadmin/quarkus-hello
```bash
# Test 2: authenticate with a docker config.json mounted into the container instead of CNB_REGISTRY_AUTH
podman run -it \
    -e CNB_PLATFORM_API=0.13 \
    -v $(pwd)/auth.json:/home/cnb/.docker/config.json:ro \
    --network=host \
    $BUILDER \
    /cnb/lifecycle/analyzer \
    -log-level=debug \
    -layers=/layers \
    -run-image=paketobuildpacks/run-ubi8-base:0.0.114 \
    -uid=1002 \
    -gid=1000 \
    -insecure-registry=gitea.cnoe.localtest.me:8443 \
    gitea.cnoe.localtest.me:8443/giteaadmin/quarkus-hello
```

Using a pod:
```bash
kubectl delete secret/dockercfg
kubectl create secret generic dockercfg \
    --from-file=.dockerconfigjson=$(pwd)/.tmp/dockercfg.json \
    --type=kubernetes.io/dockerconfigjson

kubectl delete -f $(pwd)/.tmp/task-pod-1.yaml; kubectl apply -f $(pwd)/.tmp/task-pod-1.yaml
```
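To follow the result (assuming the pod declared in `task-pod-1.yaml` is named `task-pod-1`; adjust to its actual `metadata.name`):

```bash
kubectl wait --for=condition=Ready pod/task-pod-1 --timeout=60s
kubectl logs -f task-pod-1
```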