Customize migration plan for Tomcat servers

Review the migration plan file that results from creating a migration, and customize it before executing the migration. The details of your migration plan are used to extract the workload's container artifacts from the source.

This section describes the contents of the migration plan and the kinds of customizations you might consider before you execute the migration and generate deployment artifacts.

Before you begin

This topic assumes that you've already created a migration and have the migration plan file.

Edit the migration plan

After you have copied the file system and analyzed it, you can find the migration plan in the new directory that is created in the specified output path: ANALYSIS_OUTPUT_PATH/config.yaml.

Edit the migration plan as necessary and save the changes.

Review your migration plan's details and guiding comments to add information as needed. Specifically, consider edits around the following sections.

Specify the Docker image

In the migration plan, Migrate to Containers generates a Docker community image tag based on the Tomcat version, Java version, and Java vendor:

  • Tomcat version: the Tomcat version is detected and converted to a major version (minor versions are not supported). If a Tomcat version can't be detected, then baseImage.name contains an empty string.
  • Java version: the Java version depends on the java-version parameter. It is set to 11 by default.
  • Java vendor: the Java vendor is set to a constant: temurin.

For example, the Docker community image tag generated for Tomcat version 9.0, Java version 11, and Java vendor temurin is tomcat:9.0-jre11-temurin.

In the migration plan, the baseImage.name field represents the Docker image tag used as the base of the container image.

The original Tomcat and Java versions detected on the source VM are contained in the discovery-report.yaml file that is generated by the initial discovery.

If you want to change the Docker community image, or provide your own Docker image, you can modify the baseImage.name in your migration plan using the following format:

tomcatServers:
    - name: latest
      . . .
      images:
        - name: tomcat-latest
          . . .
          baseImage:
            name: BASE_IMAGE_NAME

Replace BASE_IMAGE_NAME with the Docker image that you want to use as the base of the container image.
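
For example, to use the community image for Tomcat version 9.0, Java version 11, and Java vendor temurin (the tag shown earlier in this section), you would set:

tomcatServers:
    - name: latest
      . . .
      images:
        - name: tomcat-latest
          . . .
          baseImage:
            name: tomcat:9.0-jre11-temurin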

Update Tomcat installation path

During the migration process, if your target image has a non-default CATALINA_HOME path, you can specify a custom CATALINA_HOME path. Edit the target catalinaHome field directly in your migration plan:

tomcatServers:
  - name: latest
    . . .
    images:
      - name: tomcat-latest
        . . .
        baseImage:
          name: BASE_IMAGE_NAME
          catalinaHome: PATH

Replace PATH with the Tomcat installation path.
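
For example, if Tomcat is installed in your base image under a hypothetical /opt/tomcat/latest path (BASE_IMAGE_NAME stays as your chosen base image), the relevant fields would look like:

tomcatServers:
  - name: latest
    . . .
    images:
      - name: tomcat-latest
        . . .
        baseImage:
          name: BASE_IMAGE_NAME
          catalinaHome: /opt/tomcat/latest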

Customize user and group

During the migration process, if your target image runs with a different user and group than root:root, then you can specify a custom user and group under which you want the application to run. Edit the user and group directly in your migration plan:

tomcatServers:
  - name: latest
    . . .
    images:
      - name: tomcat-latest
        . . .
        userName: USERNAME
        groupName: GROUP_NAME

Replace the following:

  • USERNAME: the username that you want to use
  • GROUP_NAME: the group name that you want to use
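
For example, to run the application as a hypothetical tomcat user and group (the user and group must already exist in the base image), you would set:

tomcatServers:
  - name: latest
    . . .
    images:
      - name: tomcat-latest
        . . .
        userName: tomcat
        groupName: tomcat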

Configure SSL

When you create a new Tomcat migration, a discovery process scans the server and the different applications it discovers.

After discovery, the following fields are automatically populated in the migration plan:

  • excludeFiles: lists the files and directories to exclude from the migration.

    By default, all the sensitive paths and certificates located during discovery are automatically added to this field and are excluded from the migration. If you remove a path from this list, then the file or directory is uploaded to the container image. To exclude a file or directory from the container image, add the path to this list.

  • sensitiveDataPaths: lists all the sensitive paths and certificates located by the discovery process.

    To upload the certificates to the repository, set the includeSensitiveData field to true:

    # Sensitive data which will be filtered out of the container image.
    # If includeSensitiveData is set to true the sensitive data will be mounted on the container.
    
    includeSensitiveData: true
    tomcatServers:
    - name: latest
      catalinaBase: /opt/tomcat/latest
      catalinaHome: /opt/tomcat/latest
      # Exclude files from migration.
      excludeFiles:
      - /usr/local/ssl/server.pem
      - /usr/home/tomcat/keystore
      - /usr/home/tomcat/truststore
      images:
      - name: tomcat-latest
        ...
        # If includeSensitiveData is set to true, sensitive data specified in sensitiveDataPaths will be uploaded to the artifacts repository.
        sensitiveDataPaths:
        - /usr/local/ssl/server.pem
        - /usr/home/tomcat/keystore
        - /usr/home/tomcat/truststore
    

    After the migration is complete, the secrets are added to the secrets file secrets.yaml in the artifacts repository.

Webapps logging

Migrate to Containers supports logging with log4j v2, logback, and log4j v1.x when the logging configuration resides in CATALINA_HOME.

Migrate to Containers creates an additional archive file with modified log configurations, converting all file appenders to console appenders. You can use the content of this archive as a reference to enable log collection and streaming to a log collection solution (such as Google Cloud Logging).
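
As an illustration of that conversion, a file appender in a log4j v2 configuration would be replaced by an equivalent console appender in the modified archive. This is a minimal sketch; the exact rewritten output depends on your original configuration:

<!-- Before: file appender writing to a local log file -->
<Appenders>
  <File name="AppLog" fileName="logs/app.log">
    <PatternLayout pattern="%d %-5p [%t] %c - %m%n"/>
  </File>
</Appenders>

<!-- After: console appender with the same name and layout, so log lines go to stdout for collection -->
<Appenders>
  <Console name="AppLog" target="SYSTEM_OUT">
    <PatternLayout pattern="%d %-5p [%t] %c - %m%n"/>
  </Console>
</Appenders>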

Memory allocation

During the migration process, you can specify the memory limits of applications migrated to individual containers. Edit the memory limits directly in your migration plan using the following format:

tomcatServers:
    - name: latest
      . . .
      images:
        - name: tomcat-latest
          . . .
          resources:
            memory:
              limit: 2048M
              requests: 1280M

The recommended value for limit is 200% of Xmx, which is the maximum Java heap size. The recommended value for requests is 150% of Xmx.

To see the value of Xmx, run the following command:

ps aux | grep catalina
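
In the output, look for the -Xmx flag on the Java command line. A hypothetical, abridged example:

tomcat  1234  ...  /usr/bin/java ... -Xms512M -Xmx1024M ... org.apache.catalina.startup.Bootstrap start

Here Xmx is 1024M, so the recommended limit is 2048M (200%) and the recommended requests is 1536M (150%).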

If memory limits have been defined in your migration plan, the Dockerfile that was generated alongside other artifacts after a successful migration reflects your declaration:

FROM tomcat:8.5-jdk11-openjdk

# Add JVM environment variables for tomcat
ENV CATALINA_OPTS="${CATALINA_OPTS} -XX:MaxRAMPercentage=50.0 -XX:InitialRAMPercentage=50.0 -XX:+UseContainerSupport <additional variables>"

This sets the initial and maximum Java heap size to 50% of the limit value, which lets the Tomcat Java heap allocation change according to any change to the Pod's memory limit.
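
For example, with limit set to 2048M, the heap starts at 1024M and can grow to at most 1024M; if you later raise the Pod memory limit to 4096M, the maximum heap becomes 2048M without rebuilding the image.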

Set Tomcat environment variables

If you would like to set CATALINA_OPTS in the Dockerfile that is generated alongside other artifacts after a successful migration, first add the options to the catalinaOpts field in your migration plan. The following example shows an updated catalinaOpts field:

tomcatServers:
    - name: latest
      . . .
      images:
        - name: tomcat-latest
          . . .
          resources:
            . . .
          catalinaOpts: "-Xss10M"

Migrate to Containers adds your catalinaOpts data to the CATALINA_OPTS environment variable in your Dockerfile. The following example shows the output:

FROM tomcat:8.5-jdk11-openjdk-slim

## setenv.sh script detected.
## Modify env variables on the script and add definitions to the migrated
## tomcat server, if needed (less recommended than adding env variables directly
## to CATALINA_OPTS) by uncommenting the line below
#ADD --chown=root:root setenv.sh /usr/local/tomcat/bin/setenv.sh

# Add JVM environment variables for the tomcat server
ENV CATALINA_OPTS="${CATALINA_OPTS} -XX:MaxRAMPercentage=50.0 -XX:InitialRAMPercentage=50.0 -Xss10M"

You can also set Tomcat environment variables using the setenv.sh script, which is located in the /bin folder on your Tomcat server. For more information about Tomcat environment variables, see the Tomcat documentation.

If you choose to use setenv.sh for setting your Tomcat environment variables, then you need to edit the Dockerfile.
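
For example, if a setenv.sh script was detected, uncommenting the corresponding line in the generated Dockerfile adds the script to the image:

ADD --chown=root:root setenv.sh /usr/local/tomcat/bin/setenv.sh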

Set Tomcat health probes

You can monitor the downtime and ready status of your managed containers by specifying probes in your Tomcat web server migration plan. Health probe monitoring can help reduce the downtime of migrated containers and provide better monitoring.

Unknown health states can cause availability degradation, false-positive availability monitoring, and potential data loss. Without a health probe, kubelet can only assume the health of a container and might send traffic to a container instance that is not ready, causing traffic loss. Kubelet might also fail to detect containers that are in a frozen state and won't restart them.

A health probe functions by running a small scripted statement when the container starts. The script checks for successful conditions, which are defined by the type of probe used, every period. The period is defined in the migration plan by a periodSeconds field. You can run or define these scripts manually.

To learn more about kubelet probes, see Configure Liveness, Readiness and Startup Probes in the Kubernetes documentation.

There are two types of probes available to configure. Both are defined in the probe-v1-core reference and share the same function as the corresponding fields of container-v1-core.

  • Liveness probe: Liveness probes are used to know when to restart a container.

  • Readiness probe: Readiness probes are used to know when a container is ready to start accepting traffic. To start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. A readiness probe might act similarly to a liveness probe, but a readiness probe in the specification indicates that a Pod starts without receiving any traffic and only starts receiving traffic after the probe succeeds.

After discovery, the probe configuration is added to the migration plan. The probes can be used in their default configuration as shown in the following example. To turn off the probes, remove the probes section from the YAML file.

tomcatServers:
- name: latest
  images:
  - name: tomcat-latest
    ports:
    - 8080
    probes:
      livenessProbe:
        tcpSocket:
          port: 8080
      readinessProbe:
        tcpSocket:
          port: 8080

You can change this migration plan to use an existing Tomcat HTTP endpoint.

tomcatServers:
- name: latest
  images:
  - name: tomcat-latest
    ports:
    - 8080
    probes:
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        initialDelaySeconds: 3
        periodSeconds: 3
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080

There are four predefined ways to check a container using a probe. Each probe must define exactly one of these four mechanisms:

  • exec: Executes a specified command inside the container. Execution is considered successful if the exit status code is 0.
  • grpc: Performs a remote procedure call using gRPC. gRPC probes are an alpha feature.
  • httpGet: Performs an HTTP GET request against the Pod's IP address on a specified port and path. The request is considered successful if the status code is greater than or equal to 200 and less than 400.
  • tcpSocket: Performs a TCP check against the Pod's IP address on a specified port. The diagnostic is considered successful if the port is open.

By default, a migration plan enables the tcpSocket probing method. Use the manual configuration of your migration plan to use a different probing method. You can also define custom commands and scripts through the migration plan.

To add external dependencies for the readiness probe, while using the default liveness probe, define an exec readiness probe and a script that implements the logic.
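
The following is a minimal sketch of such a configuration, assuming a hypothetical /scripts/check-dependencies.sh script that you provide to verify external dependencies (for example, a database connection):

tomcatServers:
- name: latest
  images:
  - name: tomcat-latest
    ports:
    - 8080
    probes:
      livenessProbe:
        tcpSocket:
          port: 8080
      readinessProbe:
        exec:
          command:
          - /bin/sh
          - -c
          # Hypothetical script; returns exit code 0 when dependencies are reachable.
          - /scripts/check-dependencies.sh
        periodSeconds: 10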

Verify Tomcat clustering configuration

Tomcat clustering is used to replicate session information across all Tomcat nodes, which lets you call any of the backend application servers without losing client session information. To learn more about clustering configuration, see Clustering/Session Replication How-To in the Tomcat documentation.

The default Tomcat clustering class, SimpleTcpCluster, uses multicast heartbeat messages for peer discovery. Cloud environments don't support multicast, so you must change the clustering configuration or remove it, where possible.

When a load balancer is configured to run and maintain a sticky session to the backend Tomcat server, it can use the jvmRoute property configured in the server.xml Engine configuration.

If your source environment is using an unsupported clustering configuration, modify the server.xml file to either disable the configuration, or use a supported configuration.

  • Tomcat v8.5 or below: Clustering must be disabled on Tomcat version 8.5 or below. To disable clustering, comment out the <Cluster … /> section in server.xml.
  • Tomcat v9 or above: In Tomcat version 9 or later, you can enable the Cluster class using the KubernetesMembershipProvider. KubernetesMembershipProvider is a Kubernetes-specific class, which uses the Kubernetes APIs for peer discovery.

    • Kubernetes provider: The following is a sample configuration for a Kubernetes provider:

      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
        <Channel className="org.apache.catalina.tribes.group.GroupChannel">
          <Membership className="org.apache.catalina.tribes.membership.cloud.CloudMembershipService"
              membershipProviderClassName="org.apache.catalina.tribes.membership.cloud.KubernetesMembershipProvider"/>
        </Channel>
      </Cluster>
      
    • DNS provider: Use the DNSMembershipProvider to use the DNS APIs for peer discovery. The following is a sample configuration for a DNS provider:

      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
        <Channel className="org.apache.catalina.tribes.group.GroupChannel">
          <Membership className="org.apache.catalina.tribes.membership.cloud.CloudMembershipService"
              membershipProviderClassName="org.apache.catalina.tribes.membership.cloud.DNSMembershipProvider"/>
        </Channel>
      </Cluster>
      
    • jvmRoute: When your load balancer relies on a jvmRoute value, change the value from static to the Pod name. This configures Tomcat to add a suffix to the session cookie with the Pod name, which the frontend load balancer can use to direct traffic to the correct Tomcat Pod. Change the value in the server.xml file to use the following:

      <Engine name="Catalina" defaultHost="localhost" jvmRoute="${HOSTNAME}">
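
      In Kubernetes, the HOSTNAME environment variable is set to the Pod name by default, so each Tomcat instance advertises a unique jvmRoute.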
      

Verify Tomcat proxy configuration

If Tomcat is configured to run behind a reverse proxy, or uses several proxy configuration settings in the Connector section of server.xml, you must verify that the same proxy configurations are still applicable after migrating to run in Kubernetes.

To run a functional containerized Tomcat application, choose one of the following configuration changes to the reverse proxy configuration:

  • Disable proxy configuration: If the migrated application no longer runs behind a reverse proxy, you can disable proxy configuration by removing proxyName and proxyPort from the connector configuration.
  • Migrate the proxy: Migrate the proxy VM and retain all the existing Tomcat configurations.
  • Configure Ingress to replace the reverse proxy: If no custom or sophisticated logic has been implemented for your reverse proxy, you can configure an Ingress resource to expose your migrated Tomcat application. This process uses the same FQDN that was used prior to the migration. To configure an Ingress, you must verify that your Tomcat Kubernetes Service is of type: NodePort (a sample Service manifest follows this list). The following is a sample configuration of an Ingress:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-tomcat-ingress
    spec:
      rules:
      - host: APP_DOMAIN
        http:
          paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: my-tomcat-service
                port:
                  name: my-tomcat-port
    
  • Configure Cloud Service Mesh with Cloud Load Balancing: If you are using GKE Enterprise, you can choose to expose your application using Cloud Service Mesh. To learn more about exposing service mesh applications, see From edge to mesh: Exposing service mesh applications through GKE Ingress.
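
The following is a minimal sketch of a matching Service of type NodePort, using the my-tomcat-service and my-tomcat-port names referenced by the preceding Ingress sample; the app label is hypothetical and must match your Deployment's Pod labels:

apiVersion: v1
kind: Service
metadata:
  name: my-tomcat-service
spec:
  type: NodePort
  selector:
    # Hypothetical label; must match the Pods created for your migrated workload.
    app: my-tomcat-app
  ports:
  - name: my-tomcat-port
    port: 8080
    targetPort: 8080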

Verify Java proxy configuration

When migrating to containers, you must verify the availability of your proxy servers in your new environment. If a proxy server is not available, choose one of the following configuration changes to the proxy:

  • Disable proxying: If the original proxy is no longer in use, then disable the proxy configuration. Remove all the proxy settings from either the setenv.sh script, or from the configuration file where they are maintained on your Tomcat server.
  • Change proxy settings: If your Kubernetes environment uses a different egress proxy, you can change your proxy settings in setenv.sh, or another configuration file, to point at the new proxy, as shown in the sketch after this list.
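
JVM proxy settings are commonly passed as standard Java system properties. A minimal sketch of updated settings in setenv.sh, assuming a hypothetical new-proxy.example.com egress proxy on port 3128, might look like:

# Point the JVM at the new egress proxy for HTTP and HTTPS traffic.
CATALINA_OPTS="${CATALINA_OPTS} -Dhttp.proxyHost=new-proxy.example.com -Dhttp.proxyPort=3128 -Dhttps.proxyHost=new-proxy.example.com -Dhttps.proxyPort=3128"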

What's next