Using the migration toolkit for applications command-line interface
Preparing your applications for modernization and migration by using the migration toolkit for applications command-line interface
Abstract
- Making open source more inclusive
- 1. Introduction to the MTA command-line interface
- 2. Supported migration toolkit for applications migration paths
- 3. Analyzing Java applications with MTA command-line interface
- 4. Analyzing applications written in languages other than Java with MTA command-line interface
- 5. Analyzing applications by using profiles from the MTA Hub
- 6. Reviewing an analysis report
- 7. Performing a transformation with the MTA command-line interface
- 8. Generating platform assets for application deployment
- 9. MTA CLI known issues
- A. Reference material
- B. How to contribute to the MTA project
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Introduction to the MTA command-line interface
The migration toolkit for applications (MTA) command-line interface (CLI) provides a comprehensive set of rules to assess the suitability of your applications for containerization and deployment on Red Hat OpenShift. By using the MTA CLI, you can assess and prioritize migration and modernization efforts for applications written in different languages.
The MTA CLI generates detailed reports that highlight the analysis results without requiring other MTA tools. You can use the MTA CLI to customize analysis options or to integrate with external automation tools.
You can use MTA to analyze applications written in the following languages:
- Java
- Go
- .NET
- Node.js
- Python
Analyzing applications written in the .NET language is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Analyzing applications written in the Python and Node.js languages is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Chapter 2. Supported migration toolkit for applications migration paths
You can use the migration toolkit for applications (MTA) to assess your applications' suitability for migration to multiple target platforms. Review the supported migration paths to verify that your planned migration uses a valid combination of source and target technologies. Adhering to these paths helps ensure that MTA can successfully analyze and migrate your applications.
Table 2.1. Supported Java migration paths
| Source platform ⇒ | Migration to JBoss EAP 7 & 8 | OpenShift (cloud readiness) | OpenJDK 11, 17, and 21 | Jakarta EE 9 | Camel 3 & 4 | Spring Boot in Red Hat Runtimes | Quarkus | Open Liberty |
|---|---|---|---|---|---|---|---|---|
| Oracle WebLogic Server | ✔ | ✔ | ✔ | - | - | - | - | - |
| IBM WebSphere Application Server | ✔ | ✔ | ✔ | - | - | - | - | ✔ |
| JBoss EAP 4 | ✘ [a] | ✔ | ✔ | - | - | - | - | - |
| JBoss EAP 5 | ✔ | ✔ | ✔ | - | - | - | - | - |
| JBoss EAP 6 | ✔ | ✔ | ✔ | - | - | - | - | - |
| JBoss EAP 7 | ✔ | ✔ | ✔ | - | - | - | ✔ | - |
| Thorntail | ✔ [b] | - | - | - | - | - | - | - |
| Oracle JDK | - | ✔ | ✔ | - | - | - | - | - |
| Camel 2 | - | ✔ | ✔ | - | ✔ | - | - | - |
| Spring Boot | - | ✔ | ✔ | ✔ | - | ✔ | ✔ | - |
| Any Java application | - | ✔ | ✔ | - | - | - | - | - |
| Any Java EE application | - | - | - | ✔ | - | - | - | - |

[a] Although MTA does not currently provide rules for this migration path, Red Hat Consulting can assist with migration from any source platform to JBoss EAP 7.
[b] Requires JBoss EAP expansion pack 2 (JBoss EAP XP 2).
Table 2.2. Supported .NET migration paths

| Source platform ⇒ | OpenShift (cloud readiness) | Migration to .NET 8.0 |
|---|---|---|
| .NET Framework 4.5+ (Windows only) | ✔ | ✔ |
Analyzing applications written in the .NET language is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Additional resources
Chapter 3. Analyzing Java applications with MTA command-line interface
To assess and prioritize migration and modernization efforts for applications written in different languages, analyze your applications by using the migration toolkit for applications (MTA) CLI.
The MTA CLI supports source code and binary analysis by using `analyzer-lsp`, a tool that evaluates rules by using language providers.
Depending on your scenario, you can use the MTA CLI to perform the following actions:
- Run the analysis against a single application.
- Run the analysis against multiple applications:
  - In MTA versions earlier than 7.1.0, you can enter a series of `analyze` commands, each against an application and each generating a separate report.
  - In MTA version 7.1.0 and later, you can use the `--bulk` option to analyze multiple applications at once and generate a single report. Note that this feature is a Developer Preview feature only.
- Run the analysis for Java applications in containerless mode. Note that this option is set by default and is used automatically only if all requirements are met.
However, if you want to analyze applications in languages other than Java or, for example, use transformation commands, you still need to use containers.
In a disconnected environment, the analysis output usually contains fewer incidents because dependency analysis does not run accurately without access to Maven.
3.1. Analyzing a single application
To assess the effort required to migrate a specific application, you can analyze this application individually by using the migration toolkit for applications (MTA) command-line interface (CLI).
The analysis generates a report containing the details of the application migration effort, including potential migration issues. You can use the report to prioritize tasks and estimate the resources needed for the migration.
Extracting the list of dependencies from compiled Java binaries is not always possible during the analysis, especially if the dependencies are not embedded within the binary.
Procedure
- Optional: List available target technologies for an analysis:

  $ mta-cli analyze --list-targets

- Run the analysis:

  $ mta-cli analyze --input <path_to_input> --output <path_to_output> --source <source_name> --target <target_name>

  Specify the following arguments:
  - `--input`: An application to be evaluated.
  - `--output`: An output directory for the generated reports. `mta-cli analyze` creates the following analysis reports:
    - analysis.log
    - dependencies.yaml
    - output.yaml
    - shim.log
    - static-report
    - static-report.log
  - `--source`: A source technology for the application migration, for example, `weblogic`.
  - `--target`: A target technology for the application migration, for example, `eap8`.
- Access the generated analysis report:
  - In the output of the `mta-cli analyze` command, copy the path to the `index.html` analysis report file:

    Report created: <output_report_directory>/index.html
    Access it at this URL: file:///<output_report_directory>/index.html

  - Paste the path into the browser of your choice. Alternatively, press Ctrl and click the path to the report file.
Additional resources
3.2. Analyzing multiple applications
To efficiently assess the migration effort for a portfolio of applications, you can analyze multiple applications simultaneously by using the migration toolkit for applications (MTA) command-line interface (CLI). This bulk analysis generates reports for all specified applications. You can use the reports to identify issues and prioritize tasks for large-scale migration projects.
Analyzing multiple applications is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Procedure
- Run the analysis for multiple applications.

  Important: You must enter one input per analyze command, but make sure to enter the same output directory for all inputs.

  For example, to analyze example applications A, B, and C, enter the following commands:
  - For input A, enter:

    $ mta-cli analyze --bulk --input <path_to_input_A> --output <path_to_output_ABC> --source <source_A> --target <target_A>

  - For input B, enter:

    $ mta-cli analyze --bulk --input <path_to_input_B> --output <path_to_output_ABC> --source <source_B> --target <target_B>

  - For input C, enter:

    $ mta-cli analyze --bulk --input <path_to_input_C> --output <path_to_output_ABC> --source <source_C> --target <target_C>

- Access the analysis report. MTA generates a single report, listing all issues that must be resolved before the applications can be migrated.
Additional resources
3.3. Analyzing an application in containerless mode
To analyze applications in environments where a container engine is not available or desired, you can run the migration toolkit for applications (MTA) command-line interface (CLI) in containerless mode. This way, you can execute assessments directly on your local machine by using the provided shell script, bypassing the need for Podman or Docker.
In MTA 7.2.0 and later, containerless CLI is the default mode. To enable container runtime usage for the analysis of Java applications, you must set the `--run-local` flag to `false`:
--run-local=false
The analysis for other applications runs in the container mode automatically.
Prerequisites
- You installed the MTA CLI. For more information, see Installing the CLI by using a .zip file.
- You installed Java Development Kit (JDK) version 17 or later.
- If you use OpenJDK on Red Hat Enterprise Linux (RHEL) or Fedora, you installed the Java `devel` package.
- You installed Maven version 3.9.9 or later.

  The CLI assumes that the path to the `mvn` binary is correctly registered in the system variable. Therefore, ensure that you added `mvn` to the following variable:
  - `Path` for Windows.
  - `PATH` for Linux and macOS.
- You set the `JAVA_HOME` environment variable.
- You set the `JVM_MAX_MEM` system variable.

  Note: If you do not set `JVM_MAX_MEM`, the analysis might hang because Java might require more memory than the default `JVM_MAX_MEM` value.
- For Gradle analysis:
  - You installed OpenJDK version 8.
  - You set `$JAVA8_HOME`, and it points to the OpenJDK 8 home directory.
  - Your project has a Gradle wrapper.
Procedure
- Optional: Display all `mta-cli analyze` command options:

  $ mta-cli analyze --help

- Run the application analysis:

  $ mta-cli analyze --overwrite --input <path_to_input> --output <path_to_output> --target <target_source>

  Note: The `--overwrite` option overwrites the output folder if it exists.
Additional resources
3.4. The analyze command options
You can customize the application analysis process with migration toolkit for applications (MTA) command-line interface (CLI) by using the mta-cli analyze command options. You can use these options to specify input sources, define target migration paths, configure output directories, and perform other adjustments.
Table 3.1. mta-cli analyze command options
| Option | Description |
|---|---|
|
|
Analyze open-source libraries. |
|
|
When you disable Maven search, MTA first tries to determine dependencies from the JAR file’s POM file, if any. If this method does not succeed, MTA goes through the directory structure to determine dependencies. This method might not produce a reliable dependency classification because the package structure can differ from what MTA expects. You might see a higher number of incidents because some dependencies might be wrongly classified as internal.
By default, |
|
|
The number of lines of source code to include in the output for each incident. The default is 100. |
|
|
A directory for dependencies. |
|
|
Run default rulesets with analysis. The default is |
|
|
Display the available flags for the |
|
|
An HTTP proxy string URL. |
|
|
An HTTPS proxy string URL. |
|
|
An expression to select incidents based on custom variables, for example: !package=io.demo.config-utils |
|
|
A path to the application source code or a binary. |
|
|
A Jaeger endpoint to collect traces. |
|
|
Create analysis and dependence output as a JSON file. |
|
|
Run rules based on specified label selector expression. |
|
|
List all languages in the source application. This flag is not supported for binary applications. |
|
|
List available supported providers. |
|
|
List rules for available migration sources. |
|
|
List rules for available migration targets. |
|
|
A path to the custom Maven settings file to use. |
|
|
An analysis mode. Must be set to either of the following values:
|
|
|
Proxy-excluded URLs (relevant only with proxy). |
|
|
A path to the directory for analysis output. |
|
|
Overwrite the output directory. |
|
|
A filename or directory that contains rule files. |
|
|
Enable or disable container runtime usage for Java applications. For example, to enable container runtime, set |
|
|
Do not generate the static report. |
|
|
A source technology to consider for the analysis. To specify multiple sources, repeat the parameter, for example: --source <source_1> --source <source_2> ... |
|
|
A target technology to consider for the analysis. To specify multiple targets, repeat the parameter, for example: --target <target_1> --target <target_2> ... |
|
|
A log level. The default is 4. |
|
|
Do not clean up temporary resources. |
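As a sketch of how these options combine, the following snippet assembles a hypothetical `analyze` invocation from the flags described above. The input path, source, and targets are placeholder examples, not recommendations:

```shell
# Compose an analyze command step by step; all values are hypothetical examples.
CMD="mta-cli analyze"
CMD="$CMD --input ./my-app --output ./my-app-report"
CMD="$CMD --source weblogic"
CMD="$CMD --target eap8 --target cloud-readiness"    # repeat --target for multiple targets
CMD="$CMD --mode source-only --json-output --overwrite"

echo "$CMD"    # review the full command before running it with: eval "$CMD"
```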
Chapter 4. Analyzing applications written in languages other than Java with MTA command-line interface
To assess the migration effort for applications written in languages other than Java, you can analyze your source code by using the migration toolkit for applications (MTA) command-line interface (CLI). The analysis identifies dependencies and potential migration issues, helping you prioritize tasks and estimate the resources required to modernize your non-Java portfolio.
You can perform the analysis in either of the following ways:
- Select a supported language provider to run the analysis for.
- Overwrite the existing supported language provider with your own unsupported language provider, and then run the analysis on it.
Analyzing applications written in languages other than Java is only possible in container mode. You can use the containerless CLI only for Java applications.
4.1. Analyzing an application for the selected supported language provider
To assess the effort required to migrate non-Java applications, analyze your source code by using the migration toolkit for applications (MTA) command-line interface (CLI).
You can select a language provider to analyze from the list of supported providers. The analysis generates a report containing the details of the application migration effort, including potential migration issues. You can use the report to prioritize tasks and estimate the resources needed for the migration.
Prerequisites
- You have the latest version of MTA CLI installed on your system.
Procedure
- List language providers supported for the analysis:

  $ mta-cli analyze --list-providers

- Run the application analysis for the selected language provider:

  $ mta-cli analyze --input <path_to_input> --output <path_to_output> --provider <language_provider> --rules <path_to_custom_rules>

  Important: If you do not set the `--provider` option, the analysis might fail because it detects unsupported providers. The analysis completes without `--provider` only if all discovered providers are supported.
4.2. Analyzing an application for an unsupported language provider
To estimate the migration effort for applications built with technologies not natively supported by the migration toolkit for applications (MTA) command-line interface, analyze the source code. The analysis generates a report containing the details of the application migration effort, including potential migration issues. You can use the report to prioritize tasks and estimate the resources needed for the migration.
To run the analysis for an unsupported language provider, you must overwrite the existing supported language provider with your own unsupported language provider.
You must create a configuration file for your unsupported language provider before overriding the supported provider.
Prerequisites
You created a configuration file for your unsupported language provider, for example:

  [
    {
      "name": "java",
      "address": "localhost:14651",
      "initConfig": [{
        "location": "<java-app-path>",
        "providerSpecificConfig": {
          "bundles": "<bundle-path>",
          "jvmMaxMem": "2G"
        },
        "analysisMode": "source-only"
      }]
    }
  ]
Procedure
Override an existing supported language provider with your unsupported provider and run the analysis:
$ mta-cli analyze --provider-override <path_to_configuration_file> --output <path_to_output> --rules <path_to_custom_rules>
4.3. Additional resources
Chapter 5. Analyzing applications by using profiles from the MTA Hub
As Migrators, you can use profiles to run an application analysis in the MTA CLI. Profiles provide a standardized analysis configuration, an analysis scope, and custom rules that you can reuse for multiple analyses of a local application.
Before you run an analysis, you can use the MTA CLI to download the latest configuration through a secure or insecure connection to the Hub.
The MTA CLI syncs with the Hub to download the following files:
- Profiles in the `.konveyor/profiles` directory in your local application
- Custom rules in the `.konveyor/profiles/<profile-name>/rules` directory
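The downloaded layout described above can be sketched as follows. The application path and profile name are hypothetical; `mta-cli config sync` creates this structure for you:

```shell
APP=./my-app          # hypothetical local application path
PROFILE="Profile-1"   # hypothetical profile name from the Hub

# Recreate the documented directory layout that a sync produces:
mkdir -p "$APP/.konveyor/profiles/$PROFILE/rules"

find "$APP/.konveyor" -type d | sort
```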
For an analysis that you run after syncing with the Hub, the MTA CLI uses the downloaded profile and custom rule configurations.
Analysis Profile is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
5.1. Analyzing an application by using a profile
Migrators can use analysis profiles and custom rules downloaded from the MTA Hub to run one or more analyses. You can override the value of a field in the analysis profile by entering an alternative value for the field in the analysis command. If you want to run an analysis without using the downloaded profile, delete the .konveyor directory within your application where MTA downloaded the profiles and rules.
Prerequisites
- Your administrator installed MTA and the `tackle` operator in your cluster.
- The architect configured an analysis profile with required custom rules in the MTA user interface.
- The architect added the analysis profile in the application’s target profile in the MTA user interface.
- The application you want to sync is available in your local system.
Procedure
- Log in to the MTA Hub with the following command. Enter the Host, Username, and Password to connect to the Hub:

  $ mta-cli config login
  Host: https://mta-namespace.apps.cluster.example.com/hub
  Username: <my_user_name>
  Password: <my_password>

  Note:
  - The Host is the `tackle` URL of MTA deployed in your cluster that exposes the `hub` endpoint.
  - The Username and Password are the `keycloak` credentials that you use to log in to MTA in your cluster.
- Sync the application with its remote repository and download the profile and custom rules files from the Hub:

  $ mta-cli config sync --url https://github.com/<my-app-repository> --application-path <path/to/my-app> --insecure

  Note: MTA downloads the profile and custom rule files to the `.konveyor/profiles/` path in your local application.
- Check the path where MTA downloaded the configuration:

  $ mta-cli config list --profile-dir <path/to/my_app>

- Run an analysis by using the downloaded profile configuration:

  $ mta-cli analyze -i <path/to/my_app> -o <path/to/my_app_report> --overwrite --mode source-only

  Tip: For Java applications, you can use the `full` mode while running an analysis. For non-Java applications, the analysis mode must be `source-only`.

  Note: The following command overrides the value of `target` in the analysis profile with `quarkus`:

  $ mta-cli analyze -i <path/to/my_app> -o <path/to/my_app_report> --overwrite --target quarkus --mode source-only
Additional resources
5.2. The config command options
You can connect to the migration toolkit for applications (MTA) Hub and download profile bundles from the command-line interface (CLI) by using the mta-cli config command options.
Table 5.1. mta-cli config command options
| Option | Flag | Description |
|---|---|---|
|
|
Connect to the MTA Hub. | |
|
|
Skip TLS certificate verification. | |
|
|
Lists the profiles downloaded and stored locally in the application. | |
|
|
Specify the path to the application in your system to find the path where MTA downloaded the profile directory. | |
|
|
Use the command to enter the
The | |
|
|
Sync and download the application profile currently stored in the MTA Hub. | |
|
|
Specify the URL of the application repository. |
5.3. A sample profile configuration
After Migrators download the analysis profile from the Hub, they can modify the configuration before running an analysis.
When you sync the profile, the MTA CLI overrides the modified configuration by downloading the latest configuration from the Hub to your local application.
id: 1
createUser: admin
createTime: 2025-11-12T23:57:35
name: Profile-1
mode:
withDeps: true
scope:
withKnownLibs: true
packages:
included:
- one
- two
excluded:
- three
- four
rules:
targets:
- id: 1
name: Application server migration
- id: 2
name: Containerization
labels:
included:
- konveyor.io/target=spring6
- konveyor.io/source=springboot
- konveyor.io/target=quarkus
excluded:
- C
- D
files:
- id: 400
name: ""
repository:
kind: git
url: <url>
branch: ""
tag: ""
path: default/generated

where
- `id`: specifies the unique identifier of the analysis profile.
- `createUser`: specifies the user who created the analysis profile in the Hub.
- `createTime`: specifies the date and time when the user created the profile.
- `name`: specifies the name of the analysis profile.
- `withDeps`: specifies, as a boolean value, whether MTA includes dependencies in the analysis.
- `withKnownLibs`: specifies whether MTA includes known open-source libraries in the analysis.
- `packages`: specifies the packages that are included in and excluded from the analysis.
- `targets`: specifies the unique ID and name of each migration target in the analysis profile.
- `labels`: specifies the labels that filter the rules that are applied and excluded in the analysis.
- `files`: specifies the unique identifier and name of a custom rule in a file. The MTA CLI uses the file directly in an analysis.
- `repository`: specifies the details of an external ruleset repository, for example, the `kind` (`git` or `subversion`), `url`, `branch`, `tag`, and the `path` within the repository to the rulesets.
Chapter 6. Reviewing an analysis report
To assess application portability and estimate the effort required for modernization, review the analysis reports generated by the migration toolkit for applications (MTA) command-line interface (CLI). These reports provide detailed insights into dependencies and potential migration issues, helping you prioritize tasks and verify your migration path.
6.1. Accessing an analysis report
To view the results of the source code analysis, access the HTML report generated by the migration toolkit for applications (MTA) command-line interface (CLI). The report includes details on application dependencies and potential migration issues, helping you evaluate the scope of the modernization effort.
You can access the analysis report from the output directory that you specified by using the --output option in the command line.
Procedure
Copy the path of the
index.htmlfile from the analysis output and paste it in a browser of your choice:Report created: <output_report_directory>/index.html Access it at this URL: file:///<output_report_directory>/index.html
Alternatively, press Ctrl and click on the path of the
index.htmlfile.
6.2. Analysis report sections
The migration toolkit for applications (MTA) command-line interface (CLI) analysis report includes distinct sections that organize the results by severity, technology, and application details. Use the following table to understand the purpose of each report section so that you can navigate the report and interpret specific migration data.
You can only review the report applicable to the current application.
Insights is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Table 6.1. Analysis report sections
| Section | Description |
|---|---|
|
Dashboard |
An overview of the incidents and total story points, sorted by category. |
|
Issues |
A concise summary of all issues and their details that require attention. |
|
Dependencies |
All Java-packaged dependencies found within the application. |
|
Technologies |
All embedded libraries grouped by functionality. Use this report to display the technologies used in each application. |
|
Insights |
Information about violations generated by rules with zero effort. Issues are generated by general rules, whereas string tags are generated by tagging rules. String tags indicate the presence of a technology but do not show the code location. Insights contain information about the technologies used in the application and their usage in the code. Insights do not impact the migration. For example, an insight might be produced by a rule that searches for deprecated API usage in the code: the usage does not impact the current migration but can be tracked and fixed when needed in the future. Unlike issues, insights do not need to be fixed for a successful migration. They are generated by any rule that does not have a positive effort value and category assigned. They might have a message and tag. |
6.3. Reviewing the analysis issues and incidents
To identify specific code segments that require modification, review the issues and incidents provided in the migration toolkit for applications (MTA) command-line interface analysis report. You can use these details to locate migration blockers within your source code and prioritize remediation tasks based on severity.
Each issue contains a list of files where a rule matched one or more times. These files include all the incidents within the issue. Each incident contains a detailed explanation of the issue and how to fix it.
Procedure
- Open the analysis report. For more information, see Accessing an analysis report.
- Click Issues.
- Click on the issue you want to check.
- Under the File tab, click on a file to display an incident or incidents that triggered the issue.
Display the incident message by hovering over the line that triggered the incident, for example:
Use the Quarkus Maven plugin adding the following sections to the pom.xml file: <properties> <quarkus.platform.group-id>io.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version>3.1.0.Final</quarkus.platform.version> </properties> <build> <plugins> <plugin> <groupId>$</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>$</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> </plugins> </build>
Chapter 7. Performing a transformation with the MTA command-line interface
To accelerate your application modernization, use the migration toolkit for applications (MTA) command-line interface (CLI) to transform your source code. The transformation process applies predefined migration rules to your codebase, reducing the manual effort required to update Java libraries or frameworks.
Performing a transformation requires a configured container runtime.
7.1. Transforming applications source code
To update Java libraries or frameworks, for example, javax or Spring Boot, you can transform Java application source code by using the `transform openrewrite` command. The `openrewrite` subcommand enables you to run OpenRewrite recipes on source code. Transformation applies predefined migration rules to your codebase, reducing the manual effort required to update Java libraries or frameworks.
You can use only a single target when you run the `transform openrewrite` command.
Prerequisites
- You configured the container runtime.
Procedure
- Display the available OpenRewrite recipes:

  $ mta-cli transform openrewrite --list-targets

- Transform the application source code:

  $ mta-cli transform openrewrite --input=<path_to_source_code> --target=<target_from_the_list>
Verification
- Inspect the diff of the target application source code to see the transformation.
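If the application source is under Git, an ordinary diff is a convenient way to inspect the transformation. The following sketch simulates a javax-to-jakarta change in a throwaway repository to show the workflow; the file and the change are illustrative, and the real diff shows whatever the recipe applied:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo 'import javax.inject.Inject;' > Example.java
git add Example.java
git -c user.email=you@example.com -c user.name=you commit -qm "before transform"

# Here you would run, for example:
#   mta-cli transform openrewrite --input="$tmp" --target=<target_from_the_list>
# We simulate the recipe's effect instead:
echo 'import jakarta.inject.Inject;' > Example.java

git diff            # per-line view of what the transformation changed
git diff --stat     # one-line summary per changed file
```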
Additional resources
7.2. Available OpenRewrite recipes
The migration toolkit for applications (MTA) command-line interface (CLI) provides OpenRewrite recipes that define specific code transformation logic. Check the following table to view the available recipes that you can use for transforming application source code.
Table 7.1. Available OpenRewrite recipes
| Migration path | Purpose | The rewrite.config file location | Active recipes |
|---|---|---|---|
| Java EE to Jakarta EE | Replace import of the `javax` packages with the equivalent `jakarta` packages. Replace `javax` artifacts with `jakarta` artifacts in the build dependencies. | | |
| Java EE to Jakarta EE | Rename bootstrapping files. | | |
| Java EE to Jakarta EE | Transform the `persistence.xml` configuration file. | | |
| Spring Boot to Quarkus | Replace | | |
7.3. The openrewrite command options
The migration toolkit for applications (MTA) command-line interface (CLI) provides the mta-cli transform openrewrite command options to customize the transformation of your application source code. Check the following table to identify the available arguments for applying specific migration recipes and controlling the output of the transformation process.
Table 7.2. The mta-cli transform openrewrite command options
| Option | Description |
|---|---|
| `--goal` | A target goal. The default is `dryRun`. |
| `--help` | Display all `openrewrite` command arguments. |
| `--input` | A path to the application source code directory. |
| `--list-targets` | List all available OpenRewrite recipes. |
| `--maven-settings` | A path to a custom Maven settings file. |
| `--target` | A target OpenRewrite recipe. |
| `--log-level` | A log level. |
| `--no-cleanup` | Do not clean up temporary resources. |
Chapter 8. Generating platform assets for application deployment
To enable offline review and simplify the distribution of assessment results, you can generate static assets by using the migration toolkit for applications (MTA) command-line interface. This process produces standalone reports and data files, allowing you to share migration insights with stakeholders without requiring access to the analysis environment.
You can use the discover and generate commands in containerless mode to automatically generate the manifests needed to deploy a Cloud Foundry (CF) application on the OpenShift Container Platform:

- Use the `discover` command to generate the discovery manifest in the YAML format directly from a CF instance or from any of the following inputs:
  - A single application manifest
  - A CF manifest
  - A path to a directory with multiple manifest files, for example, application manifests, CF manifests, or both of these manifest types.

  The discovery manifest preserves the specifications found in the CF manifest. The specifications define the metadata, runtime, and platform configurations.

- Use the `generate` command to generate the deployment manifest for OCP deployments from the discovery manifest. The deployment manifest is generated by using a templating engine, such as Helm, that converts the discovery manifest into a Kubernetes-native format. You can also use this command to generate non-Kubernetes manifests, such as a Dockerfile or a configuration file.
Generating platform assets for application deployment is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
Generating deployment assets has the following benefits:
- Generating the Kubernetes and non-Kubernetes deployment manifests.
- Generating deployment manifests by using familiar template engines, for example, Helm, that are widely used for Kubernetes deployments.
- Adhering to Kubernetes best practices when preparing the deployment manifest by using Helm templates.
8.1. Generating a discovery manifest
To validate your application inventory before performing a full analysis, you can generate a discovery manifest by using the migration toolkit for applications (MTA) command-line interface (CLI). The discovery process creates a list of applications found within a specified path, allowing you to verify the scope of your assessment.
You can generate the discovery manifest for the Cloud Foundry (CF) application by using the discover command. The discovery manifest preserves configurations, such as application properties, resource allocations, environment variables, and service bindings found in the CF manifest.
Prerequisites
- You have Cloud Foundry (v3) as a source platform.
- You installed MTA CLI version 7.3.0 or later.
Procedure
1. Open the terminal application and navigate to the `<MTA_HOME>/` directory.
2. List the supported platforms for the discovery process:

   $ mta-cli discover --list-platforms

3. Generate the discovery manifest:

   $ mta-cli discover cloud-foundry --input <path_to_input> --output-dir <path_to_output_directory>
Additional resources
8.2. Performing a live discovery in a remote CF instance
You can perform a live discovery to determine what is deployed in a certain Cloud Foundry (CF) cluster by using the migration toolkit for applications (MTA) command-line interface (CLI). For example, you can determine how many applications are in the cluster. You can also use the live discovery if you do not have access to manifest YAML files.
While discovering the applications in CF, you can filter and list the applications in CF clusters before you import them in MTA. Enter a list of comma-separated values by using the --orgs and --spaces flags. Use these flags with --list-apps to list the applications.
When you use the --orgs and --spaces flags with the discover command but without the --list-apps flag, MTA discovers the applications but does not list them.
MTA discovers the applications in the following scenarios:
- Organization without spaces: if you use only the `--orgs` flag, MTA discovers all applications deployed in all spaces in the organizations that you enter. You can specify one or more organizations as comma-separated values.
- Spaces common across organizations: you can specify one or more space names available in multiple organizations to filter applications deployed in those spaces.
- A specific application: you can discover a specific application deployed in a specific space in a particular organization.
- Applications across spaces in an organization: you can explore applications that have the same name but are deployed in different spaces by specifying the organization and the application name while leaving out the `--spaces` option.
- Save manifests: use the `--output-dir` flag to specify the location to which MTA saves the manifests of the discovered applications.
- Invalid entries: if you enter an invalid space name or an organization that does not exist, MTA logs that it skipped the space or the organization and continues to discover applications in other valid spaces and organizations.
You can run the live discovery for a remote CF instance by using the `mta-cli discover cloud-foundry --use-live-connection --orgs=<org_name> --spaces=<space_name> --app-name=<application_name>` command.
You must enter at least one Cloud Foundry organization by using the --orgs option to perform live discovery.
Prerequisites
- You have permission to remotely connect to the CF instance.
Procedure
1. Optional: Investigate the contents of the remote CF instance:

   $ cf spaces
   $ cf apps

2. Copy the CF configuration file to the directory of your choice:

   $ mkdir <path_to_the_directory>/.cf

3. Run the live discovery in a remote CF instance:

   $ mta-cli discover cloud-foundry --use-live-connection --orgs=<org_name> --spaces=<space_name> --output-dir <path_to_output_directory> --cf-config=<path_to_CF_config_file>

   The command runs the discovery for all applications from each space in the specified organization.

   If you want to run the discovery for a specific application, enter, for example:

   $ mta-cli discover cloud-foundry --use-live-connection --orgs=<org_name> --spaces=<space_name> --app-name=<application_name> --output-dir <path_to_output_directory> --cf-config=<path_to_CF_config_file>
8.3. Concealing sensitive information in a discovery manifest
To prevent the exposure of confidential data, such as services and docker credentials, configure the migration toolkit for applications (MTA) command-line interface (CLI) to conceal sensitive details in a Cloud Foundry (CF) discovery manifest. This way, you can safely share the results with stakeholders while maintaining security compliance.
You can conceal sensitive information by using the mta-cli discover cloud-foundry --conceal-sensitive-data command. This command generates the following files:
- A discovery manifest
- A file with concealed data
If you do not specify the --conceal-sensitive-data option, the option is automatically set to false.
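Conceptually, concealment replaces each sensitive value with a generated UUID and records the UUID-to-value mapping in a separate secrets file. The following sketch illustrates that idea only; it is not MTA's implementation, and the file paths are hypothetical:

```shell
# Replace a sensitive value with a UUID placeholder and record the mapping.
SECRET_VALUE="docker-registry-user"
UUID=$(cat /proc/sys/kernel/random/uuid)   # Linux kernel UUID source
# Discovery manifest keeps only the placeholder, written as $(<uuid>)
printf 'docker:\n  username: $(%s)\n' "$UUID" > /tmp/discover_manifest_demo.yaml
# Secrets file keeps the mapping of UUID to the concealed value
printf '%s: %s\n' "$UUID" "$SECRET_VALUE" > /tmp/secrets_demo_map.yaml
cat /tmp/discover_manifest_demo.yaml
```

Sharing only the first file exposes no credentials; the second file stays with the operator.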
Procedure
1. Display the contents of the CF manifest and locate sensitive data:

   $ cat <manifest_name>.yaml
   name: <manifest_name>
   disk_quota: 512M
   memory: 500M
   timeout: 10
   docker:
     image: myregistry/myapp:latest
     username: docker-registry-user

2. Generate the discovery manifest for the CF application as an output file and conceal sensitive data:

   $ mta-cli discover cloud-foundry --conceal-sensitive-data=true --input <path_to_application_manifest> --output-dir <path_to_output_directory>
Verification
1. Display the repository structure:

   $ tree <path_to_discovery_manifest>
   <path_to_discovery_manifest>
   ├── discover_manifest_<app-name>.yaml
   └── secrets_<discovery_manifest_name>.yaml
   1 directory, 2 files

2. Display the contents of the discovery manifest:

   $ cat <discovery_manifest_name>.yaml
   name: <discovery_manifest_name>
   timeout: 10
   docker:
     image: myregistry/myapp:latest
     username: $(f0e9ea9e-1913-446f-8483-da9301373eef)
   disk: 512M
   memory: 500M
   instances: 1

   The sensitive data was replaced with a UUID (Universally Unique Identifier).

3. Display the contents of the secrets_<discovery_manifest_name>.yaml file:

   $ cat secrets_<discovery_manifest_name>.yaml
   f0e9ea9e-1913-446f-8483-da9301373eef: docker-registry-user

   The file contains the mapping of the UUID to the concealed sensitive data.
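When you later need the original value, you can look it up by its UUID placeholder. This sketch assumes a secrets file in the format shown above; the file name is hypothetical:

```shell
# Recreate a small secrets file and resolve a UUID placeholder back
# to the concealed value it stands for.
cat > /tmp/secrets_lookup_demo.yaml <<'EOF'
f0e9ea9e-1913-446f-8483-da9301373eef: docker-registry-user
EOF
uuid="f0e9ea9e-1913-446f-8483-da9301373eef"
grep "^${uuid}:" /tmp/secrets_lookup_demo.yaml | cut -d' ' -f2
```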
8.4. Generating a deployment manifest
To prepare your application for deployment to a container platform, generate a deployment manifest by using the migration toolkit for applications (MTA) command-line interface (CLI). Generating the deployment manifest creates the necessary configuration files to define your application’s resources, streamlining the transition to your target environment.
You can auto-generate the Red Hat OpenShift Container Platform deployment manifest for the Cloud Foundry (CF) application by using the generate command. Based on the Helm template that you provide, the command generates manifests, such as a ConfigMap, and non-Kubernetes manifests, such as a Dockerfile, for application deployment.
Prerequisites
- You have Cloud Foundry (v3) as a source platform.
- You have OpenShift Container Platform as a target platform.
- You installed MTA CLI version 7.3.0.
- You generated a discovery manifest.
- You created a Helm template with the required configuration for the OCP deployment.
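A chart directory such as `helm_sample` can be as small as a chart descriptor plus one template. The layout and the template keys (`.Values.name`, `.Values.instances`) below are illustrative assumptions, not names mandated by MTA:

```shell
# Create a hypothetical minimal Helm chart directory layout.
mkdir -p /tmp/helm_sample/templates
cat > /tmp/helm_sample/Chart.yaml <<'EOF'
apiVersion: v2
name: helm-sample
version: 0.1.0
EOF
# A template whose values come from the discovery manifest or --set flags
cat > /tmp/helm_sample/templates/configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.name }}-config
data:
  INSTANCES: "{{ .Values.instances }}"
EOF
ls -R /tmp/helm_sample
```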
Procedure
1. Open the terminal application and navigate to the `<MTA_HOME>/` directory.
2. Generate the deployment manifest as an output file:

   $ mta-cli generate helm --chart-dir helm_sample \
     --input <path_to_discovery_manifest> \
     --output-dir <location_of_deployment_manifest>

3. Verify the ConfigMap:

   $ cd <location_of_deployment_manifest>
   $ cat configmap.yaml

4. Verify the Dockerfile:

   $ cat Dockerfile
Additional resources
8.5. Assets generation example
You can use the migration toolkit for applications (MTA) command-line interface to generate static assets for offline review. Use the following example to validate the command syntax and understand the expected output structure.
The following example includes generating discovery and deployment manifests of a Cloud Foundry (CF) Node.js application. For this example, the following files and directories are used:
- CF Node.js application manifest name: cf-nodejs-app.yaml
- Discovery manifest name: discover.yaml
- Location of the application Helm chart: helm_sample
- Deployment manifests: a ConfigMap and a Dockerfile
- Output location of the deployment manifests: newDir
Assume that the cf-nodejs-app.yaml is located in the same directory as the MTA CLI binary. If the CF application manifest location is different, you can also enter the location path to the manifest as the input.
Prerequisites
- You installed MTA CLI 7.3.0.
- You have a CF application manifest as a YAML file.
- You created a Helm template with the required configurations for the OCP deployment.
Procedure
1. Open the terminal application and navigate to the `<MTA_HOME>/` directory.
2. Verify the content of the CF Node.js application manifest:

   $ cat cf-nodejs-app.yaml
   name: cf-nodejs
   lifecycle: cnb
   buildpacks:
     - docker://my-registry-a.corp/nodejs
     - docker://my-registry-b.corp/dynatrace
   memory: 512M
   instances: 1
   random-route: true

3. Generate the discovery manifest:

   $ mta-cli discover cloud-foundry \
     --input cf-nodejs-app.yaml \
     --output discover.yaml

4. Verify the content of the discovery manifest:

   $ cat discover.yaml
   name: cf-nodejs
   randomRoute: true
   timeout: 60
   buildPacks:
     - docker://my-registry-a.corp/nodejs
     - docker://my-registry-b.corp/dynatrace
   instances: 1

5. Generate the deployment manifest in the newDir directory by using the discover.yaml file:

   $ mta-cli generate helm \
     --chart-dir helm_sample \
     --input discover.yaml \
     --output-dir newDir

6. Check the contents of the Dockerfile in the newDir directory:

   $ cat ./newDir/Dockerfile
   FROM busybox:latest
   RUN echo "Hello cf-nodejs!"

7. Check the contents of the ConfigMap in the newDir directory:

   $ cat ./newDir/configmap.yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cf-nodejs-config
   data:
     RANDOM_ROUTE: "true"
     TIMEOUT: "60"
     BUILD_PACKS: |
       - docker://my-registry-a.corp/nodejs
       - docker://my-registry-b.corp/dynatrace
     INSTANCES: "1"

8. In the ConfigMap, override the name to nodejs-app and INSTANCES to 2:

   $ mta-cli generate helm \
     --chart-dir helm_sample \
     --input discover.yaml \
     --set name="nodejs-app" \
     --set instances=2 \
     --output-dir newDir

9. Check the contents of the ConfigMap again:

   $ cat ./newDir/configmap.yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: nodejs-app
   data:
     RANDOM_ROUTE: "true"
     TIMEOUT: "60"
     BUILD_PACKS: |
       - docker://my-registry-a.corp/nodejs
       - docker://my-registry-b.corp/dynatrace
     INSTANCES: "2"
Additional resources
8.6. The discover and generate command options
The migration toolkit for applications (MTA) command-line interface (CLI) provides options that you can use together with the discover or generate commands to customize the discovery of applications and the generation of static assets. Check the following table to identify the available arguments for defining input paths, output locations, and the scope of your assessment.
Table 8.1. Options for discover and generate commands
| Command | Option | Description |
|---|---|---|
| `discover` | `--app-name` | An application to run the discovery for. |
| | `--help` | Display details for different command arguments. |
| | `--list-apps` | List the available applications on the source platform. |
| | `--list-platforms` | List the supported platforms for the discovery process. |
| | `--log-level` | Set the log level. |
| | `cloud-foundry` | Discover Cloud Foundry applications. |
| | `--conceal-sensitive-data` | Extract sensitive information from a discovery manifest and put it into a separate file. |
| | `--input` | Specify the location of the YAML manifest file to discover the CF applications. |
| | `--output-dir` | Specify the location to save the <discovery-manifest-name>.yaml file. |
| | `--spaces` | A comma-separated list of Cloud Foundry spaces to analyze during a live discovery, for example, --spaces=space1,space2. |
| | `--use-live-connection` | Enable real-time discovery by using live platform connections. |
| | `--orgs` | A comma-separated list of Cloud Foundry organizations to discover application manifests during a live discovery, for example, --orgs=org1,org2. |
| `generate` | `--help` | Display details for different command arguments. |
| | `helm` | Generate a deployment manifest by using the Helm template. |
| | `--chart-dir` | Specify a directory that contains the Helm chart. |
| | `--input` | Specify the location of the <discovery-manifest-name>.yaml file to generate the deployment manifest. |
| | `--non-k8s-only` | Generate only non-Kubernetes templates, such as a Dockerfile. |
| | `--output-dir` | Specify a location to which the deployment manifests are saved. |
| | `--set` | Override values of attributes in the discovery manifest with a key-value pair entered from the CLI. |

For example, the following command lists the applications found in two spaces:

$ mta-cli discover cloud-foundry --use-live-connection --spaces=space,space-2 --cf-config=/home/gloria/ --list-apps
INFO[0000] Cloud Foundry client created successfully
INFO[0000] Analyzing space space_name=space
INFO[0006] Apps discovered count=2
INFO[0006] Analyzing space space_name=space-2
INFO[0007] Apps discovered count=1
Space: space
- nginx
- test-app
Space: space-2
- test-app
Chapter 9. MTA CLI known issues
Review the known issues for the migration toolkit for applications (MTA) command-line interface (CLI) to identify existing problems and apply workarounds.
Limitations with Podman on Microsoft Windows
The CLI is built and distributed with support for Microsoft Windows.
However, when running any container image based on Red Hat Enterprise Linux 9 (RHEL9) or Universal Base Image 9 (UBI9), the following error can be returned when starting the container:
Fatal glibc error: CPU does not support x86-64-v2
This error occurs because Red Hat Enterprise Linux 9 and Universal Base Image 9 container images must run on a CPU architecture that supports x86-64-v2.
For more details, see Running Red Hat Enterprise Linux 9 (RHEL) or Universal Base Image (UBI) 9 container images fail with "Fatal glibc error: CPU does not support x86-64-v2".
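You can check whether a host CPU exposes the instruction-set flags that the x86-64-v2 microarchitecture level requires. The flag list below follows the x86-64 psABI definition of the v2 level; the script is a diagnostic sketch, not part of the MTA CLI:

```shell
# Report whether /proc/cpuinfo advertises the flags required by x86-64-v2.
required="ssse3 sse4_1 sse4_2 popcnt cx16 lahf_lm"
missing=""
for f in $required; do
  grep -qw "$f" /proc/cpuinfo || missing="$missing $f"
done
if [ -z "$missing" ]; then
  msg="CPU supports x86-64-v2"
else
  msg="CPU is missing:$missing"
fi
echo "$msg"
```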
The CLI runs the container runtime correctly. However, different container runtime configurations are not supported.
Although unsupported, you can run the CLI with Docker instead of Podman, which resolves this issue. To do so, set the CONTAINER_TOOL variable to the path to the Docker binary. For example:

CONTAINER_TOOL=/usr/local/bin/docker mta-cli analyze

While this is not supported, it allows you to explore the CLI while you work to upgrade your hardware or move to hardware that supports x86-64-v2.
Appendix A. Reference material
View the migration toolkit for applications (MTA) command-line interface (CLI) reference resources that might help you when using the CLI. The resources include information about technology tags and rule story points.
A.1. Supported technology tags
View the migration toolkit for applications (MTA) technology tags supported by the MTA command-line interface (CLI). You can use these tags to classify applications and identify application information, for example, technologies used within the application.
- 0MQ Client
- 3scale
- Acegi Security
- AcrIS Security
- ActiveMQ library
- Airframe
- Airlift Log Manager
- AKKA JTA
- Akka Testkit
- Amazon SQS Client
- AMQP Client
- Anakia
- AngularFaces
- ANTLR StringTemplate
- AOP Alliance
- Apache Accumulo Client
- Apache Aries
- Apache Commons JCS
- Apache Commons Validator
- Apache Flume
- Apache Geronimo
- Apache Hadoop
- Apache HBase Client
- Apache Ignite
- Apache Karaf
- Apache Mahout
- Apache Meecrowave JTA
- Apache Sirona JTA
- Apache Synapse
- Apache Tapestry
- Apiman
- Applet
- Arquillian
- AspectJ
- Atomikos JTA
- Avalon Logkit
- Axion Driver
- Axis
- Axis2
- BabbageFaces
- Bean Validation
- BeanInject
- Blaze
- Blitz4j
- BootsFaces
- Bouncy Castle
- ButterFaces
- Cache API
- Cactus
- Camel
- Camel Messaging Client
- Camunda
- Cassandra Client
- CDI
- Cfg Engine
- Chunk Templates
- Cloudera
- Coherence
- Common Annotations
- Composite Logging
- Composite Logging JCL
- Concordion
- CSS
- Cucumber
- Dagger
- DbUnit
- Demoiselle JTA
- Derby Driver
- Drools
- DVSL
- Dynacache
- EAR Deployment
- Easy Rules
- EasyMock
- Eclipse RCP
- EclipseLink
- Ehcache
- EJB
- EJB XML
- Elasticsearch
- Entity Bean
- EtlUnit
- Eureka
- Everit JTA
- Evo JTA
- Feign
- File system Logging
- FormLayoutMaker
- FreeMarker
- Geronimo JTA
- GFC Logging
- GIN
- GlassFish JTA
- Google Guice
- Grails
- Grapht DI
- Guava Testing
- GWT
- H2 Driver
- Hamcrest
- Handlebars
- HavaRunner
- Hazelcast
- Hdiv
- Hibernate
- Hibernate Cfg
- Hibernate Mapping
- Hibernate OGM
- HighFaces
- HornetQ Client
- HSQLDB Driver
- HTTP Client
- HttpUnit
- ICEfaces
- Ickenham
- Ignite JTA
- Ikasan
- iLog
- Infinispan
- Injekt for Kotlin
- Iroh
- Istio
- Jamon
- Jasypt
- Java EE Batch
- Java EE Batch API
- Java EE JACC
- Java EE JAXB
- Java EE JAXR
- Java EE JSON-P
- Java Transaction API
- JavaFX
- JavaScript
- Javax Inject
- JAX-RS
- JAX-WS
- JayWire
- JBehave
- JBoss Cache
- JBoss EJB XML
- JBoss logging
- JBoss Transactions
- JBoss Web XML
- JBossMQ Client
- JBPM
- JCA
- Jcabi Log
- JCache
- JCunit
- JDBC
- JDBC datasources
- JDBC XA datasources
- Jersey
- Jetbrick Template
- Jetty
- JFreeChart
- JFunk
- JGoodies
- JMock
- JMockit
- JMS Connection Factory
- JMS Queue
- JMS Topic
- JMustache
- JNA
- JNI
- JNLP
- JPA entities
- JPA Matchers
- JPA named queries
- JPA XML
- JSecurity
- JSF
- JSF Page
- JSilver
- JSON-B
- JSP Page
- JSTL
- JTA
- Jukito
- JUnit
- Ka DI
- Keyczar
- Kibana
- KLogger
- Kodein
- Kotlin Logging
- KouInject
- KumuluzEE JTA
- LevelDB Client
- Liferay
- LiferayFaces
- Lift JTA
- Log.io
- Log4J
- Log4s
- Logback
- Logging Utils
- Logstash
- Lumberjack
- Macros
- Magicgrouplayout
- Management EJB
- MapR
- MckoiSQLDB Driver
- Memcached
- Message (MDB)
- Micro DI
- Micrometer
- Microsoft SQL Driver
- MiGLayout
- MinLog
- Mixer
- Mockito
- MongoDB Client
- Monolog
- Morphia
- MRules
- Mule
- Mule Functional Test Framework
- MultithreadedTC
- Mycontainer JTA
- MyFaces
- MySQL Driver
- Narayana Arjuna
- Needle
- Neo4j
- NLOG4J
- Nuxeo JTA/JCA
- OACC
- OAUTH
- OCPsoft Logging Utils
- OmniFaces
- OpenFaces
- OpenPojo
- OpenSAML
- OpenWS
- OPS4J Pax Logging Service
- Oracle ADF
- Oracle DB Driver
- Oracle Forms
- Orion EJB XML
- Orion Web XML
- Oscache
- OTR4J
- OW2 JTA
- OW2 Log Util
- OWASP CSRF Guard
- OWASP ESAPI
- Peaberry
- Pega
- Persistence units
- Petals EIP
- PicketBox
- PicketLink
- PicoContainer
- Play
- Play Test
- Plexus Container
- Polyforms DI
- Portlet
- PostgreSQL Driver
- PowerMock
- PrimeFaces
- Properties
- Qpid Client
- RabbitMQ Client
- RandomizedTesting Runner
- Resource Adapter
- REST Assured
- Restito
- RichFaces
- RMI
- RocketMQ Client
- Rythm Template Engine
- SAML
- Santuario
- Scalate
- Scaldi
- Scribe
- Seam
- Security Realm
- ServiceMix
- Servlet
- ShiftOne
- Shiro
- Silk DI
- SLF4J
- Snippetory Template Engine
- SNMP4J
- Socket handler logging
- Spark
- Specsy
- Spock
- Spring
- Spring Batch
- Spring Boot
- Spring Boot Actuator
- Spring Boot Cache
- Spring Boot Flo
- Spring Cloud Config
- Spring Cloud Function
- Spring Data
- Spring Data JPA
- spring DI
- Spring Integration
- Spring JMX
- Spring Messaging Client
- Spring MVC
- Spring Properties
- Spring Scheduled
- Spring Security
- Spring Shell
- Spring Test
- Spring Transactions
- Spring Web
- SQLite Driver
- SSL
- Standard Widget Toolkit (SWT)
- Stateful (SFSB)
- Stateless (SLSB)
- Sticky Configured
- Stripes
- Struts
- SubCut
- Swagger
- SwarmCache
- Swing
- SwitchYard
- Syringe
- Talend ESB
- Teiid
- TensorFlow
- Test Interface
- TestNG
- Thymeleaf
- TieFaces
- tinylog
- Tomcat
- Tornado Inject
- Trimou
- Trunk JGuard
- Twirl
- Twitter Util Logging
- UberFire
- Unirest
- Unitils
- Vaadin
- Velocity
- Vlad
- Water Template Engine
- Web Services Metadata
- Web Session
- Web XML File
- WebLogic Web XML
- Webmacro
- WebSocket
- WebSphere EJB
- WebSphere EJB Ext
- WebSphere Web XML
- WebSphere WS Binding
- WebSphere WS Extension
- Weka
- Weld
- WF Core JTA
- Wicket
- Winter
- WSDL
- WSO2
- WSS4J
- XACML
- XFire
- XMLUnit
- Zbus Client
- Zipkin
A.2. Rule story points
View the migration toolkit for applications (MTA) rule story points to estimate the level of effort required for specific migration tasks. Understanding these values helps you prioritize application migration based on complexity and resource requirements.
A.2.1. Guidelines for the level of effort estimation
View the general guidelines the migration toolkit for applications (MTA) uses for effort level estimations to determine the complexity of issues identified during the analysis. Understanding the rule story points helps you predict the time and resources required to migrate your applications effectively.
Table A.1. Guidelines for the level of effort estimation
| Level of Effort | Story Points | Description |
|---|---|---|
| Information | 0 | An informational warning with very low or no priority for migration. |
| Trivial | 1 | The migration is a trivial change or a simple library swap with no or minimal API changes. |
| Complex | 3 | The changes required for the migration task are complex, but have a documented solution. |
| Redesign | 5 | The migration task requires a redesign or a complete library change, with significant API changes. |
| Rearchitecture | 7 | The migration requires a complete rearchitecture of the component or subsystem. |
| Unknown | 13 | The migration solution is not known and may need a complete rewrite. |
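Story points from the table above can be summed across the issues an analysis reports to get a rough total-effort figure. The issue counts below are hypothetical:

```shell
# Estimate total effort for 12 trivial (1 pt), 4 complex (3 pt),
# and 1 redesign (5 pt) issue.
trivial=12; complex=4; redesign=1
total=$(( trivial * 1 + complex * 3 + redesign * 5 ))
echo "Total story points: $total"
```

This yields 12 + 12 + 5 = 29 story points, a relative measure for comparing applications rather than a direct time estimate.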
A.2.2. Migration tasks categories
View the migration toolkit for applications (MTA) migration task categories that you can use to indicate the severity of a migration task. Understanding these categories helps you prioritize migration tasks based on their criticality and estimated effort.
Migration toolkit for applications uses the following categories to group issues to help prioritize the migration effort:
- Mandatory
- The task must be completed for a successful migration. If the changes are not made, the resulting application will not build or run successfully. Examples include replacement of proprietary APIs that are not supported in the target platform.
- Optional
- If the migration task is not completed, the application should work, but the results might not be optimal. If the change is not made at the time of migration, it is recommended to put it on the schedule soon after your migration is completed.
- Potential
- The task should be examined during the migration process, but there is not enough detailed information to determine if the task is mandatory for the migration to succeed. An example of this would be migrating a third-party proprietary type where there is no directly compatible type.
- Information
- The task is included to inform you of the existence of certain files. These might need to be examined or modified as part of the modernization effort, but changes are typically not required.
A.3. Additional resources
Appendix B. How to contribute to the MTA project
You can help the migration toolkit for applications (MTA) to cover most application builds and server configurations, including yours.
You can help with any of the following items:
- Send an email to jboss-migration-feedback@redhat.com and let us know what MTA migration rules must cover.
- Provide example applications to test migration rules.
Identify application components and problem areas that might be difficult to migrate:
- Write a short description of the problem migration areas.
- Write a brief overview describing how to solve the problem in migration areas.
- Try the migration toolkit for applications on your application. MTA uses Jira as its issue tracking system; if you encounter an issue when using MTA, submit a Jira issue.
Contribute to the migration toolkit for applications rules repository:
- Write a migration toolkit for applications rule to identify or automate a migration process.
Create a test for the new rule.
For more information, see Rule Development Guide.
Contribute to the project source code:
- Create a core rule.
- Improve MTA performance or efficiency.
Any level of involvement is greatly appreciated!
Additional resources