User Guide
Service Parameters
To run the service, specify the source object storage and identify the input data set.
REQUIRED: "source"
Identify the transform source object storage, where the input resides. The source object storage details appear in the Model9 agent configuration file.
Required Keywords for "source"
Optional Keywords for "source"
Keyword | Description | Required | Default |
---|---|---|---|
| | The object storage / proxy URL | YES | - |
| | The api-protocol used by this object storage / proxy | YES | - |
| | The bucket defined within the object storage / proxy | YES | - |
| | The userid provided by the object storage / proxy | YES | - |
| | The password provided by the object storage / proxy | YES | - |
| | Whether to use the V4 protocol of S3. Required for certain object storage providers, such as HCP Cloud Scale and Cohesity. Relevant for api "S3" only. | NO | false |
OPTIONAL: "target"
Identify the transform target object storage. Values not specified will be taken from the "source" parameter.
Keyword | Description | Required | Default |
---|---|---|---|
| | The object storage / proxy URL | NO | Taken from "source" |
| | The api-protocol used by this object storage / proxy | NO | Taken from "source" |
| | The bucket defined within the object storage / proxy | NO | Taken from "source" |
| | The userid provided by the object storage / proxy | NO | Taken from "source" |
| | The password provided by the object storage / proxy | NO | Taken from "source" |
| | Whether to use the V4 protocol of S3. Required for certain object storage providers, such as HCP Cloud Scale and Cohesity. Relevant for api "S3" only. | NO | false |
REQUIRED: "input"
If you specify VSAM keywords for a sequential input data set, the transform will be performed and a warning message will be issued.
Required Keywords for "input"
Optional Keywords for "input"
Keyword | Description | Default |
---|---|---|
| | Name of the original data set | Mainframe data set legal name, case insensitive |
| | The Model9 resource complex name as defined in the agent configuration file | String representing the complex |
| | The type of the input data set, according to the Model9 Cloud Data Manager policy that created it | "backup" (case insensitive) |
| | When the type is "backup", "entry" represents the generation. The default is "0", meaning the latest backup copy. Entry "1" would be the backup copy taken prior to the latest copy, and so on. | "0" |
| | The environment prefix as defined in the agent configuration file | "model9" |
| | Whether the record input is binary. Applies to all "record" input (PS, PDS, VSAM data) | "false" (case insensitive) |
| | If the record input is not binary, the character set of the input. Applies to all "record" input (PS, PDS, VSAM data) | "IBM-1047" |
| | When the input is a VSAM data set, whether the VSAM key is binary. The output is in base64 format | "false" (case insensitive) |
| | When the input is a VSAM data set and the key is not binary, the character set of the VSAM key | "IBM-1047" |
OPTIONAL: "output"
The output is the transformed data of the MF data set, accessible as an S3 object.
When transforming a file with the same name as an existing file in the target, the existing file will be replaced by the newly transformed file.
Note that the service does not delete previously transformed files but rather overwrites files with the same name. When re-transforming a file using the "split" function, make sure to remove any previously transformed files, to avoid having split files of different versions.
When splitting a file, wait for the successful completion of the transform function before continuing with the processing, to ensure all the parts of the file were created.
Specifying "text" format for a "binary" input will cause the transform to fail.
Keyword | Description | Default |
---|---|---|
| | Prefix to be added to the object name: "Prefix"/"object name" | "transform" |
| | Whether the output should be compressed: "gzip" / "none" | "gzip" (case sensitive) |
| | The format of the output file: "JSON" / "Text" / "CSV" / "RAW" | "JSON" |
| | If the key input is not binary, the character set of the output. Currently only "UTF8" is supported | "UTF8" |
| | Whether to add a newline at the end of the file, before end of file. This is required by some applications. | false |
| | Whether to split the output into several files of the requested size, for example, "3000b", "1000m", "1g". The output files will be numbered "<file-name>.1", "<file-name>.2", "<file-name>.3", and so on. | "0" (no split by size) |
| | Whether to include the Record Descriptor Word (RDW) for data sets with record format V or VB. Note: Only supported for RAW output format. | false |
| | Whether to split the output into several files by number of output records. The output files will be numbered "<file-name>.1", "<file-name>.2", "<file-name>.3", and so on. | "0" (no split by records) |
Service parameters samples
Transforming a plain text data set
Transform the latest backup of a plain text data set, charset IBM-1047, converted to UTF8 and compressed.
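A minimal request for this scenario might look as follows. Only the section names "source", "input", and "output" are confirmed above; the keyword names inside each section (`url`, `api`, `bucket`, `user`, `password`, `name`, `format`) and all values are illustrative assumptions:

```json
{
  "source": {
    "url": "https://s3.example.com",
    "api": "S3",
    "bucket": "model9-data",
    "user": "<userid>",
    "password": "<password>"
  },
  "input": {
    "name": "PROD.REPORTS.TEXT"
  },
  "output": {
    "format": "Text"
  }
}
```

Because the latest backup (entry "0"), input charset "IBM-1047", output charset "UTF8", and "gzip" compression are all defaults per the tables above, they can be omitted from the request.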
Transforming an unloaded DB2 table
Transform the latest backup of an unloaded DB2 table, charset IBM-1047, converted to UTF8 and compressed, located with a specific prefix:
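A sketch of such a request is shown below. The section names "source", "input", and "output" are documented above, but the keyword names inside them (`url`, `api`, `bucket`, `user`, `password`, `name`, `prefix`) and all values are illustrative assumptions; the prefix keyword corresponds to the "prefix to be added to the object name" option in the "output" table:

```json
{
  "source": {
    "url": "https://s3.example.com",
    "api": "S3",
    "bucket": "model9-data",
    "user": "<userid>",
    "password": "<password>"
  },
  "input": {
    "name": "DB2.UNLOAD.SALES"
  },
  "output": {
    "prefix": "db2-unload"
  }
}
```

As in the previous sample, the IBM-1047 input charset, UTF8 output charset, and gzip compression are the documented defaults and need not be specified.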
Transforming a VSAM file using the defaults
When transforming a VSAM file, the defaults are a text key and binary data, transforming to a JSON output file:
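With every option left at its default, such a request could be as small as the hypothetical sketch below; the keyword names (`url`, `api`, `bucket`, `user`, `password`, `name`) are assumptions, since only the section names "source" and "input" are confirmed above:

```json
{
  "source": {
    "url": "https://s3.example.com",
    "api": "S3",
    "bucket": "model9-data",
    "user": "<userid>",
    "password": "<password>"
  },
  "input": {
    "name": "PROD.CUSTOMER.KSDS"
  }
}
```

The "output" section is omitted entirely, so the documented defaults apply, including the "JSON" output format.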
Transforming a VSAM text file to CSV
Specify a text data, transforming to a CSV output file:
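A hypothetical sketch of this request follows. The guide documents a "whether the record input is binary" input keyword without preserving its name, so the `binary` keyword below, like the other keyword names inside the sections, is an assumption for illustration:

```json
{
  "source": {
    "url": "https://s3.example.com",
    "api": "S3",
    "bucket": "model9-data",
    "user": "<userid>",
    "password": "<password>"
  },
  "input": {
    "name": "PROD.CUSTOMER.KSDS",
    "binary": "false"
  },
  "output": {
    "format": "CSV"
  }
}
```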
Transforming on Azure Storage using OAuth2
When transforming data on Azure Blob storage with OAuth2, set the "api" to "azureblob-oauth2" and use the "azureOauth" section to specify the Azure OAuth arguments as follows:
Table: Azure OAuth2 Arguments

Field Name | Description | Required | Default Value |
---|---|---|---|
| | The OAuth2 endpoint from which an OAuth2 token will be requested. This value will usually take the form of: | | N/A |
| | The name of the Azure storage account which contains the Azureblob container. | | N/A |
| | OAuth2 audience | | |
| | OAuth2 credential type | | |
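A request for this scenario might be sketched as below. Only the "api" value "azureblob-oauth2" and the "azureOauth" section name are confirmed above; the field names inside "azureOauth" (`endpoint`, `storageAccount`, `audience`, `credentialType`) are assumptions derived from the argument descriptions in the table, and the remaining keyword names are likewise illustrative:

```json
{
  "source": {
    "url": "https://<storage-account>.blob.core.windows.net",
    "api": "azureblob-oauth2",
    "bucket": "<container>",
    "azureOauth": {
      "endpoint": "<oauth2-token-endpoint>",
      "storageAccount": "<storage-account>",
      "audience": "<oauth2-audience>",
      "credentialType": "<credential-type>"
    }
  },
  "input": {
    "name": "PROD.REPORTS.TEXT"
  }
}
```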
Service response and log
The transform service is invoked as an HTTP request. It returns:
HTTP status
Code | Description |
---|---|
200 | OK |
400 | Bad user input or unsupported data set |
500 | Unexpected error |
HTTP response
Output keyword | Description |
---|---|
| | |
| | |
| | The object name as it appears in the target object storage |
| | The input data set name |
| | The compression type as selected in the input parameters / default |
| | The number of bytes in the output object |
| | The format as selected in the input parameters / default |
In case of a WARNING or an ERROR, the HTTP response will also contain log messages.
Informational messages are printed only to the service log, not to the HTTP response. The service log can be viewed in the AWS console when executing the service from AWS, or in the Docker log when executing the service on-premises.
Log
Service response and log samples
Status OK sample
Status WARNING sample
Status ERROR sample
Input format support
Supported formats
SMS-managed data sets
Non-SMS managed data sets
Sequential and extended-sequential data sets with the following RECFM:
V
VB
F
FB
FBA
Non-extended VSAM KSDS data sets
Unsupported formats
RRDS, VRRDS, LINEAR, ESDS
Extended format data sets with compression or encryption
PDS data sets
RECFM not mentioned above (U)
Output format support
Text
JSON
CSV
DB2 Image Copy Transform Guide
Configuration
1. Make sure that `<M9_HOME>/scripts/transform-service.sh` has execute permissions. If not, add them using `chmod a+x <M9_HOME>/scripts/transform-service.sh`.
2. Copy `M9XFDB2` from Model9's SAMPLIB data set to a PDS data set of your choosing.
3. Edit `M9XFDB2` and replace the placeholders enclosed in angle brackets with the following:
Table: Placeholders
Placeholder Name | Replace with ... |
---|---|
| | Your Model9 |
| | Your Model9 installation directory. |
| | Your DB2's SDSNLOAD data set. |
| | The name of the table to be transformed. |
| | The schema of the table (can be seen under column |
| | The name of the DB2 subsystem. |
| | The endpoint URL of the installed transform service. |
Replace the remaining placeholders in the JCL as described in this manual.
Execute and verify results
When done, submit the job and make sure it ends with MAXCC of 0.
Via SDSF, verify that the transform service was in fact called and completed successfully. Successful output would look something like this:
Supported DB2 Column Types
Table 5. Supported DB2 Column Types for Transformation
DB2 SQL Code | Name |
---|---|
392-3 | TIMESTAMP |
2448-9 | TIMESTAMP WITH TIME ZONE |
384-5 | DATE |
388-9 | TIME |
452-3 | CHAR |
448-9, 456-7 | VARCHAR |
480-1 | REAL/FLOAT/DOUBLE |
484-5 | DECIMAL/DEC/NUMERIC |
492-3 | BIGINT |
496-7 | INTEGER/INT |
500-1 | SMALLINT |