User Guide

Service Parameters

To run the service, specify the source object storage and identify the input data set.

REQUIRED: "source"

Identify the transform source object storage, where the input resides. The source object storage details appear in the Model9 agent configuration file.

Required Keywords for "source"

{
  "source": {
    "url": "<URL>",
    "api": "<API>",
    "bucket": "<USER_BUCKET>",
    "user": "<USERID>",
    "password": "<PASSWORD>"
  }
}

Optional Keywords for "source"

{
  "source": {
    "useS3V4Signatures": "false|true"
  }
}
url
The object storage / proxy URL.
Required: Yes. Default: none.

api
The API protocol used by this object storage / proxy.
Required: Yes. Default: none.

bucket
The bucket defined within the object storage / proxy.
Required: Yes. Default: none.

user
The user ID provided by the object storage / proxy.
Required: Yes. Default: none.

password
The password provided by the object storage / proxy.
Required: Yes. Default: none.

useS3V4Signatures
Whether to use the V4 signature protocol of S3. Required for certain object storage providers, such as HCP Cloud Scale and Cohesity. Relevant for api "S3" only.
Required: No. Default: "false".
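
For example, a minimal sketch of a "source" definition for an S3-compatible object storage that requires V4 signatures. The URL and bucket shown are illustrative placeholders; take the actual "api", "user" and "password" values from the Model9 agent configuration file:

{
  "source": {
    "url": "https://objectstore.example.com",
    "api": "<API>",
    "bucket": "transform-input",
    "user": "<USERID>",
    "password": "<PASSWORD>",
    "useS3V4Signatures": "true"
  }
}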

OPTIONAL: "target"

Identify the transform target object storage. Values not specified will be taken from the "source" parameter.

{
  "target": {
    "url": "<URL>",
    "api": "<API>",
    "bucket": "<USER-BUCKET>",
    "user": "<USERID>",
    "password": "<PASSWORD>",
    "useS3V4Signatures": "false|true"
  }
}
url
The object storage / proxy URL.
Required: No. Default: taken from "source".

api
The API protocol used by this object storage / proxy.
Required: No. Default: taken from "source".

bucket
The bucket defined within the object storage / proxy.
Required: No. Default: taken from "source".

user
The user ID provided by the object storage / proxy.
Required: No. Default: taken from "source".

password
The password provided by the object storage / proxy.
Required: No. Default: taken from "source".

useS3V4Signatures
Whether to use the V4 signature protocol of S3. Required for certain object storage providers, such as HCP Cloud Scale and Cohesity. Relevant for api "S3" only.
Required: No. Default: "false".
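
For example, a sketch of a "target" definition that writes the transformed output to a different bucket on the same object storage, inheriting all other connection details from "source" (the bucket name is a placeholder):

{
  "target": {
    "bucket": "transform-output"
  }
}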

REQUIRED: "input"

Identify the input data set to be transformed. If you specify VSAM keywords for a sequential input data set, the transform is performed and a warning message is issued.

Required Keywords for "input"

{
  "input": {
    "name": "<DSN>",
    "complex": "<group-SYSPLEX>"
  }
}

Optional Keywords for "input"

{
  "input": {
    "type": "backup|archive|import",
    "entry": "0|<N>",
    "prefix": "model9|<USER-PREFIX>",
    "recordBinary": "false|true",
    "recordCharset": "<CHARSET>",
    "vsam": {
      "keyBinary": "false|true",
      "keyCharset": "<CHARSET>"
    }
  }
}
name
Name of the original data set.
Value: a legal mainframe data set name, case insensitive.

complex
The Model9 resource complex name as defined in the agent configuration file.
Value: a string representing the complex.

type
The type of the input data set, according to the Model9 Cloud Data Manager policy that created it:

  • "backup" - A backup copy (default)

  • "archive" - An archived data set

  • "import" - A data set imported from tape

Default: "backup" (case insensitive).

entry
When the type is "backup", "entry" represents the generation. The default is "0", meaning the latest backup copy. Entry "1" is the backup copy taken prior to the latest copy, and so on.
Default: "0".

prefix
The environment prefix as defined in the agent configuration file.
Default: "model9".

recordBinary
Whether the record input is binary. Applies to all "record" input (PS, PDS, VSAM data).
Default: "false" (case insensitive).

recordCharset
If the record input is not binary, the character set of the input. Applies to all "record" input (PS, PDS, VSAM data).
Default: "IBM-1047".

keyBinary
If the input is a VSAM data set, whether the VSAM key is binary. The output is in base64 format.
Default: "false" (case insensitive).

keyCharset
If the input is a VSAM data set and the key is not binary, the character set of the VSAM key.
Default: "IBM-1047".

OPTIONAL: "output"

The output is the transformed data of the mainframe data set, accessible as an S3 object.

  • When transforming a file with the same name as an existing file in the target, the existing file will be replaced by the newly transformed file.

    Note that the service does not delete previously transformed files but rather overwrites files with the same name. When re-transforming a file using the "split" function, remove any previously transformed files first to avoid having split files of different versions.

  • When splitting a file, wait for the successful completion of the transform function before continuing with the processing, to ensure that all parts of the file were created.

  • Specifying "text" format for a "binary" input will cause the transform to fail.

{
  "output": {
    "prefix": "transform|<USER-PREFIX>",
    "compression": "none|gzip",
    "format": "JSON|text|CSV|RAW",
    "charset": "UTF8",
    "endWithNewLine": "false|true",
    "includeRdw": "false|true",
    "splitBySize": "<nnnnb/m/g>",
    "splitByRecords": "<n>"
  }
}
prefix
Prefix to be added to the object name: "prefix"/"object name".
Default: "transform".

compression
Whether the output should be compressed: "gzip" | "none".
Default: "gzip" (case sensitive).

format
The format of the output file: "JSON" | "text" | "CSV" | "RAW".
Default: "JSON".

charset
If the input is not binary, the character set of the output. Currently only "UTF8" is supported.
Default: "UTF8".

endWithNewLine
Whether a newline is added at the end of the file, before the end of file. This is required by some applications.
Default: "false".

splitBySize
Whether to split the output into several files of the requested size, for example "3000b", "1000m", "1g". The output files are numbered "<file-name>.1", "<file-name>.2", "<file-name>.3" and so on.

  • The keyword is mutually exclusive with splitByRecords

  • The minimum value for this parameter is 1024 bytes; it is not possible to specify a smaller size

  • When specifying a number without a unit, the service uses bytes, for example "splitBySize":"1024" splits the data set into files of 1024 bytes

  • The function will not split a record in the middle

  • The last part can be smaller than the specified size

  • Specifying the value "0" indicates that no split by size will be performed

Default: "0" (no split by size is performed).

includeRdw
Whether to include the Record Descriptor Word (RDW) for data sets with record format V or VB. Note: only supported for format "RAW" output.
Default: "false".

splitByRecords
Whether to split the output into several files according to the number of output records. The output files are numbered "<file-name>.1", "<file-name>.2", "<file-name>.3" and so on.

  • The keyword is mutually exclusive with splitBySize

  • The function will not split a record in the middle

  • The last part can include fewer records than specified

  • Specifying the value "0" indicates that no split by records will be performed

Default: "0" (no split by records is performed).
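
For example, a sketch of an "output" definition that produces uncompressed text, split into parts of roughly 100 MB each (the values are illustrative):

{
  "output": {
    "format": "text",
    "compression": "none",
    "splitBySize": "100m"
  }
}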

Service parameters samples

Transforming a plain text data set

Transform the latest backup of a plain text data set, charset IBM-1047, converted to UTF8 and compressed.

{
  "input": {
    "name": "SAMPLE.TEXT",
    "complex": "group-PLEX1"
  },
  "output": {
    "format": "text"
  },
  "source": {
    "url": "https://s3.amazonaws.com",
    "api": "aws-s3",
    "bucket": "prod-bucket",
    "user": "sdsdDVDCsxadA43TERVGFBSDSSDff",
    "password": "ddferdscsdW4REFEBA33DSffss344gbs4efe7"
  }
}

Transforming an unloaded DB2 table

Transform the latest backup of an unloaded DB2 table, charset IBM-1047, converted to UTF8, compressed, and written with a specific output prefix:

{
  "input": {
    "name": "DB2.UNLOADED.SEQ",
    "complex": "group-PLEX1"
  },
  "output": {
    "format": "text",
    "prefix": "DBprodCustomers"
  },
  "source": {
    "url": "https://s3.amazonaws.com",
    "api": "aws-s3",
    "bucket": "prod-bucket",
    "user": "sdsdDVDCsxadA43TERVGFBSDSSDff",
    "password": "ddferdscsdW4REFEBA33DSffss344gbs4efe7"
  }
}

Transforming a VSAM file using the defaults

When transforming a VSAM file, the defaults are a text key and binary data, transforming to a JSON output file:

{
  "input": {
    "name": "SAMPLE.VSAM",
    "complex": "group-PLEX1"
  },
  "source": {
    "url": "https://s3.amazonaws.com",
    "api": "aws-s3",
    "bucket": "prod-bucket",
    "user": "sdsdDVDCsxadA43TERVGFBSDSSDff",
    "password": "ddferdscsdW4REFEBA33DSffss344gbs4efe7"
  }
}

Transforming a VSAM text file to CSV

Specify text data and a non-binary key, transforming to a CSV output file:

{
  "input": {
    "name": "SAMPLE.VSAM",
    "complex": "group-PLEX1",
    "recordBinary": "false",
    "vsam": {
      "keyBinary": "false",
      "keyCharset": "IBM-1047"
    }
  },
  "output": {
    "format": "CSV"
  },
  "source": {
    "url": "https://s3.amazonaws.com",
    "api": "aws-s3",
    "bucket": "prod-bucket",
    "user": "sdsdDVDCsxadA43TERVGFBSDSSDff",
    "password": "ddferdscsdW4REFEBA33DSffss344gbs4efe7"
  }
}
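
Transforming to a different target bucket

As an illustrative sketch (not one of the original samples), the "target" section can redirect the transformed output to another bucket, while any unspecified target values are inherited from "source". The target bucket name is a placeholder:

{
  "input": {
    "name": "SAMPLE.TEXT",
    "complex": "group-PLEX1"
  },
  "output": {
    "format": "text"
  },
  "target": {
    "bucket": "analytics-bucket"
  },
  "source": {
    "url": "https://s3.amazonaws.com",
    "api": "aws-s3",
    "bucket": "prod-bucket",
    "user": "sdsdDVDCsxadA43TERVGFBSDSSDff",
    "password": "ddferdscsdW4REFEBA33DSffss344gbs4efe7"
  }
}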

Transforming on Azure Storage using OAuth2

When transforming data on Azure Blob storage with OAuth2, set "api" to "azureblob-oauth2" and use the "azureOauth" section to specify the Azure OAuth arguments as follows:

{
  "input": {
    "name": "SAMPLE.PS",
    "complex": "group-PLEX1"
  },
  "vsam": {
    "keyBinary": "false|true",
    "keyCharset": "<CHARSET>"
  },
  "output": {
    "format": "CSV"
  },
  "source": {
    "api": "azureblob-oauth2",
    "url": "https://<azure-storage-account>.blob.core.windows.net",
    "bucket": "<azure-container-name>",
    "user": "<azure-application-uuid>",
    "password": "<azure-application-client-secret>",
    "azureOauth": {
      "oauthEndpoint": "<azure-oauth-endpoint>",
      "storageAccount": "<azure-storage-account>",
      "oauthAudience": "<azure-oauth-audience>",
      "credentialType": "<azure-credential-type>"
    }
  }
}

Table: Azure OAuth2 Arguments

oauthEndpoint
The OAuth2 endpoint from which an OAuth2 token will be requested. This value usually takes the form https://login.microsoftonline.com/<tenant-id>/oauth2/token.
Required: Yes. Default: N/A.

storageAccount
The name of the Azure storage account that contains the Azure Blob container.
Required: Yes. Default: N/A.

oauthAudience
The OAuth2 audience.
Required: No. Default: https://storage.azure.com.

credentialType
The OAuth2 credential type.
Required: No. Default: clientCredentialsSecret.

Service response and log

The transform service is invoked as an HTTP request. It returns:

HTTP status

200: OK
400: Bad user input or unsupported data set
500: Unexpected error

HTTP response

{
  "status": "OK|WARNING|ERROR",
  "outputName": "<OUTPUT-NAME>",
  "inputName": "<DSN>",
  "outputCompression": "none|gzip",
  "outputSizeInBytes": "<SIZE-IN_BYTES>",
  "outputFormat": "JSON|text|CSV"
}
status

  • OK - all is well, no log records

  • WARNING - a minor problem, e.g. specifying parameters that do not fit the input data set. The log is returned.

  • ERROR - a major problem, e.g. unable to read the input data or a problem in communication. The log is returned.

outputName
The object name as it appears in the target object storage.

inputName
The input data set name.

outputCompression
The compression type as selected in the input parameters, or the default.

outputSizeInBytes
The number of bytes in the output object.

outputFormat
The format as selected in the input parameters, or the default.

In case of a WARNING or an ERROR, the HTTP response also contains log messages.

Informational messages are printed only to the service log and not to the HTTP response. The service log can be viewed on the AWS console when executing the service from AWS, or in the Docker log when executing the service on premises.

Log

{
  "log": [
    "<INFO-MESSAGE>",
    "<WARNING-MESSAGE>",
    "<ERROR-MESSAGE>"
  ]
} 

Service response and log samples

Status OK sample

{
  "status": "OK",
  "outputName": "transform/QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS!uuid=a641d670-2d05-41e7-9dd3-7815e1b2d4c4",
  "inputName": "QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS",
  "outputCompression": "NONE",
  "outputSizeInBytes": 97,
  "outputFormat": "JSON"
}

Status WARNING sample

{
  "log": [
    "ZM9K001I Transform service started",
    "ZM9K108W Specifying input parameter vsam is ignored for input data set with DSORG PS",
    "ZM9K002I Transform service completed successfully, output is transform/QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS!uuid=d779fbf9-da6b-495b-b6b9-de7583905f19"
  ],
  "status": "WARNING",
  "outputName": "transform/QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS!uuid=d779fbf9-da6b-495b-b6b9-de7583905f19",
  "inputName": "QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS",
  "outputCompression": "NONE",
  "outputSizeInBytes": 97,
  "outputFormat": "JSON"
}

Status ERROR sample

{
  "status": "ERROR",
  "log": [
    "ZM9K001I Transform service started",
    "ZM9K008E The input was not found: name QA.SMS.MCBK.DSERV.TXT.NON, archive false, entry (0)"
  ]
}

Input format support

Supported formats

  • SMS-managed data sets

  • Non-SMS managed data sets

  • Sequential and extended-sequential data sets with the following RECFM:

    • V

    • VB

    • F

    • FB

    • FBA

  • Non-extended VSAM KSDS data sets

Unsupported formats

  • RRDS, VRRDS, LINEAR, ESDS

  • Extended format data sets with compression or encryption

  • PDS data sets

  • RECFM not mentioned above (U)

Output format support

  • Text

  • JSON

  • CSV

  • RAW

DB2 Image Copy Transform Guide

Configuration

  1. Make sure that <M9_HOME>/scripts/transform-service.sh has execute permissions. If not, add it by using chmod a+x <M9_HOME>/scripts/transform-service.sh.

  2. Copy M9XFDB2 from Model9's SAMPLIB data set to a PDS data set of your choosing.

  3. Edit M9XFDB2 and replace the placeholders enclosed with angle brackets with the following:

Table: Placeholders

<M9_SAMP>: Your Model9 SAMPLIB data set.

<M9_HOME>: Your Model9 installation directory.

<DB2_SDSNLOAD>: Your DB2 SDSNLOAD data set.

<TABLE_NAME>: The name of the table to be transformed.

<SCHEMA_NAME>: The schema of the table (can be seen under column CREATOR in SYSIBM.SYSTABLES).

<DB2_SUBSYS>: The name of the DB2 subsystem.

<XFORM_SVC_URL>: The endpoint URL of the installed transform service.

  4. Replace the remaining placeholders in the JCL as described in this manual.

Execute and verify results

When done, submit the job and make sure it ends with MAXCC of 0.

Via SDSF, verify that the transform service was in fact called and completed successfully. Successful output would look something like this:

{
  "status": "OK",
  "outputNames": [
    "transform-output/M9.SHY.DB2.IMGCPY.M9DB.M9SEG4"
  ],
  "inputName": "M9.SHY.DB2.IMGCPY.M9DB.M9SEG4",
  "outputCompression": "NONE",
  "outputSizeInBytes": 1064,
  "outputFormat": "CSV"
}

Supported DB2 Column Types

Table 5. Supported DB2 Column Types for Transformation

392-3: TIMESTAMP

2448-9: TIMESTAMP WITH TIME ZONE

384-5: DATE

388-9: TIME

452-3: CHAR

448-9, 456-7: VARCHAR

480-1: REAL/FLOAT/DOUBLE

484-5: DECIMAL/DEC/NUMERIC

492-3: BIGINT

496-7: INTEGER/INT

500-1: SMALLINT
