This guide explains how to set up and invoke the Model9 Gravity transform service from the mainframe using JCL. The service transforms a Model9 data set backup copy, archive, or import into a readable file in the cloud. Once transformed, the readable file can be accessed directly or via data analytics tools.
Model9 is responsible for delivering the data set from the mainframe to cloud or on-premises storage. The data set is delivered as a backup copy, an archive, or an imported tape data set, and provides the input to the transform service.
This free tool allows you to invoke the transform service from z/OS. If cURL is not installed under /usr/bin, edit line 4 and add the path where the cURL module resides.
Copy the following script to /usr/lpp/model9/gravity-run.sh:
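The shipped script itself is not reproduced here. As a rough, hypothetical sketch of what such a cURL wrapper does (the request path and argument handling are assumptions, not the actual Model9 script):

```sh
#!/bin/sh
# Hypothetical sketch only -- not the shipped Model9 script.
# Line 4 sets the cURL path; change it if cURL is not under /usr/bin.
CURL=/usr/bin/curl
# $1 - transform service URL, $2 - file containing the JSON request body
exec "$CURL" -k -X POST -H "Content-Type: application/json" \
  --data-binary @"$2" "$1"
```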
Copy the following JCL to a local library and update the JOBCARD according to your site standards:
Copy the following object storage variables from the Model9 agent configuration file:
<URL>
<API>
<BUCKET>
<USER>
<PASSWORD>
The "complex” name represents the group of resources that the Model9 agent can access. By default, this group is named group-<SYSPLEX>
and it is shared by all the agents in the same sysplex. The transform JCL specifies the default, using the z/OS system symbol &SYSPLEX
.
If the default was kept for "complex" in the Model9 agent configuration file, no change is needed
If the "complex” name was changed in the Model9 agent configuration file, change the "complex” in the JCL accordingly.
By default, the JCL creates a transformed copy of your input data set in the same bucket, with the prefix /transform/&LYR4/&LMON/&LDAY. The prefix uses the following z/OS system symbols:
&LYR4 - the year in 4 digits, e.g. 2019
&LMON - the month in 2 digits, e.g. 08
&LDAY - the day of the month in 2 digits, e.g. 10
For example, a data set transformed on August 10, 2019 receives the prefix /transform/2019/08/10. You can change the prefix according to your needs.
The data set to be transformed should be a backup copy, an archive, or an imported tape data set delivered by the Model9 agent:
<DATA-SET> - the name of the data set
<BACKUP|ARCHIVE|IMPORT> - whether the data set is a Model9 backup, archive, or import
To change the attributes of the input and the output, and for a full description of the service parameters, see Service parameters.
Submit the job and view the output. See Service response and log samples for sample output.
Based on the returned response, the outputName will point to the path inside the bucket where the transformed data resides. See Service response and log samples.
Instructions for installing Model9 Gravity in an on-premises environment.
The following guide provides details on how to install and configure the Model9 Gravity application.
An 8 vCPU, 32 GB RAM instance with 1 TB of storage (preferably SSD).
AWS instance types:
m5d.2xlarge
m5ad.2xlarge
docker is expected to be installed on the instance.
A user with docker privileges is required for running docker commands. Depending on the system's configuration, root access might be required for some steps; using the root user for the installation is recommended.
Obtain the Model9 Gravity installation package from Model9 and move it to the target installation system.
The recommended location for the package files on the installation system is:
In the snippet above, <version> represents the package version number, e.g. "1.0.0".
Exported environment variables will be used by the following installation steps.
$GRAVITY_HOME/work is the work directory used by the service for data processing. It should be mounted on a block device with enough available space.
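For illustration, the exports and work-directory setup might look like the following; the installation path is an assumption, substitute your own:

```sh
# Hypothetical values -- adjust paths to your environment.
export GRAVITY_HOME=/opt/model9/gravity   # assumed installation directory
mkdir -p "$GRAVITY_HOME/work"             # work directory for data processing
# Mount a block device with sufficient free space on $GRAVITY_HOME/work,
# e.g.: mount /dev/nvme1n1 "$GRAVITY_HOME/work"
```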
A user with docker privileges or root access will be required for the following steps.
model9 is chosen here as the default database password.
The $GRAVITY_HOME/keys folder contains the key stores provided in the package, which include the default self-signed Model9 certificates for setting up TLS in the Gravity service. If required, these key stores can be replaced with key stores provided by the organization.
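If you choose to replace the default self-signed certificates with your own, a PKCS12 key store can, for example, be generated with the standard Java keytool; the file name and alias below are illustrative assumptions:

```sh
# Generate a self-signed PKCS12 key store (file name and alias are examples).
keytool -genkeypair -alias gravity -keyalg RSA -keysize 2048 -validity 365 \
  -storetype PKCS12 -keystore "$GRAVITY_HOME/keys/keystore.p12"
```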
After creating the application.properties file, edit it and fill in the missing values:
vi is usually available by default on Linux systems. If more convenient, other editors such as nano can also be used.
If the default database password has been changed, use the model9.gravity.datasource.password configuration option to change it.
Change the TZ environment variable as appropriate.
Change the -Xmx value to reflect the amount of memory available for the service (in GB). Remember to leave a few GB free for the operating system.
443 is the default secure port; if you run the service on another port, change this value as well.
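As a sketch of how these settings might fit together when running the container (the image name, JAVA_OPTS variable, volume layout, and port mapping are illustrative assumptions, not the documented invocation):

```sh
# Hypothetical invocation -- only TZ, -Xmx and port 443 are taken from this guide.
docker run -d \
  -e TZ=America/New_York \
  -e JAVA_OPTS="-Xmx28g" \
  -v "$GRAVITY_HOME:/gravity" \
  -p 443:443 \
  model9/gravity:<version>
```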
For details of the configuration options and their default values, see the page.
To run the service, specify the source object storage and identify the input data set.
Identify the transform source object storage, where the input resides. The source object storage details appear in the Model9 agent configuration file.
Identify the transform target object storage. Values not specified will be taken from the "source" parameter.
If you specify VSAM keywords for a sequential input data set, the transform will be performed and a warning message will be issued.
The output is the transformed data of the mainframe data set, accessible as an S3 object.
When transforming a file with the same name as an existing file in the target, the existing file is replaced by the newly transformed file.
Note that the service does not delete previously transformed files but rather overwrites files with the same name. When re-transforming a file using the "split" function, be sure to remove any previously transformed files to avoid having split files of different versions.
When splitting a file, wait for the successful completion of the transform function before continuing with the processing, to ensure all the parts of the file were created.
Specifying "text" format for a "binary" input will cause the transform to fail.
Transform the latest backup of a plain text data set, charset IBM-1047, converted to UTF8 and compressed.
Transform the latest backup of an unloaded DB2 table, charset IBM-1047, converted to UTF8 and compressed, located with a specific prefix:
When transforming a VSAM file, the defaults are a text key and binary data, transforming to a JSON output file:
Specify text data, transforming to a CSV output file:
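The request bodies for these examples are not reproduced here. As a hedged illustration of the first example (plain text data set, IBM-1047, converted to UTF8 and compressed), a request might look like the following; the host and endpoint path are assumptions, and the keywords are those documented under Service parameters:

```sh
# Hypothetical host and endpoint path; keyword names follow the Service
# parameters tables. The "input"/"output" section names are assumptions.
curl -k -X POST "https://<gravity-host>/transform" \
  -H "Content-Type: application/json" \
  -d '{
        "source": {
          "url": "<URL>", "api": "<API>", "bucket": "<BUCKET>",
          "user": "<USER>", "password": "<PASSWORD>"
        },
        "input": { "name": "MY.TEXT.DATASET", "type": "backup",
                   "recordCharset": "IBM-1047" },
        "output": { "format": "Text", "charset": "UTF8",
                    "compression": "gzip" }
      }'
```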
When transforming data on Azure Blob storage with OAuth2, set "api" to azureblob-oauth2 and use the azureOauth section to specify the Azure OAuth arguments listed in the table "Azure OAuth2 Arguments" below.
The transform service is invoked as an HTTP request. It returns an HTTP response code and a set of output keywords, listed in the tables below. In case of a WARNING or an ERROR status, the HTTP response also contains log messages.
Informational messages are printed only to the service log and not to the HTTP response. The service log can be viewed on the AWS console when executing the service from AWS, or in the docker log when executing the service on-premises.
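As a hedged illustration of a successful response (the field values are invented for the example; the keywords are those listed in the output keywords table below):

```json
{
  "status": "OK",
  "inputName": "MY.TEXT.DATASET",
  "outputName": "transform/2019/08/10/MY.TEXT.DATASET",
  "outputFormat": "Text",
  "outputCompression": "gzip",
  "outputSizeInBytes": 1048576
}
```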
The following data set types are supported:
SMS-managed data sets
Non-SMS-managed data sets
Sequential and extended-sequential data sets with RECFM V, VB, F, FB, or FBA
Non-extended VSAM KSDS data sets
The following are not supported:
VSAM RRDS, VRRDS, LINEAR, and ESDS data sets
Extended-format data sets with compression or encryption
PDS data sets
RECFM values not mentioned above (such as U)
Output formats:
Text
JSON
CSV
Make sure that <M9_HOME>/scripts/transform-service.sh has execute permissions. If not, add them by using chmod a+x <M9_HOME>/scripts/transform-service.sh.
Copy M9XFDB2 from Model9's SAMPLIB data set to a PDS data set of your choosing.
Edit M9XFDB2 and replace the placeholders enclosed in angle brackets as listed in the table "Placeholders" below.
Replace the remaining placeholders in the JCL as described in this manual.
When done, submit the job and make sure it ends with MAXCC of 0.
Via SDSF, verify that the transform service was in fact called and completed successfully. For the DB2 column types supported for transformation, see "Table 5. Supported DB2 Column Types for Transformation" below.
For any questions, contact us at support@model9.io
Model9 website: www.model9.io
Table: Source object storage parameters

Keyword | Description | Required | Default |
---|---|---|---|
url | The object storage / proxy url | YES | - |
api | The api-protocol used by this object storage / proxy | YES | - |
bucket | The bucket defined within the object storage / proxy | YES | - |
user | The userid provided by the object storage / proxy | YES | - |
password | The password provided by the object storage / proxy | YES | - |
useS3V4Signatures | Whether to use the V4 protocol of S3. Required for certain object storage providers, such as HCP Cloud Scale and Cohesity. Relevant for api "S3" only. | NO | false |
Table: Target object storage parameters

Keyword | Description | Required | Default |
---|---|---|---|
url | The object storage / proxy url | NO | Taken from "source" |
api | The api-protocol used by this object storage / proxy | NO | Taken from "source" |
bucket | The bucket defined within the object storage / proxy | NO | Taken from "source" |
user | The userid provided by the object storage / proxy | NO | Taken from "source" |
password | The password provided by the object storage / proxy | NO | Taken from "source" |
useS3V4Signatures | Whether to use the V4 protocol of S3. Required for certain object storage providers, such as HCP Cloud Scale and Cohesity. Relevant for api "S3" only. | NO | false |
Table: Input data set parameters

Keyword | Description | Value / Default |
---|---|---|
name | Name of the original data set | Mainframe data set legal name, case insensitive |
complex | The Model9 resource complex name as defined in the agent configuration file | String representing the complex |
type | The type of the input data set, according to the Model9 Cloud Data Manager policy that created it: "backup" - a backup copy (default); "archive" - an archived data set; "import" - a data set imported from tape | "backup" (case insensitive) |
entry | When the type is "backup", "entry" represents the generation. The default is "0", meaning the latest backup copy. Entry "1" would be the backup copy taken prior to the latest copy, and so on. | "0" |
prefix | The environment prefix as defined in the agent configuration file | "model9" |
recordBinary | Whether the record input is binary. Applies to all "record" input (PS, PDS, VSAM data) | "false" (case insensitive) |
recordCharset | If the record input is not binary, the character set of the input. Applies to all "record" input (PS, PDS, VSAM data) | "IBM-1047" |
keyBinary | If the input is a VSAM data set, whether the VSAM key is binary. The output is in base64 format | "false" (case insensitive) |
keyCharset | If the input is a VSAM data set and the key is not binary, the character set of the VSAM key | "IBM-1047" |
Table: Output parameters

Keyword | Description | Default |
---|---|---|
prefix | Prefix to be added to the object name: "prefix"/"object name" | "transform" |
compression | Should the output be compressed: "gzip"\|"no" | "no" |
format | The format of the output file: "JSON"\|"Text"\|"CSV"\|"RAW" | "JSON" |
charset | If the key input is not binary, this keyword specifies the character set of the output. Currently only "UTF8" is supported | "UTF8" |
endWithNewLine | A newline will be added at the end of the file, before end of file. This is required by some applications. | false |
splitBySize | Whether to split the output into several files of the requested size, for example "3000b", "1000m", "1g". The output files will be numbered "<file-name>.1", "<file-name>.2", "<file-name>.3" and so on. Mutually exclusive with splitByRecords. The minimum value is 1024 bytes; a smaller size cannot be specified. A number without a unit is interpreted as bytes, for example "splitBySize":"1024" splits the data set into files of 1024 bytes. A record will not be split in the middle. The last part can be smaller than the specified size. The value "0" indicates no split by size. | 0 (no split by size) |
includeRdw | Whether to include the Record Descriptor Word (RDW) for data sets with record format V or VB. Note: only supported for format RAW output. | false |
splitByRecords | Whether to split the output into several files by number of output records. The output files will be numbered "<file-name>.1", "<file-name>.2", "<file-name>.3" and so on. Mutually exclusive with splitBySize. A record will not be split in the middle. The last part can include fewer records than specified. The value "0" indicates no split by records. | 0 (no split by records) |
Table: Azure OAuth2 Arguments

Field Name | Description | Required | Default Value |
---|---|---|---|
oauthEndpoint | The OAuth2 endpoint from which an OAuth2 token will be requested. This value will usually take the form https://login.microsoftonline.com/<tenant-id>/oauth2/token | true | N/A |
storageAccount | The name of the Azure storage account which contains the Azureblob container. | true | N/A |
oauthAudience | OAuth2 audience | false | https://storage.azure.com |
credentialType | OAuth2 credential type | false | clientCredentialsSecret |
Table: HTTP response codes

Code | Description |
---|---|
200 | OK |
400 | Bad user input or unsupported data set |
500 | Unexpected error |
Table: Service response output keywords

Output keyword | Description |
---|---|
status | OK - all is well, no log records. WARNING - minor problem, e.g. specifying parameters that do not fit the input data set; the log is returned. ERROR - major problem, e.g. unable to read the input data or a problem in communication; the log is returned. |
outputName | The object name as it appears in the target object storage |
inputName | The input data set name |
outputCompression | The compression type as selected in the input parameters / default |
outputSizeInBytes | The number of bytes in the output object |
outputFormat | The format as selected in the input parameters / default |
Table: Placeholders

Placeholder Name | Replace with ... |
---|---|
<M9_SAMP> | Your Model9 SAMPLIB data set. |
<M9_HOME> | Your Model9 installation directory. |
<DB2_SDSNLOAD> | Your DB2's SDSNLOAD data set. |
<TABLE_NAME> | The name of the table to be transformed. |
<SCHEMA_NAME> | The schema of the table (can be seen under column CREATOR in SYSIBM.SYSTABLES). |
<DB2_SUBSYS> | The name of the DB2 subsystem. |
<XFORM_SVC_URL> | The endpoint URL of the installed transform service. |
Table 5. Supported DB2 Column Types for Transformation

DB2 SQL Code | Name |
---|---|
392-3 | TIMESTAMP |
2448-9 | TIMESTAMP WITH TIME ZONE |
384-5 | DATE |
388-9 | TIME |
452-3 | CHAR |
448-9, 456-7 | VARCHAR |
480-1 | REAL/FLOAT/DOUBLE |
484-5 | DECIMAL/DEC/NUMERIC |
492-3 | BIGINT |
496-7 | INTEGER/INT |
500-1 | SMALLINT |