• Ah... if only it were that easy... a 1-to-1 mapping (sigh).

    On one migration I did, a single segment was converted to 200+ tables, each with its own set of primary key columns.

    Jeff,

    As Ed pointed out, there are several types of segments and several types of records. This on its own needs to be taken into consideration when doing the mapping.

    But we are also talking COBOL, which adds another level of complexity unless the application was designed to be normalized from the start (unlikely, but possible).

    The following is not necessarily how the application the OP has to deal with is structured, hence the need to talk with the COBOL developers, as they alone will be able to supply the required information. But it is a very common practice within old COBOL applications.

    One way to understand what may be involved here:

    Consider all the sys.* tables: sys.objects, sys.columns, sys.foreign_keys, sys.indexes, sys.types, sys.schemas, etc.

    These are considered metadata tables.

    In a COBOL/IMS application, all of the above tables could be defined in a single physical segment - metadata - with the following IMS layout (example):

    segment name - metadata
        data record
            key field  - char(50)
            data field - char(4000)

    So anyone looking at the IMS structure and doing a direct conversion to SQL Server would just create a single table with 2 columns.
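In Python terms, that naive one-table view amounts to nothing more than slicing each raw record into the two fixed-width fields. A minimal sketch (the sample record and widths are taken from the example layout above; the function name is mine):

```python
# Hypothetical sketch: the naive "direct convert" view of the segment.
# Each raw record is just key char(50) + data char(4000).

def naive_row(record: str) -> dict:
    """Split a fixed-width IMS record into the only two columns a
    direct conversion would produce: key_field and data_field."""
    assert len(record) == 4050, "expected key char(50) + data char(4000)"
    return {"key_field": record[:50], "data_field": record[50:]}

# A padded sample record: everything - record type, schema id, name -
# stays buried inside two opaque blobs.
sample = ("SC0001" + " " * 44) + ("dbo" + " " * 3997)
row = naive_row(sample)
```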

    But the COBOL application internally redefines the above at a higher level, as follows:

    schemasrecord redefines datarecord
        key field -- cobol group record
            recordtype - char(2) -- value SC on data record
            schemaid - int (cobol PIC 9(4))
            filler - char(44) -- empty data
        data field -- cobol group record
            name - char(30)
            principal_id - int (cobol PIC 9(4))
            filler - char(xx)

    objectsrecord redefines datarecord
        key field -- cobol group record
            recordtype - char(2) -- value OB on data record
            objectid - int (cobol PIC 9(4))
            filler - char(44) -- empty data
        data field -- cobol group record
            name - char(30)
            schema_id - int (cobol PIC 9(4))
            type - char(2)
            type_desc - char(60)
            principal_id - int (cobol PIC 9(4))
            filler - char(xx)

    indexesrecord redefines datarecord
        key field -- cobol group record
            recordtype - char(2) -- value IX on data record
            objectid - int (cobol PIC 9(4))
            indexid - int (cobol PIC 9(4))
            filler - char(40) -- empty data
        data field -- cobol group record
            name - char(30)
            schema_id - int (cobol PIC 9(4))
            type - char(2)
            type_desc - char(60)
            is_unique - int (cobol PIC 9(1))
            data_space_id - int (cobol PIC 9(1))
            ignore_dup_key - int (cobol PIC 9(1))
            filler - char(xx)

    indexescolumnsrecord redefines datarecord
        key field -- cobol group record
            recordtype - char(2) -- value IC on data record
            objectid - int (cobol PIC 9(4))
            indexid - int (cobol PIC 9(4))
            indexcolumnid - int (cobol PIC 9(4))
            filler - char(36) -- empty data (pads the key out to char(50))
        data field -- cobol group record
            name - char(30)
            column_id - int (cobol PIC 9(4))
            key_ordinal - int (cobol PIC 9(4))
            partition_ordinal - int (cobol PIC 9(4))
            is_descending_key - int (cobol PIC 9(1))
            is_included_column - int (cobol PIC 9(1))
            filler - char(xx)

    The above is common in the COBOL world, and it is also what makes it hard to convert to a relational database.
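To make that concrete, here is a minimal Python sketch of what the conversion logic ends up doing: dispatching on the record type in the first two bytes of the key and slicing the data field per layout. The parser and sample record are mine, not from any real migration; field offsets follow the example layouts above, and only two of the four record types are shown.

```python
# Hypothetical sketch: record-type dispatch over the redefined layouts.
# Raw record = key char(50) + data char(4000); recordtype = key[0:2].

def parse_schemas(key: str, data: str) -> dict:
    # schemasrecord: key = "SC" + schemaid PIC 9(4); data = name + principal_id
    return {
        "schema_id": int(key[2:6]),
        "name": data[0:30].rstrip(),
        "principal_id": int(data[30:34]),
    }

def parse_objects(key: str, data: str) -> dict:
    # objectsrecord: key = "OB" + objectid PIC 9(4)
    return {
        "object_id": int(key[2:6]),
        "name": data[0:30].rstrip(),
        "schema_id": int(data[30:34]),
        "type": data[34:36].rstrip(),
        "type_desc": data[36:96].rstrip(),
        "principal_id": int(data[96:100]),
    }

# Each record type feeds a different target table in SQL Server.
DISPATCH = {"SC": ("schemas", parse_schemas), "OB": ("objects", parse_objects)}

def convert(record: str):
    key, data = record[:50], record[50:]
    table, parser = DISPATCH[key[:2]]  # the 2-char recordtype drives the mapping
    return table, parser(key, data)

# Sample: an SC record for schema "dbo" (schema_id 5, principal_id 1).
rec = ("SC0005" + " " * 44) + ("dbo" + " " * 27 + "0001").ljust(4000)
table, row = convert(rec)
```

One physical segment thus fans out into one target table per record type, which is exactly how a single segment can explode into hundreds of tables.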