• Thanks Jeff,

    The file is ASCII; the record structure is:

    line 1 [Block Number] [No Values in Block]

    lines 2...341 [value] [value] [value]

    e.g. the first 4 lines are

    1 1018

    0.251726450000000000E+07 0.251729650000000000E+07 0.149568940363419000E+08

    0.129347882411886640E+08 -0.194328028922087426E+06 -0.294282022326385668E+05

    -0.383682118344389664E+03 -0.145789684500091479E+02 0.583642027748376771E-01

    A further 337 lines follow, each containing 3 float values, giving a total of 1020 float values per block. However, the initial line of the block gives 1018 as the No Values in Block: the last two values in the block are zero and would be ignored.

    A sample is attached. I have cut the file down so there are only 10 blocks of data; the original file has around 250 blocks and is over 6 MB.
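
    For reference, a minimal parsing sketch in Python, assuming every block is laid out exactly as described above (a header line with the block number and value count, then 340 lines of 3 floats, padded with zeros up to 1020 values); the file name on the command line is hypothetical:

    import sys

    def read_blocks(path):
        """Return a list of (block_number, values) tuples from the ASCII file."""
        with open(path) as f:
            tokens = f.read().split()            # whitespace-separated numbers
        blocks = []
        i = 0
        while i + 1 < len(tokens):
            block_no = int(tokens[i])            # e.g. 1
            n_values = int(tokens[i + 1])        # e.g. 1018 ("No Values in Block")
            raw = [float(t) for t in tokens[i + 2 : i + 2 + 1020]]
            blocks.append((block_no, raw[:n_values]))   # drop the trailing padding zeros
            i += 2 + 1020                        # advance to the next block header
        return blocks

    if __name__ == "__main__":
        for block_no, values in read_blocks(sys.argv[1]):
            print(block_no, len(values), values[0], values[-1])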

    My first idea was simply to store each block of data in a TEXT field, but it has been rejected: the user might want to query individual values or search for value ranges at some point in the future.

    Then I mused on the idea of pivoting the data, so that each row represents a position in the block and each column represents an entire block. I assume SQL 2005 no longer has the 8060-byte record size limit, so I could have 256 (blocks) times 50 (files) worth of float values in a row. It would still be awkward to reference the data, as the column names would be File1Block1, File1Block2, ..., FileNBlockN; a rough sketch of what that naming scheme implies is below.
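
    To make the column naming concrete, here is a rough sketch of the pivoted layout using the figures quoted above (50 files of 256 blocks, 1020 positions per block); the counts are illustrative only:

    N_FILES = 50
    N_BLOCKS = 256
    POSITIONS_PER_BLOCK = 1020   # rows in the pivoted table, one per position

    # One float column per (file, block) pair.
    column_names = [f"File{f}Block{b}"
                    for f in range(1, N_FILES + 1)
                    for b in range(1, N_BLOCKS + 1)]

    print(len(column_names))                    # 12800 float columns
    print(column_names[0], column_names[-1])    # File1Block1 File50Block256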

    I look forward to anyone else's suggestions.