• Well, what I have in mind is a strategic attack on the problem, once we agree on what the problem is and find enough enthusiastic people to help out.

    1/ First phase. Work towards a standard that is simple and resilient, by discussion and experiment. Use the JSON philosophy of keeping the standard as bullet-proof as possible; I think one might even achieve backward compatibility with existing CSV. Test it out with code that we can throw together quickly, and get something that meets the core requirements well. Learn from history (e.g. the over-complexity of YAML, the ridiculous verbosity of XML). Try informal tests such as copying AdventureWorks to PostgreSQL and back again, and then comparing the two copies (a sketch of such a round-trip check follows this list).

    2/ Define the first draft standard. Test it until it really hurts.

    3/ Publish v1 of the draft standard.

    4/ Create good parsers in a variety of languages, and ways of testing files for correctness (a simple conformance check is sketched after this list).

    5/ Develop ETL tools for SQL Server and other RDBMSs, together with conversion tools (an export sketch also follows this list).

    6/ Campaign for filters for the main office tools.

    7/ When it becomes worthwhile for ordinary folks to use it, publicize it.
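
    To make step 1 concrete, here is the sort of throwaway check I mean, in Python: dump the same table before and after the round trip and compare the rows. The file names are placeholders, and the stdlib csv module merely stands in for a parser of the eventual standard.

        import csv

        def load_rows(path):
            # Read every row as a list of strings; the stdlib csv module
            # stands in for a parser of the eventual standard.
            with open(path, newline='', encoding='utf-8') as f:
                return sorted(csv.reader(f))  # sort so row order doesn't matter

        # Hypothetical file names: the same table dumped before and after
        # the SQL Server -> PostgreSQL -> SQL Server round trip.
        before = load_rows('person_before.csv')
        after = load_rows('person_after.csv')

        if before == after:
            print('Round trip preserved the data.')
        else:
            print('Lost rows:', [r for r in before if r not in after][:5])
            print('Gained rows:', [r for r in after if r not in before][:5])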
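
    For step 4, testing files for correctness could begin as nothing more than a folder of tiny cases, each pairing an input file with the table it should parse to. The directory layout and the .input/.expected naming are my assumptions, and again the stdlib csv module is only a stand-in for the real grammar.

        import csv, json, pathlib

        def parse(path):
            # Stand-in parser; the real one would implement the draft grammar.
            with open(path, newline='', encoding='utf-8') as f:
                return list(csv.reader(f))

        # Assumed layout: cases/foo.input paired with cases/foo.expected,
        # the latter holding the correct table as a JSON array of arrays.
        for case in sorted(pathlib.Path('cases').glob('*.input')):
            expected = json.loads(case.with_suffix('.expected').read_text())
            print('ok  ' if parse(case) == expected else 'FAIL', case.name)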
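
    And for step 5, a first export path out of SQL Server need be little more than a cursor feeding a writer. This sketch uses the pyodbc driver with a placeholder connection string and table, and writes plain CSV where the new format would eventually go.

        import csv
        import pyodbc

        # Placeholder connection string and table name; adjust to taste.
        conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                              'SERVER=localhost;DATABASE=AdventureWorks;'
                              'Trusted_Connection=yes;')
        cursor = conn.cursor()
        cursor.execute('SELECT * FROM Person.Person')

        with open('person.csv', 'w', newline='', encoding='utf-8') as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cursor.description])  # column names
            writer.writerows(cursor)  # pyodbc rows behave as sequences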

    I have certain questions:

    Should the standard support merged cells (by row and column), to allow spreadsheet import/export and representation as HTML tables?
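
    Just to pin down what that would imply for the data model: a merged cell is roughly what HTML calls rowspan/colspan, so any representation would need to carry something like the record below. The shape is hypothetical, not a proposal.

        # Hypothetical: one way a parser might represent a merged cell,
        # mirroring HTML's rowspan/colspan attributes.
        cell = {'value': 'Q1 Sales', 'row': 0, 'col': 1, 'rowspan': 1, 'colspan': 3}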

    Should a file allow more than one 'result' or table?
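
    Supposing, purely for argument's sake, that multiple tables were marked off by a '== name' line, a reader might look like the sketch below. The marker syntax is invented for illustration; what the standard should actually use is exactly the question above.

        import csv

        def read_tables(path):
            # Hypothetical reader for a file holding several named tables,
            # assuming (purely for argument's sake) that each table starts
            # with a '== name' marker line.
            tables, name, buf = {}, None, []
            with open(path, encoding='utf-8') as f:
                for line in f:
                    if line.startswith('== '):
                        if name is not None:
                            tables[name] = list(csv.reader(buf))
                        name, buf = line[3:].strip(), []
                    else:
                        buf.append(line)
            if name is not None:
                tables[name] = list(csv.reader(buf))
            return tables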

    What form should foreign references take?
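
    Whatever syntax is chosen, a foreign reference would have to carry roughly the facts in SQL's REFERENCES clause; the shape below is invented purely to illustrate.

        # Hypothetical: the facts a foreign reference must carry, whatever
        # the eventual syntax; modelled on SQL's REFERENCES clause.
        column_meta = {'name': 'BusinessEntityID',
                       'references': {'table': 'Person.BusinessEntity',
                                      'column': 'BusinessEntityID'}}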

    How should one specify constraints?
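
    SQL's own vocabulary (NOT NULL, UNIQUE, CHECK) suggests the minimum a specification would need to express; again, the shape below is invented for illustration only.

        # Hypothetical: per-column constraints borrowing SQL's vocabulary.
        constraints = {'EmailAddress': ['NOT NULL', 'UNIQUE'],
                       'ModifiedDate': ['NOT NULL'],
                       'Age': ['CHECK (Age >= 0)']}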

    ...and so on...

    Best wishes,
    Phil Factor