Hash Partitioning

  • Hi all,

    I am in a situation where I am not sure how to divide/partition a table and send the data to multiple filegroups/files.

    I want to implement hash partitioning for these tables, as we do for tables in Oracle.

    Please share your pointers.

    Thanks & Regards,

    Sudhir Nune.

  • Pick a column in your table and create a persisted computed column off it using the HASHBYTES function:

    ALTER TABLE dbo.[TableName]
    ADD [HashedColumnName] AS CAST(HASHBYTES('MD5', CAST([ExistingColumnName] AS VARCHAR(40))) AS VARBINARY(40))
    PERSISTED; -- important: the computed column must be persisted to be used as a partitioning column

    Creating Partitioned Tables and Indexes
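    If it helps to visualize what that computed column does, here is a small Python sketch (Python purely for illustration; mapping the digest's first byte to a bucket is an assumption of this sketch — in SQL Server the partition function's boundary values over the VARBINARY column decide the partition):

```python
import hashlib

def hash_partition(value: str, partitions: int = 4) -> int:
    """Mimic the computed column: MD5 the value, then map the digest
    to one of N buckets (a stand-in for a partition function defined
    over VARBINARY ranges)."""
    digest = hashlib.md5(value.encode("utf-8")).digest()
    # Pick a bucket from the first byte of the digest.
    return digest[0] % partitions

# Nearly identical keys land in unrelated buckets:
for key in ("AAAAA", "AABAA", "AACAA"):
    print(key, "->", hash_partition(key))
```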

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • I have said it many times before and will say it again. Table Partitioning in SQL Server is a very complex subsystem with lots of requirements, conditions, gotchas, etc. You have very little chance of success if you just read a few forum/blog posts and Books Online and slap it into your system - even if you have extensive Oracle experience. I can't tell you the number of clients and forum people that have gotten themselves into a mess, or at best increased complexity in their system for no benefit, from doing partitioning on their own. And most don't actually NEED partitioning in the first place. Please consider getting a professional on board to help you understand what SQL Server Partitioning is and is not, review your requirements, and move you forward in the best direction successfully.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Thanks for the reply, I will try to do as you said.

  • You are highly unlikely to achieve anything other than adding complexity and wasting time. Partitioning is not a performance optimisation (mostly), it's for ease of maintenance, loading and deleting rows.

    Partitioning a table so that it is useful requires a careful choice of partitioning column based on the data and the queries, and a semi-random hash is highly unlikely to be useful.

    Honestly, don't go this route.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (5/19/2012)


    You are highly unlikely to achieve anything other than adding complexity and wasting time. Partitioning is not a performance optimisation (mostly), it's for ease of maintenance, loading and deleting rows.

    Partitioning a table so that it is useful requires a careful choice of partitioning column based on the data and the queries, and a semi-random hash is highly unlikely to be useful.

    Honestly, don't go this route.

    What about for tables with no natural partitioning key and a low cardinality on all candidates?

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • opc.three (5/19/2012)


    GilaMonster (5/19/2012)


    You are highly unlikely to achieve anything other than adding complexity and wasting time. Partitioning is not a performance optimisation (mostly), it's for ease of maintenance, loading and deleting rows.

    Partitioning a table so that it is useful requires a careful choice of partitioning column based on the data and the queries, and a semi-random hash is highly unlikely to be useful.

    Honestly, don't go this route.

    What about for tables with no natural partitioning key and a low cardinality on all candidates?

    I agree with Gail. It won't help for performance, and it won't help with maintenance either, because hash partitioning means new rows have just as much likelihood of being added to one partition as another. It's no better than partitioning on something like NEWID(). IMHO, hash partitioning isn't just a waste of time, it's a move in the wrong direction. You're adding complexity to make things worse. It's kind of like using NEWID() as the clustered index of a table, except it's worse.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • So would you say this is simply one area where the concept does not transfer from Oracle to SQL Server? Oracle touts partitioning as a way to improve performance by allowing the table 'to be managed and accessed at a finer level of granularity'.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • On the point of hash partitioning, correct. If you just want to spread a table over multiple drives, create multiple files in the filegroup and SQL will do its proportional fill and spread the data without you having to do anything.

    Partitioning's great for improved maintenance (rebuilding individual partitions, partition switch for data load or removal), but it requires careful planning and a good knowledge of the data.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Partitioning is a great way to reduce index maintenance. It might even help the performance of certain types of queries. However, and I've not done it in Oracle, I can't see how "hash partitioning" will help anything, because of the things I mentioned. It's about as effective as partitioning on NEWID() or any other random number.

    [EDIT] The only place I can see where it might help is to spread the load across drives as Gail just mentioned. What would be a much larger help is to fix the data and give it some sort of reasonable partitioning column.

    Consider the following example... we have some very easily sortable data that, under normal conditions, would appear in a single partition (say, the "A" partition out of 26 lettered partitions).

    SELECT HASHBYTES('SHA1','AAAAA') UNION ALL
    SELECT HASHBYTES('SHA1','AABAA') UNION ALL
    SELECT HASHBYTES('SHA1','AACAA') UNION ALL
    SELECT HASHBYTES('SHA1','AADAA') UNION ALL
    SELECT HASHBYTES('SHA1','AAEAA');

    Now, if we look at the HASHBYTES output of that, you'll easily see that having similar data means nothing to HASHBYTES.

    0xC1FE3A7B487F66A6AC8C7E4794BC55C31B0EF403
    0x9F97773310D994CBEFDB584A0D1883EDBF610B28
    0x81778503C0258525DBE9F4D999CCB46EEB25379A
    0xE2A9D1EB98CFED14F12629EEF20B1A44B6E4E96A
    0x2F45DA2C2D16CCC875CC308C445A070D5957B3B2

    There are 5 items and the hashes all start with a different character. How is partitioning on this going to help anything? All 5 partitions got inserts, which means that 5 indexes also got updated, and you still have to go across all 5 partitions to return the 5 pieces of data that start with "AA".
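    Jeff's point is easy to reproduce outside SQL Server: hashlib.sha1 over the same ASCII strings yields the same digests HASHBYTES('SHA1', ...) returns for a VARCHAR input (Python used only as an illustration here):

```python
import hashlib

# The same five near-identical values hashed in the T-SQL example above.
keys = ["AAAAA", "AABAA", "AACAA", "AADAA", "AAEAA"]
digests = [hashlib.sha1(k.encode("ascii")).hexdigest().upper() for k in keys]

for k, d in zip(keys, digests):
    print(k, "0x" + d)

# Near-identical inputs produce unrelated digests, so each of these
# rows would land in a different hash partition.
first_bytes = {d[:2] for d in digests}
```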

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • opc.three (5/19/2012)


    So would you say this is simply one area where the concept does not transfer from Oracle to SQL Server? Oracle touts partitioning as a way to improve performance by allowing the table 'to be managed and accessed at a finer level of granularity'.

    In my Oracle experience(*) the only scenario where table partitioning improves performance is when:

    a) Query does a Full Table Scan

    AND

    b) Partitioning allows restricting the Full Table Scan to one (or a few) partitions

    Either way, facing such a situation I would ask: why is a query attempting a Full Table Scan on a zillion-row table? The answer is, it shouldn't; re-work the query, re-work the indexing strategy.

    (*) I'm pretty sure the same applies to SQL Server
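    The (a) + (b) scenario above — a scan restricted to only the partitions a predicate allows — can be sketched abstractly like this (a hypothetical in-memory layout, not SQL Server internals):

```python
# Rows bucketed by a partitioning column (year, in this sketch).
partitions = {
    2010: [("a", 2010), ("b", 2010)],
    2011: [("c", 2011)],
    2012: [("d", 2012), ("e", 2012)],
}

def scan(predicate):
    """Full scan with partition elimination: partitions whose key
    fails the predicate are never read at all."""
    touched = [year for year in partitions if predicate(year)]
    rows = [row for year in touched for row in partitions[year]]
    return touched, rows

# Only the 2012 partition is read for this predicate.
touched, rows = scan(lambda year: year >= 2012)
print(touched, rows)
```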

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
  • What I see a lot of people chasing when thinking about implementing partitioning is 'partition parallelism': the idea that if a query asking for data from multiple partitions were parallelized, then SQL Server would be able to retrieve those results more efficiently than from a non-partitioned table, especially if those partitions are placed on different IO subsystems.

    In the case of using a hash as a partition key, or NEWID(), which I expect would have a similar net effect, the random nature of the data element is what makes it useful in distributing data to multiple partitions. As for whether it will be as good at distributing data to partitions as the proportional-fill algorithm is at distributing data to multiple files in a filegroup, I doubt it, since the former is random and data-dependent while the latter is controlled by the storage engine.
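    That difference can be simulated: hash bucketing is only approximately even because it is data-dependent, while a round-robin analogue of proportional fill is exact (a rough sketch; SQL Server's proportional fill also weights by free space per file, which this ignores):

```python
import hashlib
from collections import Counter

def hash_bucket(key: str, n: int) -> int:
    # Data-dependent: the bucket is chosen by the key's MD5 digest.
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big") % n

N = 10_000
hash_counts = Counter(hash_bucket(f"row-{i}", 4) for i in range(N))
# Round-robin analogue of proportional fill: data-independent, exact.
fill_counts = Counter(i % 4 for i in range(N))

print("hash :", sorted(hash_counts.items()))
print("fill :", sorted(fill_counts.items()))
```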

    I can respect the notion that partitioning is not required for the entry-level use case, and that knowing the data and query practices in your environment is critical to reaping the benefits of implementing partitioning. I can also respect the notion that asking the type of question the OP asked in an online forum says something beyond just the question. But, if done right, is chasing 'partition parallelism' just chasing cars?

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • Jeff Moden (5/19/2012)


    IMHO, hash partitioning isn't just a waste of time, it's a move in the wrong direction. You're adding complexity to make things worse. It's kind of like using NEWID() as the clustered index of a table except it's worse.

    I'm not going to suggest hash partitioning or NEWID() keys are right for many systems, but the answer is very much "it depends", and these links explain why:

    Hash partitioning:

    Resolving PAGELATCH Contention on Highly Concurrent INSERT Workloads

    NEWID() keys 100 times faster than IDENTITY and NEWSEQUENTIALID():

    Boosting INSERT Speed by Generating Scalable Keys

  • It's interesting and I'll have to test it, of course. Thank you very much for the links.

    The articles are missing a lot of information and even have some perceived errors in them (talking about page splits with no indexes on the given test table, for one example). They've done their job in whetting my appetite for more information, and they could be 100% correct in their statements and "observations", but I wouldn't use them as any kind of litmus test in their current form.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (5/20/2012)


    The articles are missing a lot of information and even have some perceived errors in them (talking about page splits with no indexes on the given test table, for one example).

    The missing index (CREATE CLUSTERED INDEX MyBigTable_cl ON MyBigTable (c1);) is defined in Paul Randal's original challenge. Thomas does link to that post in the first paragraph. The omission is unfortunate, but it probably does make sense to read the post Thomas is replying to first.

Viewing 15 posts - 1 through 15 (of 22 total)
