Finding maximum value out of 5 different columns

  • The Dixie Flatline (6/21/2011)


    Michael, if I may, you are being nit-picky at this point. 😛 Let's not assume the columns hold INT values. If we want to handle BIGINT columns, we really should replace the zeroes in the isnull function with -9,223,372,036,854,775,808.

    Seriously, the OP had already said he had a solution, I just threw a couple of other approaches out there for kicks. Of course, I've taken someone else to task for doing the same thing, so I will take my beating like a man.

    Thank you for the link though. I will be sure to read it.

    I don't think I was being "nit-picky" to point out that a solution may not work with NULL or negative numbers. I have come across them several times in real world data. 🙂

    However, my real reason for posting was to point out the true complexity of coding a CASE based solution with a large number of columns vs. the simplicity of coding a solution with a union subquery (the method the OP chose), and the need to test the performance of each to see which is the best solution.

    Also, as I pointed out on the link, the subquery solution is a good way to test that you have correctly coded a CASE solution by comparing the results using a large set of random data.

    I noticed no one suggested the true solution: Change the table design to hold the values in one column with multiple rows. This problem of MIN/MAX/AVG is the primary example I use when someone suggests creating a table like this. "Can you write a simple query to find the maximum value in the table across all ten columns?" That usually stops them in their tracks. If they get past that, I hit them with "Can you write a query to find the average value in the table across all ten columns when the value is between 60 and 100?"
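To make the point concrete, here is a minimal sketch of the normalized design described above. The table and column names are illustrative only (nothing in the thread defines them); with one value per row, both challenge questions collapse into ordinary aggregates:

```sql
-- Hypothetical normalized layout: one value per row instead of ten columns.
-- Table and column names are illustrative, not from the original thread.
CREATE TABLE Readings (
    id       INT         NOT NULL,
    col_name VARCHAR(10) NOT NULL,
    value    INT         NULL
);

-- "Maximum value across all ten columns" is just an aggregate:
SELECT MAX(value) AS maxvalue FROM Readings;

-- The follow-up question is equally simple:
SELECT AVG(value) AS avgvalue
FROM Readings
WHERE value BETWEEN 60 AND 100;
```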

  • However, my real reason for posting was to point out the true complexity of coding a CASE based solution with a large number of columns vs. the simplicity of coding a solution with a union subquery (the method the OP chose), and the need to test the performance of each to see which is the best solution.

    Fair enough. Judging from the results you posted in your link, I think we are in agreement that a CASE solution performs better than the UNION subquery. Admittedly CASE gets cumbersome as the number of columns increases, but it remains fast. If asked to solve this problem for a row with 100 columns, it would probably be safest to use dynamic SQL to read the table schema and generate the CASE statements.

    With respect to the changing of the schema of the underlying table, I hesitate to propose those kinds of changes because in my experience this kind of row layout is usually output from some non-database application that the OP is trying to work with. If you look, the first solution posted for fun "unpivots" the columns into rows, and then takes a max. If the OP decided to make a permanent change to his table structure, that code could be easily modified to build the new table for him.
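As a sketch of the "unpivot the columns into rows, then take a max" approach mentioned above (the column names col1–col4 come from the queries later in this thread; the `id` column is an assumption):

```sql
-- Unpivot the four columns into rows with CROSS APPLY (VALUES ...),
-- the usual T-SQL idiom, then take MAX per source row.
-- [Table], id, and col1..col4 are illustrative names.
SELECT a.id, x.maxvalue
FROM [Table] AS a
CROSS APPLY (
    SELECT MAX(v.value) AS maxvalue
    FROM (VALUES (a.col1), (a.col2), (a.col3), (a.col4)) AS v(value)
) AS x;
```

Because MAX is an aggregate, it ignores NULLs, so this form also sidesteps the NULL problem discussed earlier in the thread.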

    __________________________________________________

    Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
    Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills

  • The Dixie Flatline (6/22/2011)


    However, my real reason for posting was to point out the true complexity of coding a CASE based solution with a large number of columns vs. the simplicity of coding a solution with a union subquery (the method the OP chose), and the need to test the performance of each to see which is the best solution.

    Fair enough. Judging from the results you posted in your link, I think we are in agreement that a CASE solution performs better than the UNION subquery. Admittedly CASE gets cumbersome as the number of columns increases, but it remains fast. If asked to solve this problem for a row with 100 columns, it would probably be safest to use dynamic SQL to read the table schema and generate the CASE statements...

Actually, I thought the results were a little more ambiguous than a simple "CASE is faster." CASE looked slower in some situations with smaller numbers of rows, which is why I said they should test to see which is faster. I never tested with more than 4 columns, but I suspect CASE may be at more of a disadvantage there, because the code is more complex.

  • Isn't it as simple as below:

select
    case when col1 >= col2 and col1 >= col3 and col1 >= col4 then col1
         when col2 >= col1 and col2 >= col3 and col2 >= col4 then col2
         when col3 >= col1 and col3 >= col2 and col3 >= col4 then col3
         else col4
    end as maxvalue
from [Table] A

    Thanks

  • Zahid Naseer (7/3/2014)


    Isn't it as simple as below:

select
    case when col1 >= col2 and col1 >= col3 and col1 >= col4 then col1
         when col2 >= col1 and col2 >= col3 and col2 >= col4 then col2
         when col3 >= col1 and col3 >= col2 and col3 >= col4 then col3
         else col4
    end as maxvalue
from [Table] A

    Thanks

    This is a very old post that you are commenting on.

    However, you should read through the whole thread for discussion of alternative solutions and the impact of null values on your case solution.
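To see the NULL impact concretely: any comparison against NULL evaluates to UNKNOWN, so a single NULL column can knock out the branch that holds the true maximum and let the query fall through to the ELSE. A small sketch using the same CASE shape (the VALUES row is a made-up example):

```sql
-- One NULL poisons the CASE version: with col2 NULL, every branch that
-- compares against col2 evaluates to UNKNOWN, so the true maximum
-- (col1 = 9) is skipped and the ELSE branch returns col4 instead.
SELECT
    CASE WHEN col1 >= col2 AND col1 >= col3 AND col1 >= col4 THEN col1
         WHEN col2 >= col1 AND col2 >= col3 AND col2 >= col4 THEN col2
         WHEN col3 >= col1 AND col3 >= col2 AND col3 >= col4 THEN col3
         ELSE col4
    END AS maxvalue          -- returns 1, not 9
FROM (VALUES (9, NULL, 5, 1)) AS t(col1, col2, col3, col4);
```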

Thought someone might still be looking for a simple answer. If the table has NULLs, then the following simple fix will work:

Declare @MIN INT

--assign the minimum possible value for the column datatype
SET @MIN = -2147483648

select
    case when col1 is not null and col1 >= isnull(col2, @MIN) and col1 >= isnull(col3, @MIN) and col1 >= isnull(col4, @MIN) then col1
         when col2 is not null and col2 >= isnull(col1, @MIN) and col2 >= isnull(col3, @MIN) and col2 >= isnull(col4, @MIN) then col2
         when col3 is not null and col3 >= isnull(col1, @MIN) and col3 >= isnull(col2, @MIN) and col3 >= isnull(col4, @MIN) then col3
         else col4
    end as maxvalue
from [Table]

I realize this is an older thread, but since it got woken up from its cave, what's being asked for here is a GREATEST() function.

    There is a thread and some research over here about that, along with some further discussion on it, if interested: http://www.sqlservercentral.com/Forums/Topic1447802-392-1.aspx
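For what it's worth, SQL Server did eventually ship a built-in GREATEST (SQL Server 2022 and Azure SQL Database), long after these posts were written. It does exactly what this thread hand-rolls, and it ignores NULL arguments unless every argument is NULL:

```sql
-- SQL Server 2022+ / Azure SQL only; [Table] and col1..col4 are the
-- illustrative names used elsewhere in this thread.
SELECT GREATEST(col1, col2, col3, col4) AS maxvalue
FROM [Table];
```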


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA
