SQLServerCentral is supported by Red Gate Software Ltd.
 
There Must Be 15 Ways To Lose Your Cursors... part 1, Introduction
Posted Saturday, June 26, 2010 8:13 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Monday, October 22, 2012 8:55 AM
Points: 124, Visits: 180
RBarryYoung (6/25/2010)
Thomas-282729 (6/25/2010)
Nick Walton (6/25/2010)
I'm really looking forward to this series, as there's been a higher than average number of "Cursors are baaaaaad" posts on SSC recently. Your article seems to indicate that cursors can be eliminated entirely....

I'm really looking for a solution to the following:-

1. How do I execute a stored procedure for each row in a table (temporary or otherwise), which does some very complex calculations (using other tables) and then updates (or inserts if it doesn't already exist) a row in another table (a common operation for month-end tasks, for example), or if spooling out to external consumers, such as Broker peers?

I don't believe SQL Server 2005 allows a) an sp to be executed per-row or b) a function to update a table...

Sadly, logistics and transfer speeds make the option of external processing not possible.

I hope that one of your articles covers this situation, as I'd love to get rid of all these cursors.


There is an answer, but you probably won't like it: don't do it in a stored procedure. If you cannot accomplish a given piece of work in a set-based format, it was probably not meant to be done in a database. ...


Actually, there is a much better answer: rewrite your stored procedures so that, instead of needing to be called row by row, they can operate on your entire rowsets at one time.


Thanks for your response. Believe me, I've tried, but there are some things which just seem mutually exclusive. 4th Normal Form vs. rapid reporting. Iterative processing vs. set-based programming. Anything to do with finance vs. any rational logic. It's just one of those things.

The previous post's example of asymptotic tax calculation isn't unique, or even rare. I'm looking forward to reading about any SQL tricks I've not come across, however, as performance improvement takes up a lot of my time. :)

Best Wishes,

Nick


Post #943453
Posted Saturday, June 26, 2010 8:41 AM


SSCrazy Eights


Group: General Forum Members
Last Login: Today @ 2:08 PM
Points: 8,844, Visits: 9,405
Thomas-282729 (6/25/2010)
Nick Walton (6/25/2010)
I'm really looking forward to this series, as there's been a higher than average number of "Cursors are baaaaaad" posts on SSC recently. Your article seems to indicate that cursors can be eliminated entirely....

I'm really looking for a solution to the following:-

1. How do I execute a stored procedure for each row in a table (temporary or otherwise), which does some very complex calculations (using other tables) and then updates (or inserts if it doesn't already exist) a row in another table (a common operation for month-end tasks, for example), or if spooling out to external consumers, such as Broker peers?

I don't believe SQL Server 2005 allows a) an sp to be executed per-row or b) a function to update a table...

Sadly, logistics and transfer speeds make the option of external processing not possible.

I hope that one of your articles covers this situation, as I'd love to get rid of all these cursors.


There is an answer, but you probably won't like it: don't do it in a stored procedure. If you cannot accomplish a given piece of work in a set-based format, it was probably not meant to be done in a database. Stored procedures were not meant to encapsulate complex business logic of the type you suggest. Databases are first and foremost meant to serve up data. Further, if there is so much data that streaming it to an external service is impractical, then I would suggest that it is also impractical to do it row-by-row in a stored proc. If you can do it row-by-row in a proc, then you can do it row-by-row in an external service.

That answer sort of scares me - it seems to assume that because something row-oriented has been written one should assume that whatever it is is too complicated to do in SQL. I would approach it very differently, because I know full well that starting from a row-oriented Stored Procedure and assuming that that row-oriented SP can exist unchanged in a set-oriented solution is pure nonsense.

The row oriented SP computing a new/updated collection of values based on a single row of one of the source tables can be re-jigged to be a set oriented procedure producing a set of new/updated collections of values based on the set of rows in that base table; this is usually fairly trivial to do, and won't involve an explicit cursor. This can then be taken a step further: based on the set produced, a couple of lines can be written to use it to do the updates and inserts, and that can be included in the stored procedure.

So now there's a single set-oriented SP which deals with the whole base table instead of with a single row. There was no need to map a row-oriented SP across all the rows of the table; in fact, trying to do that would be pointless because it retains the row-oriented nature of the solution. What's needed is to get rid of the row-orientation altogether.

Barry covers moving from single-row processing to set-oriented processing at an elementary level in part two of the series. It is easy to take what he's written there and generalize it to a situation like this, since it makes no difference at all that the processing code inside the loop is expressed as a stored procedure.
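The conversion Tom describes can be sketched concretely. The following is a minimal illustration of the pattern only, written against Python's built-in sqlite3 rather than SQL Server, with invented table names and data: the per-row "compute and upsert" call becomes one UPDATE for existing keys plus one INSERT for new ones (on SQL Server 2008+ that pair would typically be a single MERGE).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical source and target tables.
cur.execute("CREATE TABLE src (id INTEGER PRIMARY KEY, amount REAL)")
cur.execute("CREATE TABLE summary (id INTEGER PRIMARY KEY, total REAL)")
cur.executemany("INSERT INTO src VALUES (?, ?)",
                [(1, 10.0), (2, 20.0), (3, 30.0)])
cur.execute("INSERT INTO summary VALUES (1, 5.0)")  # id 1 already exists

# The row-oriented version would loop: for each id in src, call an SP that
# computes the new value and updates/inserts one summary row.  The
# set-oriented version handles the whole table in two statements:
cur.execute("""
    UPDATE summary
       SET total = total + (SELECT amount FROM src WHERE src.id = summary.id)
     WHERE id IN (SELECT id FROM src)
""")
cur.execute("""
    INSERT INTO summary (id, total)
    SELECT id, amount FROM src
     WHERE id NOT IN (SELECT id FROM summary)
""")
conn.commit()

result = dict(cur.execute("SELECT id, total FROM summary ORDER BY id"))
print(result)  # {1: 15.0, 2: 20.0, 3: 30.0}
```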


Tom
Post #943454
Posted Saturday, June 26, 2010 8:58 AM


SSCrazy Eights


Group: General Forum Members
Last Login: Today @ 2:08 PM
Points: 8,844, Visits: 9,405
Nick Walton (6/25/2010)

DBA: "So what's your solution?"
Dev: "I will pass in an XML file with a full list of all the record IDs which need to be processed, and the database will expand it and process each of the rows, and return a total."
DBA: "No, that's silly, and will require a Cursor, which is *bad*"


(a) The Dev is unhelpful in suggesting an XML file (verbose junk that is slow to parse) instead of a newline-separated list here (unless the record IDs can include newlines).

(b) The DBA is not very competent if he thinks he needs to use a Cursor and go row-by-row procedural just because he is given a list of the records that have to be processed - evidently he's not really up to set-oriented thinking. Importing a list of IDs into a table and using that table as one of the sources in a query surely isn't all that bizarre or difficult a course of action to conceive.
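The import-and-join approach Tom mentions can be sketched as follows, again using Python's built-in sqlite3 standing in for SQL Server, with made-up table names and data: the newline-separated list goes into a work table once, and a single set-based query replaces the per-ID cursor.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 9.5), (2, 12.0), (3, 3.25), (4, 100.0)])

# The caller hands over a newline-separated list of record IDs...
id_list = "1\n3\n4"

# ...which is loaded into a work table instead of being walked one by one.
cur.execute("CREATE TEMP TABLE ids (id INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO ids VALUES (?)",
                [(int(line),) for line in id_list.splitlines()])

# One query processes every requested record at once.
(total,) = cur.execute("""
    SELECT SUM(o.total)
      FROM orders o
      JOIN ids i ON i.id = o.id
""").fetchone()
print(total)  # 112.75
```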


Tom
Post #943458
Posted Saturday, June 26, 2010 9:58 AM
SSC Journeyman


Group: General Forum Members
Last Login: Friday, September 12, 2014 7:03 PM
Points: 89, Visits: 481
Tom.Thomson (6/26/2010)
Thomas-282729 (6/25/2010)
Nick Walton (6/25/2010)
I'm really looking forward to this series, as there's been a higher than average number of "Cursors are baaaaaad" posts on SSC recently. Your article seems to indicate that cursors can be eliminated entirely....

I'm really looking for a solution to the following:-

1. How do I execute a stored procedure for each row in a table (temporary or otherwise), which does some very complex calculations (using other tables) and then updates (or inserts if it doesn't already exist) a row in another table (a common operation for month-end tasks, for example), or if spooling out to external consumers, such as Broker peers?

I don't believe SQL Server 2005 allows a) an sp to be executed per-row or b) a function to update a table...

Sadly, logistics and transfer speeds make the option of external processing not possible.

I hope that one of your articles covers this situation, as I'd love to get rid of all these cursors.


There is an answer, but you probably won't like it: don't do it in a stored procedure. If you cannot accomplish a given piece of work in a set-based format, it was probably not meant to be done in a database. Stored procedures were not meant to encapsulate complex business logic of the type you suggest. Databases are first and foremost meant to serve up data. Further, if there is so much data that streaming it to an external service is impractical, then I would suggest that it is also impractical to do it row-by-row in a stored proc. If you can do it row-by-row in a proc, then you can do it row-by-row in an external service.

That answer sort of scares me - it seems to assume that because something row-oriented has been written one should assume that whatever it is is too complicated to do in SQL. I would approach it very differently, because I know full well that starting from a row-oriented Stored Procedure and assuming that that row-oriented SP can exist unchanged in a set-oriented solution is pure nonsense.


No. That is not what I'm saying. I'm saying that if you have something that is row-oriented that can't be written set-oriented or requires doing dynamic SQL in the stored procedure, it does not belong in a stored procedure. T-SQL was not designed to encapsulate complex business logic. Middle-tier components and their languages are substantially better at that.


The row oriented SP computing a new/updated collection of values based on a single row of one of the source tables can be re-jigged to be a set oriented procedure producing a set of new/updated collections of values based on the set of rows in that base table; this is usually fairly trivial to do, and won't involve an explicit cursor.


The row oriented SP computing a new/updated collection of values based on a single row of one of the source tables OFTEN can be re-jigged to be a set oriented procedure producing...

I've seen too many examples of people who have tried to encapsulate too much into stored procedures. Eventually, they end up with a series of heinously long stored procs that encapsulate business logic for a single set of data, and they run that proc using a cursor over a series of rows because the logic is too complex to put into a single set-based proc. That is what business-layer components and services are meant to solve. T-SQL was not meant to replace a business tier or layer.
Post #943461
Posted Saturday, June 26, 2010 10:26 AM


Ten Centuries


Group: General Forum Members
Last Login: Monday, September 15, 2014 5:20 PM
Points: 1,276, Visits: 1,134
Steve-524674 (4/14/2009)
OK, the issue is simply one of coding style for the most part. The optimizer should handle "declarative" code [whatever that is] in much the same way that it would handle cursors or the product [database] is not built right.

I write cursors all the time without any appreciable difference from other code. It's all basic procedural programming. There is no declarative and cursor-driven difference.



This is absolutely untrue, and indicates that (1) you haven't actually compared the performance difference, or (2) your data sets are extremely small, in which case the wall-clock difference may be negligible. Try looking at the query plans to see how many resources SQL has to throw at your queries to make them run. Wall-clock speed is not the only aspect that needs to be considered.

When you use a cursor you are explicitly overriding SQL Server's built-in ability to optimize execution of your code. You're also not going to scale well with a cursor since performance is generally linear. 10X more iterations of the cursor = 10X worse performance.

Don't get me wrong, I personally feel there are some uses for cursors, but they are few and far between. My general rule of thumb is:

(1) If the dataset is extremely small and must be acted on in a predefined order (many common DBA tasks fall into this category; sending administrative emails or managing indexes are such tasks), or

(2) The business logic required is so complex it's infeasible to do it without row-by-row processing (generally these are prime candidates to do in a more optimal language or tool like an ETL tool or OO language).

A lot of people take reason #2 above as free license to write *everything* with cursors and poor man's cursors (loops) because they don't understand declarative programming. Many of them try to apply object-oriented design patterns to SQL, which is not designed to handle such patterns in an optimal fashion. Cursors are a hammer, and not every problem is a nail.

Thanks
Mike C
Post #943465
Posted Saturday, June 26, 2010 10:34 AM
SSC Journeyman


Group: General Forum Members
Last Login: Friday, September 12, 2014 7:03 PM
Points: 89, Visits: 481
Nick Walton (6/25/2010)
Thomas-282729 (6/25/2010)

DBA: A. Why is it my problem that someone else did not do a better job of planning ahead? B. why are you pulling one record at a time instead of pulling a batch of records? C. Why not write a small app that batches and processes the data?


I agree, I've been there, so I can give you the responses.

a. Because you're holding the project up. It would be on schedule if you'd just let the developers write the code. There's no Right and Wrong answer. Just write an SP to do it, and the project can get delivered.


To this I would answer that if the project's delivery rests entirely upon a last-minute hacked solution, it was doomed from the start. If this isn't a last-minute hack, then I would think that any solution should consider aspects such as speed, accuracy, reliability, extensibility and maintainability. Given that, doing this in a small app or middle-tier component rather than row-by-row in the database (assuming that is required) will win on all those fronts. Does the developer want to intentionally slow down the project, and create one that is harder to test, more cumbersome to reuse, and harder to maintain?

Unfortunately, this scenario is all too common. The developers wait until the last second and then decide that the "only" solution is to build a poor one. If the problem is really set-based, then yes, a one-time procedure shouldn't be an issue. If this is an ongoing problem that "must" be done row-by-row, then doing it in the business tier is a better solution, and that should be everyone's goal, I would think.


b. We're not. We're batch processing the records, however each row requires handling in a different, and complex way depending on the data which 'hangs off' it. There are up to 300 different 'events' which can fire depending on the processing logic.


If you can encapsulate the logic into a cursor, then you can encapsulate it into middle-tier code that will do the same, *and* you get a bunch of other bonus features such as better testing tools (we are going to test it, right?), parallel processing and so on.


c. that would require development time which we don't have. It wasn't specified *whinge whinge*

In defence of point c, the app option is a red herring anyway, because it would be external to the DB and would still have to pore through each row, applying the very specific logic to each one and pulling in yet more data. The pull/process/push time would be rather extensive. Even with 1000 simultaneous connections (assuming the DB didn't fall over due to locking on the tables), it'd still take more than 10 times longer to complete that way, and it would be yet another service to monitor, document, maintain, etc.


I think we also need to differentiate between a one-off emergency, never-to-be-used-again solution and a baked-in part of the system architecture. I'm talking about the latter. In the case of the former, since the goal is still to find the fastest, best solution, it would probably still be faster to write an app to do it, simply for testing reasons.

RE: Connections, it wouldn't be 1000 simultaneous connections. It would be more like 10-100 depending on the size of the data and the amount of information being pulled. You can pull a million rows rather quickly to a middle-tier component. Remember that connections are dropped once you pull a batch of data to process and you are presumably using something that uses connection pooling like ADO or ADO.NET which is reusing those connections.

The problems with locking will still exist in the cursor solution *and* you don't get parallel processing. I.e., the whole process would be forced to be run in serial.

RE: Monitoring and time to develop

The claim that it would be slower to develop is, IMO, a factor of expertise. If you have developers who know how to build small components or utility apps, it should in fact be much faster than trying to do it in stored procs, for the simple reason that there are better testing tools for these types of applications and services. In addition, even if it is a cursor solution in SQL Server, it still needs to be monitored, documented and maintained. That isn't going to change.

I realize that you agree and are playing devil's advocate. I think in many of these cases, it comes down to understanding the alternatives and the developer expertise. If the only thing available is a crack DBA and a bunch of inexperienced or undisciplined developers, then it might be the case that a cursor solution (assuming that is actually required) is the "better" solution. Of course, in that scenario, a better HR person would also be a better solution :).
Post #943467
Posted Saturday, June 26, 2010 10:46 AM


Ten Centuries


Group: General Forum Members
Last Login: Monday, September 15, 2014 5:20 PM
Points: 1,276, Visits: 1,134
ehaustin78737 (6/25/2010)
I agree with Brad Neufeld this article bites. Another condescending rant.


Everyone appreciates such specific criticism. It helps the author fine-tune his style, lists plenty of well-defined arguments that provoke thoughtful consideration, and rises far above the level of the "background noise" generated by (for instance) a YouTube "I Hate!" video.

I thank you on behalf of the author.

Mike C.
Post #943469
Posted Saturday, June 26, 2010 11:10 AM


Ten Centuries


Group: General Forum Members
Last Login: Monday, September 15, 2014 5:20 PM
Points: 1,276, Visits: 1,134
Nick Walton (6/25/2010)
Thomas-282729 (6/25/2010)

DBA: A. Why is it my problem that someone else did not do a better job of planning ahead? B. why are you pulling one record at a time instead of pulling a batch of records? C. Why not write a small app that batches and processes the data?


I agree, I've been there, so I can give you the responses.

a. Because you're holding the project up. It would be on schedule if you'd just let the developers write the code. There's no Right and Wrong answer. Just write an SP to do it, and the project can get delivered.


"...there's no Right and Wrong answer." --> I disagree with the "everyone gets a trophy" mentality.

I think a more complete assessment is that the "Right" SP that seems to run well today when your table has 1,000,000 rows is going to run terribly a year from now when it grows to 10,000,000 rows. It's going to push out your processing time by delaying other resource-intensive processes and cause you to miss your SLAs. The customer will complain and you'll go and try to optimize that cursor in the SP to shave off a couple of seconds.

The more accurate (...and just as satisfying) answer might be that there's no need to do it "Right" the first time because we can fix it later when it causes problems. Besides, the ongoing project maintenance costs come from a different budget.


b. We're not. We're batch processing the records, however each row requires handling in a different, and complex way depending on the data which 'hangs off' it. There are up to 300 different 'events' which can fire depending on the processing logic.


When you say 'events' which can fire depending on processing logic, SQL is not the development tool of choice that comes to mind; rather, I think of procedural languages that natively support user-defined event handling. It would be interesting to see the performance effects of a custom-built user-defined event-handling framework built completely in T-SQL.


c. that would require development time which we don't have. It wasn't specified *whinge whinge*


More than that, this type of project generally requires planning time, which many companies aren't willing to invest in. I've read studies by IBM and others that peg the cost of fixing broken solutions at hundreds to thousands of times the cost of doing it "Right" the first time, even factoring in the cost of proper planning and design. But many managers don't care, since the code fixes are often hidden away in a different budget after delivery.


In defence of point c, the app option is a red herring anyway, because it would be external to the DB and would still have to pore through each row, applying the very specific logic to each one and pulling in yet more data. The pull/process/push time would be rather extensive. Even with 1000 simultaneous connections (assuming the DB didn't fall over due to locking on the tables), it'd still take more than 10 times longer to complete that way, and it would be yet another service to monitor, document, maintain, etc.


Some tasks, such as row-by-row processing, are often better done in other tools. SSIS and other ETL tools are prime examples of tools that are optimized to act on individual rows at a time.
Post #943472
Posted Saturday, June 26, 2010 12:45 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Today @ 1:45 PM
Points: 37,104, Visits: 31,659
jeffwilson2 (6/25/2010)
Hi, my name is Jeff.
I am new to this forum. About a year ago I started at a company, and last year they upgraded to SQL 2005; nearly every procedure uses a cursor, in some of the most bizarre circumstances. I have been trying to rewrite them as I have had to modify them, but some of them are so embedded in other processes, or so long and confusing with cursors inside other cursors, that I hesitate to do so.

But I have come across a scenario where I am at a loss on how to avoid a loop. Maybe this is an exception to the rule about using loops.
Any help or advice would be appreciated.

The purpose of the procedure is to back into the SDI tax rate when the total SDI tax has exceeded its limit, so that the new net tax value can be recalculated. It usually has to loop through the process 5-10 times to return.
You see, every time the SDI tax rate changes, the total net tax changes, but the SDI tax is based on the net total tax.
So every time you adjust the tax rate so that the SDI tax is at the limit, the net total tax changes and the SDI rate is no longer accurate; but every time you make the adjustment you get closer to the correct SDI tax.

I realize that this is circular logic, but it does return accurate data, because the SDI rate keeps getting closer and closer to the correct rate needed to set the SDI tax at its limit.


It's a "memory-only" loop and isn't likely to be a performance problem (except that such loops can't be used in an iTVF). What will be a performance problem is the fact that it's an mlTVF (multi-line table-valued function), which can be nearly as slothful as a scalar function. My question would be: how many times do you use the function in a given proc (i.e., against how many rows)?

My other question would be... what happens if it "hits" the correct number on the 5th iteration but we make it run for 10 iterations? Will it still come in on the correct answer (not taking the time to analyze the algorithm just yet)?
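For what it's worth, the WHILE loop in the posted function is a fixed-point iteration, which bears on the 5-vs-10-iterations question: once the (unrounded) recurrence has converged, further iterations leave the values unchanged. A toy Python sketch, with every rate and amount invented and the ROUND() calls omitted:

```python
# Stand-ins for the function's arguments -- all values invented.
benefit   = 1000.00                        # @NetBenefit
remaining = 50.00                          # @LimitVal - @YTDVal (SDI still owed)
other     = 0.15 + 0.05 + 0.062 + 0.0145  # @Fed + @staterate + @Soc + @Med

net = benefit / (1 - (0.011 + other))      # initial @adNetval, @CurRate = 1.1%
for _ in range(10):                        # the WHILE loop, minus ROUND()
    rate = remaining / net                 # @NewSDIRate
    net = benefit / (1 - (rate + other))   # @adNetval

# Substituting rate = remaining/net into net = benefit/(1 - rate - other)
# gives net*(1 - other) = benefit + remaining, i.e. a closed form:
closed_net = (benefit + remaining) / (1 - other)
print(round(net, 4), round(closed_net, 4))  # iterated and closed-form agree
```

With ROUND() restored, though, the function's exact-equality `<>` exit test can fail to terminate if the rounded values oscillate around the target instead of landing on it.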


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #943483
Posted Saturday, June 26, 2010 3:55 PM


Ten Centuries


Group: General Forum Members
Last Login: Monday, September 15, 2014 5:20 PM
Points: 1,276, Visits: 1,134
jeffwilson2 (6/25/2010)
Hi, my name is Jeff.
I am new to this forum. About a year ago I started at a company, and last year they upgraded to SQL 2005; nearly every procedure uses a cursor, in some of the most bizarre circumstances. I have been trying to rewrite them as I have had to modify them, but some of them are so embedded in other processes, or so long and confusing with cursors inside other cursors, that I hesitate to do so.

But I have come across a scenario where I am at a loss on how to avoid a loop. Maybe this is an exception to the rule about using loops.
Any help or advice would be appreciated.

The purpose of the procedure is to back into the SDI tax rate when the total SDI tax has exceeded its limit, so that the new net tax value can be recalculated. It usually has to loop through the process 5-10 times to return.
You see, every time the SDI tax rate changes, the total net tax changes, but the SDI tax is based on the net total tax.
So every time you adjust the tax rate so that the SDI tax is at the limit, the net total tax changes and the SDI rate is no longer accurate; but every time you make the adjustment you get closer to the correct SDI tax.

I realize that this is circular logic, but it does return accurate data, because the SDI rate keeps getting closer and closer to the correct rate needed to set the SDI tax at its limit.

Here is a function that I use a loop for.
CREATE FUNCTION dbo.fn_TaxLimitCalcTable
(
    @startVal   decimal(25,18),
    @LimitVal   decimal(25,18),
    @YTDVal     decimal(25,18),
    @NetBenefit decimal(25,18),
    @Fed        decimal(25,18),
    @staterate  decimal(25,18),
    @Soc        decimal(25,18),
    @Med        decimal(25,18),
    @CurRate    decimal(25,18)
)
RETURNS @TaxLimit TABLE
(
    NewSDIRate  decimal(25,18),
    adNetval    decimal(25,18),
    adjustedval decimal(25,18),
    Fed         decimal(25,18),
    Med         decimal(25,18),
    staterate   decimal(25,18),
    Soc         decimal(25,18)
)
AS
/****************************************************
Function: fn_TaxLimitCalcTable

Description: Re-calculates the associated tax and rates when
the maximum tax limit has been reached or exceeded.
****************************************************/
BEGIN
    DECLARE @NewSDIRate  decimal(25,18),
            @adNetval    decimal(25,18),
            @adjustedval decimal(25,18),
            @Fed1        decimal(25,18),
            @Med1        decimal(25,18),
            @staterate1  decimal(25,18),
            @Soc1        decimal(25,18);

    SELECT @NewSDIRate  = @CurRate,
           @adjustedval = @startVal,
           @adNetval    = @NetBenefit * (1 / (1 - (@CurRate + @Fed + @staterate + @Soc + @Med))); -- set if start value matches

    IF @YTDVal > 0 AND @LimitVal > 0
    BEGIN
        -- Iterate until the adjusted SDI amount exactly equals the remaining limit.
        -- Note: the exact-equality <> test on a ROUNDed value can loop forever
        -- if it oscillates around the target instead of landing on it.
        WHILE @adjustedval <> (@LimitVal - @YTDVal)
        BEGIN
            SELECT @NewSDIRate  = (@LimitVal - @YTDVal) / @adNetval;
            SELECT @adNetval    = ROUND(@NetBenefit * (1 / (1 - (@NewSDIRate + @Fed + @staterate + @Soc + @Med))), 2);
            SELECT @adjustedval = ROUND(@NewSDIRate * @adNetval, 2);
        END;

        SELECT @Fed1       = ROUND(@Fed * @adNetval, 2);
        SELECT @Med1       = ROUND(@Med * @adNetval, 2);
        SELECT @staterate1 = ROUND(@staterate * @adNetval, 2);
        SELECT @Soc1       = ROUND(@Soc * @adNetval, 2);
    END;

    -- Return the result of the function
    INSERT @TaxLimit
    SELECT @NewSDIRate, @adNetval, @adjustedval, @Fed1, @Med1, @staterate1, @Soc1;

    RETURN;
END
GO


Thank you,
Jeff


I'm not 100% sure about your calculation here. SDI tax is usually set at a certain % by the state with a limit on the amount of taxable income. For CA, for instance, I think the rate is 1.1% on (up to) the first ~$90,600 paid. So if the employee has paid their 1.1% SDI on the first ~$90,600 of pay then the rate is now 0.0% -- no need to back into it.
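Mike's cap figure is easy to sanity-check with quick arithmetic; both numbers below are his stated approximations, not official rates:

```python
# Approximate figures from the post above: 1.1% SDI on up to the first
# ~$90,600 of wages.  The annual per-employee withholding cap would be:
rate, wage_base = 0.011, 90_600
max_sdi = rate * wage_base
print(round(max_sdi, 2))  # 996.6
```

Once that cap is reached, the marginal SDI rate is effectively 0%, which is the point about not needing to back into a rate.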

But all that aside, can you provide 4 or 5 sample data inputs and expected outputs for this function?

Thanks
Mike C
Post #943505