My source is a flat file (CSV) and I need to process records one by one. Right now, records are processed in groups of hundreds and my process can't work properly.
Then you've written the process incorrectly. The only time you should write RBAR in any form is when a third-party API requires it and you cannot compel the third party to rewrite the API to accept result sets.
I'd suggest that you re-evaluate your code and rewrite it to operate in a set-based fashion. If you're getting duplicates in a table, I'd also suggest that you look into a bit of a refresher course on proper data and table design.
Not trying to bust your chops here... but from what you've posted, I see some pretty serious design flaws that will kill any chance of scalability and data integrity. I hate to see people go through that.
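To make the contrast concrete, here is a minimal sketch (in Python with SQLite standing in for the actual database, and a made-up two-column CSV) of handing the whole file to the database in one set-based call, with a key constraint filtering duplicates instead of row-by-row checks:

```python
import csv
import io
import sqlite3

# Hypothetical CSV content standing in for the flat file described above.
csv_text = "id,name\n1,alice\n2,bob\n2,bob\n3,carol\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)")

rows = list(csv.reader(io.StringIO(csv_text)))[1:]  # skip header row

# Set-based: pass the entire result set to the engine in one statement.
# The PRIMARY KEY rejects duplicates; no per-row existence check needed.
conn.executemany("INSERT OR IGNORE INTO people (id, name) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM people").fetchone()[0]
print(count)  # 3 -- the duplicate row was filtered by the key constraint
```

The same idea carries over to T-SQL: load the file in bulk into a staging table, then `INSERT ... SELECT` with the duplicates handled declaratively, rather than looping over records hundreds at a time.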
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code: Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
Although they tell us that they want it real bad, our primary goal is to ensure that we don't actually give it to them that way.
Although change is inevitable, change for the better is not.
Just because you can do something in PowerShell doesn't mean you should.

Helpful Links:
How to post code problems
How to post performance problems
Forum FAQs