I added some comments to your proc and fixed the layout, formatting, and capitalization - all of which were horrifying to begin with. Pick a coding standard and then use it. I don't care which one you pick, but use it consistently. I've shown you the "capitalize all keywords" style, which I personally hate, but the point is that a consistent style is more readable even if you hate it. Also, be careful about your tabs and what you put on each line. DON'T use things you don't need, like the BEGIN and END for the proc itself - they are completely unneeded and do nothing but clutter up the screen. Anyway, the comments explain how the procedure works. This is not a very complicated proc, but the horrible style was making it very hard to read.
CREATE PROCEDURE dbo.getNotificateProduct
@search VARCHAR(1000), -- multi-values ('value1;value2;value3;')
@store VARCHAR(50),
@lan VARCHAR(2) -- only 'FI' and 'EN' are valid here
AS
DECLARE @haku VARCHAR(50) --loop variable to hold single values from @search
-- table for building result set
DECLARE @temp TABLE (ParameterCaption VARCHAR(100), ParameterValue VARCHAR(100))
--if the incoming string doesn't end with a ; then add one
IF SUBSTRING(@search, LEN(@search),1)<>';'
SET @search = @search + ';'
--initialize the loop control variable - this is a "chopping" loop,
--where each iteration consumes a portion of the string and you continue
--until there's no more string to consume (when a semicolon can not be found)
--this is not written in the normal way - it would be better to update a position
--variable, than to continuously update the value of a potentially really long string
WHILE CHARINDEX(';',@search)>0 BEGIN
--grab everything from the beginning of the string to the first semicolon,
--removing leading spaces and converting to upper-case. This is the search string.
SET @haku = UPPER(LTRIM(CAST(SUBSTRING(@search,0, CHARINDEX(';',@search)) AS VARCHAR(50))))
--load rows into the temp table for the current search string,
--using the proper column for the language - two separate queries would actually be faster here
INSERT INTO @temp
SELECT CASE
WHEN @lan ='FI' THEN Material_FI
WHEN @lan ='EN' THEN Material_EN
END AS ParameterCaption,
'[Material DIM].[Material FI].&['+ CAST([DimCubeProduct].[ProductKey] AS VARCHAR(50))+ ']' AS ParameterValue
FROM DimCubeProduct --this is your main table
INNER JOIN FactStoreQualityNotification --this is your join (right) table
ON DimCubeProduct.ProductKey = FactStoreQualityNotification.ProductKey -- a JOIN condition
AND CHARINDEX(CAST(FactStoreQualityNotification.StoreKey AS VARCHAR(5)),@store) > 0 -- another JOIN condition
WHERE (@haku = '') OR -- where clause in the form of "A OR (B OR C)" - do you need the parens?
((@lan='FI' AND CHARINDEX(@haku, UPPER(Material_FI)) > 0)
OR
(@lan='EN' AND CHARINDEX(@haku, UPPER(Material_EN)) > 0))
ORDER BY ProductID
--chop off the 'front' of the string before we re-iterate - this is the "update the loop control variable" step
SET @search = SUBSTRING(@search, CHARINDEX(';',@search)+1, LEN(@search))
END
SELECT DISTINCT * FROM @temp --wrong - you created the columns up at the top, you can't name them?! Don't use *
--Also, if you loaded the correct data into the temp table you would gain performance by eliminating Distinct keyword
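To make the final SELECT concrete, here's a minimal sketch of what I mean - name the columns you declared in the table variable, and drop DISTINCT once the loop no longer inserts duplicates (assuming you fix the loading as described above):

```sql
-- explicit column names instead of *; no DISTINCT needed if the
-- loop is fixed so it never inserts the same row twice
SELECT ParameterCaption, ParameterValue
FROM @temp
```
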
For the inner portion of the loop, I would rewrite it as an "IF @lan='FI'" if-else block, with a separate query for each language. For the 'FI' language, it would look like this...
INSERT INTO @temp
SELECT DISTINCT Material_FI,
'[Material DIM].[Material FI].&['+ CAST([DimCubeProduct].[ProductKey] AS VARCHAR(50))+ ']'
FROM DimCubeProduct dcp --this is your main table
INNER JOIN FactStoreQualityNotification fsqn --this is your join (right) table
ON dcp.ProductKey = fsqn.ProductKey -- a JOIN condition
AND CHARINDEX(CAST(fsqn.StoreKey AS VARCHAR(5)),@store) > 0 -- another JOIN condition
WHERE (@haku = '')
OR (CHARINDEX(@haku, UPPER(Material_FI)) > 0)
ORDER BY ProductID
This query will be much easier to optimize. Because the old CASE expression retrieved different columns depending on @lan, covering indexes and similar considerations could lead the optimizer to pick a bad plan. With the two queries separated, the optimizer can choose a different plan for each one, which should improve overall performance. The old query isn't always bad, but it's bad more often than the style I'm proposing here. I also used table aliases to make the join conditions easier to read.
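To illustrate the "update a position variable" approach I mentioned for the chopping loop, here's a sketch. The variable names @pos and @next are mine, not from the original proc; it relies on @search ending with a semicolon, which the proc already guarantees:

```sql
-- sketch: walk the string with a position variable instead of
-- repeatedly rewriting @search on every iteration
DECLARE @pos INT = 1, @next INT
SET @next = CHARINDEX(';', @search, @pos)  -- 3-arg form: search from @pos
WHILE @next > 0
BEGIN
    -- the token between @pos and the next semicolon
    SET @haku = UPPER(LTRIM(SUBSTRING(@search, @pos, @next - @pos)))
    -- ... run the per-language INSERT for @haku here ...
    SET @pos = @next + 1
    SET @next = CHARINDEX(';', @search, @pos)
END
```

And if you're on SQL Server 2016 or later, STRING_SPLIT(@search, ';') would let you replace the loop entirely with a single set-based INSERT, which is usually the better option in SQL anyway.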