I hit my head on some strange behaviour of SQL Server: I have a large query fired from a .NET application. First I realized that .NET gets nothing back, while my test in Management Studio returned rows. Then I looked at the query in detail, reshaped it, and found the difference:
The declare statements from .NET have sizes set for the varchar parameters, while my test in SSMS does not. That is the only difference.
If I declare the varchar parameters in my test query with a size (.NET sets the size to the length of the value string), the query also returns an empty set, regardless of the size (even if I set it to the size the columns have in the tables, to max, or to the default size varchar gets when no length is declared).
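For illustration, since I can't post the real query, here is a sketch of the two variants with made-up parameter names and values:

```sql
-- Variant as generated by .NET: each size set to the length of the value
DECLARE @CustomerName varchar(8) = 'Contoso1';
DECLARE @City         varchar(6) = 'Berlin';

-- Variant from my SSMS test: no sizes given
-- (note: per the T-SQL docs, varchar without a length defaults to
-- varchar(1) in a DECLARE, so these assignments truncate silently)
DECLARE @CustomerName2 varchar = 'Contoso1';
DECLARE @City2         varchar = 'Berlin';
```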
Also interesting: it depends on the parameter. If I remove the size setting from one or both of two of the varchar parameters, the behaviour changes; the size setting of the third varchar parameter does not influence the result. The difference: this third parameter concerns another table from the query's joins. If I remove parts of the join (i.e. the table with the third varchar parameter), I always get a result.
It also does not matter whether I execute the query via sp_executesql or by declaring the variables and then running the T-SQL directly.
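To be concrete, these are the two execution styles I tried (again with hypothetical table and parameter names):

```sql
-- Via sp_executesql, with the size in the parameter definition string,
-- as the .NET client does
EXEC sp_executesql
    N'SELECT * FROM dbo.Customers WHERE Name = @Name',
    N'@Name varchar(8)',
    @Name = 'Contoso1';

-- Via plain DECLARE and running the T-SQL directly
DECLARE @Name varchar(8) = 'Contoso1';
SELECT * FROM dbo.Customers WHERE Name = @Name;
```

Both behave the same way with respect to the size setting.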
And another dependency: if I change the values of the parameters, the query returns results in both variants. I'm very confused!
Can anybody explain this change in behaviour? Why doesn't SSMS warn me if the size setting prevents the query from returning values? Why does this change the behaviour at all?
Are there any problems with query length? But then again, why doesn't SSMS warn about that? And if I really don't have to give sizes: how do I tell .NET not to set them? Because in the code I only give the type varchar.
At the moment I cannot post the query itself because the table, column, and parameter names reflect our business logic, but if needed I can rename them, keeping the lengths of the names in case that is the problem.
Thanks in advance for any suggestions or other input,