Viewing 15 posts - 1,081 through 1,095 (of 5,504 total)
Grant,
do you really refer to the number of rows being returned?
Example: a query returning 100K rows should benefit from parallelism but the very same query expanded by a TOP 1000...
August 25, 2011 at 7:32 am
chandan_jha18 (8/25/2011)
Sorry for not understanding your idea, I apologize. So do you suggest that I should take the SELECT statement with the LIKE operator out of the WHERE clause, write it at...
August 25, 2011 at 7:28 am
chandan_jha18 (8/25/2011)
You are right about a table variable holding many rows. I will try to switch that to a temp table with an index on it.
I agree that like '%'...
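A minimal sketch of that switch, with placeholder table and column names (none of them are from the original thread):

```sql
-- Hypothetical sketch: replacing a table variable with an indexed temp table.
CREATE TABLE #Results
(
    CustomerID INT NOT NULL,
    OrderDate  DATETIME NOT NULL
);

INSERT INTO #Results (CustomerID, OrderDate)
SELECT CustomerID, OrderDate
FROM dbo.Orders;   -- source table assumed

-- Unlike a table variable, a temp table can get a supporting index
-- and real statistics, which helps the optimizer at large row counts.
CREATE NONCLUSTERED INDEX IX_Results_CustomerID
    ON #Results (CustomerID);
```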
August 25, 2011 at 6:37 am
There are three major issues that I see:
1) it looks like a catch-all query to cover partial results from a LEFT OUTER JOIN and an INNER JOIN. I would recommend...
August 25, 2011 at 6:03 am
Let's take the query apart:
It basically can be divided into two parts:
Part 1:
SELECT
T.c.value('UserId[1]','INT') as UserId,
...
August 25, 2011 at 5:08 am
Please have a look at the CrossTab article referenced in my signature.
You'll need to add a row number per hostname in order to assign the address to either MacA or...
August 25, 2011 at 4:31 am
Would you please be a little more specific?
Does SQL Server take more memory than the 14GB you allocated?
dbcc checkdb has little to do with the memory allocation.
But it requires some...
August 25, 2011 at 2:39 am
Nice solution!
It avoids the Compute Scalar and Filter operations involved in the UNPIVOT statement...
August 25, 2011 at 1:06 am
Based on Remi's reply:
Scenario 1:
I'd expect table 1 has a covering index on Col1 to Col4, but there are more columns. Changing to t1.* forces either a table scan or...
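To illustrate the covering-index point (table and column names are hypothetical, not from the thread):

```sql
-- If the index covers every column the query references, it can be
-- answered from the index alone:
CREATE NONCLUSTERED INDEX IX_t1_covering
    ON dbo.t1 (Col1, Col2)
    INCLUDE (Col3, Col4);

-- Served entirely by IX_t1_covering (no lookups needed):
SELECT Col1, Col2, Col3, Col4 FROM dbo.t1 WHERE Col1 = 42;

-- SELECT * references columns outside the index, so the optimizer
-- must add key lookups or fall back to a table/clustered index scan:
SELECT * FROM dbo.t1 WHERE Col1 = 42;
```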
August 25, 2011 at 12:36 am
Step 1: get rid of the c.ur.s.o.r. *cough*
Script 1: get the data in a temp table to be used by the two separate scripts
select top 100000 guest_key--, guest_id
INTO #to_delete
from guest_0_a
where date_modified...
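A hedged sketch of how the next step could use that temp table — a single set-based delete instead of the cursor; the join column beyond guest_key is an assumption:

```sql
-- Delete the batch captured in #to_delete in one set-based statement.
DELETE g
FROM guest_0_a AS g
INNER JOIN #to_delete AS d
    ON d.guest_key = g.guest_key;

DROP TABLE #to_delete;
```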
August 25, 2011 at 12:23 am
Something along those lines?
DECLARE @xml XML
SELECT @xml='<Products>
<Product>
<UserId>174</UserId>
<ISSN>1553-7358</ISSN>
<Authors>
<Author name="Abu Lerche"/><Author name="Jianmin cristy"/><Author name="Rohan Yuste"/></Authors>
<Volume>7</Volume>
</Product>
<Product>
<UserId>176</UserId>
<ISSN>1097-6256</ISSN>
<Authors><Author name="Booker Malik"/><Author name="Mehnab Devid"/><Author name="Yarik Metha"/></Authors>
<Volume>5</Volume>
</Product>
</Products>'
SELECT
T.c.value('UserId[1]','INT') as UserId,
T.c.value('ISSN[1]','VARCHAR(30)') as ISSN,
U.v.value('@name[1]','VARCHAR(30)') as Authors,
T.c.value('Volume[1]','INT') as Volume
FROM @xml.nodes('Products/Product') T(c)
CROSS...
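The U(v) alias in the SELECT list suggests the truncated line continues with a second .nodes() call shredding the authors. A sketch under that assumption:

```sql
-- Hedged guess at the truncated final line: CROSS APPLY against the
-- Author elements, producing one row per author.
SELECT
    T.c.value('UserId[1]','INT')        AS UserId,
    T.c.value('ISSN[1]','VARCHAR(30)')  AS ISSN,
    U.v.value('@name[1]','VARCHAR(30)') AS Authors,
    T.c.value('Volume[1]','INT')        AS Volume
FROM @xml.nodes('Products/Product') T(c)
CROSS APPLY T.c.nodes('Authors/Author') U(v);
```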
August 25, 2011 at 12:13 am
What??
A completely locked down environment but IE is the default browser? OUCH!!
Speaking of it: since you're connected to the web anyway, the term "locked down" is a little bit misleading......
August 24, 2011 at 4:38 pm
I would start with
ROW_NUMBER() OVER (PARTITION BY Claim_ID ORDER BY Statusdate ) +
ROW_Number() OVER (PARTITION BY Claim_ID,ClaimStatusCode ORDER BY Statusdate desc)
to get a group number per consecutive...
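A small illustration of that expression with made-up data (the table and values are hypothetical):

```sql
-- The sum of the two row numbers is constant within each consecutive
-- run of the same ClaimStatusCode.
DECLARE @Claims TABLE (Claim_ID INT, ClaimStatusCode CHAR(1), Statusdate DATE);
INSERT INTO @Claims VALUES
(1,'A','2011-01-01'),(1,'A','2011-01-02'),
(1,'B','2011-01-03'),(1,'A','2011-01-04');

SELECT *,
       ROW_NUMBER() OVER (PARTITION BY Claim_ID ORDER BY Statusdate)
     + ROW_NUMBER() OVER (PARTITION BY Claim_ID, ClaimStatusCode
                          ORDER BY Statusdate DESC) AS grp
FROM @Claims;
-- Grouping by Claim_ID, ClaimStatusCode and grp then isolates each
-- consecutive status run: the two adjacent 'A' rows share a grp value,
-- while the later 'A' row falls into a different one.
```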
August 24, 2011 at 4:32 pm
I think the confusing part is reusing the f00 column name. In the original table it holds the total per D_SBU, whereas the expected output holds the split data.
August 24, 2011 at 2:59 pm
I'm not sure if I'm overlooking the obvious, but wouldn't UNPIVOT help here? (untested)
SELECT D_AC,D_RU,D_SBU,f00_new, D_Ent
FROM
(SELECT D_AC,D_RU,D_SBU,f00,H_EE,H_EJ
FROM OriginalTable) p
UNPIVOT
(f00_new...
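Presumably the truncated UNPIVOT clause folds H_EE and H_EJ into f00_new, with D_Ent capturing the source column name. A completed sketch under that assumption (untested, like the original):

```sql
SELECT D_AC, D_RU, D_SBU, f00_new, D_Ent
FROM
(SELECT D_AC, D_RU, D_SBU, f00, H_EE, H_EJ
 FROM OriginalTable) p
UNPIVOT
(f00_new FOR D_Ent IN (H_EE, H_EJ)) AS unpvt;
```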
August 24, 2011 at 2:47 pm