Don't think so.. Couldn't get any 'shorthand' to work anyway...
OTOH, I don't see why the 'correct' syntax should look so overwhelmingly long...?
alter table test alter column col1 datetime null
alter table...
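Just to show it in context (a minimal sketch, table and column names made up):
create table test ( col1 datetime not null )
-- make the column nullable
alter table test alter column col1 datetime null
-- ...and back to not null (only works if the column holds no NULLs)
alter table test alter column col1 datetime not null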
December 12, 2006 at 7:39 am
Why do you want to do this, and what should the user be told? ...and if told, what should the user then do?
Being put on hold due to locking, isn't...
December 12, 2006 at 7:31 am
The wildcard works just the same with LIKE as with PATINDEX (you can see the example above)
For ranges, you just specify the allowed/disallowed chars you want to filter on one after...
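For example, something like this (table and column names are made up, just a sketch):
-- rows where the first character is a letter a through f
select * from myTable where col1 like '[a-f]%'
-- rows where the first character is NOT a digit
select * from myTable where col1 like '[^0-9]%'
-- the same range syntax works inside PATINDEX
select patindex('%[0-9]%', col1) from myTable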
December 12, 2006 at 1:40 am
It does seem like LIKE and a wildcard could solve it...?
This looks like it's working anyway..
create table #x ( a varchar(20) not null )
insert #x
select 'abcde' union...
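A complete little test along those lines might look like this (the sample values and the LIKE pattern are just made up to show the idea):
create table #x ( a varchar(20) not null )
insert #x
select 'abcde' union
select 'abc123' union
select '12345'
-- keep only the rows that start with a letter
select * from #x where a like '[a-z]%'
drop table #x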
December 11, 2006 at 8:54 am
Do you know how that algorithm works?
(as I have no clue..)
/Kenneth
December 11, 2006 at 7:26 am
For starters... It seems like you don't want all data in your table in the file...?
If that is the case, what do you want to use to select the rows...
December 11, 2006 at 7:18 am
...and how is just 'stored procedure' the 'solution'...?
The solution to what?
December 11, 2006 at 7:13 am
It would probably be easiest if you could post some samples of your data. That way we can see what can or cannot be done with it.
/Kenneth
December 8, 2006 at 9:22 am
There's nothing very complicated about it.
DISTINCT and GROUP BY are just two ways of essentially producing the same result - duplicates are removed.
Since your query produces many duplicates in the...
December 8, 2006 at 9:15 am
select t1.field1, t2.field3, t2.field4
from table1 t1
inner join table2 t2 on t2.field3 = t1.field1
where t1.field1 = 'char 200'
(gives you many dupe rows)
or.....
select DISTINCT t1.field1, t2.field3, t2.field4
from table1 t1
inner join table2 t2 on t2.field3...
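Just for completeness, the GROUP BY way of removing the same dupes (same made-up tables, same result as DISTINCT):
select t1.field1, t2.field3, t2.field4
from table1 t1
inner join table2 t2 on t2.field3 = t1.field1
where t1.field1 = 'char 200'
group by t1.field1, t2.field3, t2.field4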
December 8, 2006 at 9:07 am
You can use a command prompt and bcp.
bcp "select * from myTable where foo between 35 million and 70 million" queryout c:\myfile.txt -c -t; -T
(you get the picture)
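Spelled out a bit more, it could look like this (server name, database name and the actual numbers are made up):
bcp "select * from myDB.dbo.myTable where foo between 35000000 and 70000000" queryout c:\myfile.txt -c -t; -T -S myServer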
There's more info...
December 8, 2006 at 8:58 am
Mmm... slippery bastards, those rounding errors...
It does indeed seem like your case variant is the most straightforward and accurate way.
/Kenneth
December 8, 2006 at 8:54 am
If I read your question correctly, what you're really asking is a modeling question.
There's tons and tons of material written on data modeling, and there is no simple answer that...
December 7, 2006 at 4:59 am
Indeed it does. Interesting, haven't noticed that before.
Though it appears that if we replace FLOOR/CEILING with ROUND, it will take care of that. It seems to work the same as the...
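A minimal illustration of the difference (made-up value, not the numbers from the actual thread):
select floor(2.5)    as floor_val,   -- 2, always rounds down
       ceiling(2.5)  as ceiling_val, -- 3, always rounds up
       round(2.5, 0) as round_val    -- 3, rounds to the nearest (halves go away from zero)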
December 7, 2006 at 3:27 am