
Dropping a Row

Posted Monday, May 9, 2011 9:22 PM


SSC-Dedicated


Group: Administrators
Last Login: Today @ 12:57 PM
Points: 33,206, Visits: 15,361
Comments posted to this topic are about the item Dropping a Row






Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #1105829
Posted Tuesday, May 10, 2011 6:20 AM
Right there with Babe


Group: General Forum Members
Last Login: Today @ 8:37 AM
Points: 751, Visits: 1,917
It falls into the 'right tool for the right job' category.

Financials, order entry, etc. need to be exact. By comparison, many of the largest data set operations (such as business/trend analysis) can easily afford to lose a small amount of data (often the noise in the original data is larger than such a loss).

If you use a highly structured data system for non-rigid data, you are probably using too many resources.
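
As a rough illustration of the noise point (all numbers below are hypothetical, not from any real data set), a quick Python sketch comparing the error introduced by dropping one row in a thousand against ordinary measurement noise:

import random

random.seed(42)

# Hypothetical metric: true value 100, recorded with ~5% measurement noise.
true_rows = [100.0] * 1_000_000
noisy_rows = [v * random.gauss(1.0, 0.05) for v in true_rows]

# Simulate losing 1 row in 1,000 (0.1%) from the noisy data.
surviving = [v for v in noisy_rows if random.random() > 0.001]

true_mean = sum(true_rows) / len(true_rows)
noisy_mean = sum(noisy_rows) / len(noisy_rows)
lossy_mean = sum(surviving) / len(surviving)

print(f"error from noise alone:     {abs(noisy_mean - true_mean):.6f}")
print(f"extra error from lost rows: {abs(lossy_mean - noisy_mean):.6f}")

For aggregate and trend work the dropped rows barely move the answer; the measurement noise dominates. A financial ledger, of course, has to balance to the penny, so that trade-off doesn't apply there.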


...

-- FORTRAN manual for Xerox Computers --
Post #1106036
Posted Tuesday, May 10, 2011 6:35 AM
Ten Centuries


Group: General Forum Members
Last Login: Wednesday, August 27, 2014 8:01 AM
Points: 1,419, Visits: 2,085
I understand your point that sometimes a row might be missed. If I compare a nuclear plant with a web forum, for instance, one is highly critical and errors there can affect human lives, while the other is far from that and almost no one would notice.

However, from a customer's perspective, if a company, whatever service I'm paying for or using, 'loses' a row from an order or from anything else I asked of them, then in my eyes as a customer it's a flaw and I'll be more inclined to change companies or services. I will read it as 'I'm unimportant to them', because they didn't take care of my requests the way they should have.

Although it may be utopian, in my eyes no information should be lost 'by design'.

These are two worldviews in conflict: IT, with its technical limitations, versus the expectations of people who (for the most part) don't know the technical details.

This is a debatable subject.
Just my two cents.
Post #1106043
Posted Tuesday, May 10, 2011 6:59 AM
SSC-Addicted


Group: General Forum Members
Last Login: Tuesday, August 26, 2014 1:19 PM
Points: 471, Visits: 841
When it comes to cost, I have been amazed at how much risk management teams are willing to accept. It has always appalled me to see poor backup plans, lack of DR, and no high availability in scenarios where minor improvements and expenditures would significantly reduce risk, yet management says no, regardless of how well it is explained.

I would bet there are quite a few places where the decision would be made to accept a threshold of loss because of the cost of preventing it.
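
To put that trade-off in rough numbers (the figures below are purely hypothetical), the annualized-loss-expectancy style of calculation these decisions usually boil down to:

# Hypothetical back-of-the-envelope risk calculation; no real figures.
single_loss_expectancy = 50_000      # estimated cost of one data-loss incident ($)
annual_rate_of_occurrence = 0.2      # expected incidents per year (one every 5 years)

annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence  # $10,000

mitigation_cost_per_year = 25_000    # e.g. HA/DR hardware, licensing, admin time

print(f"Expected loss per year:   ${annualized_loss_expectancy:,.0f}")
print(f"Mitigation cost per year: ${mitigation_cost_per_year:,.0f}")

With these numbers the mitigation costs more than the expected loss, so management says no; the argument is usually over whether the single-loss figure honestly captures reputational and regulatory damage.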
Post #1106059
Posted Tuesday, May 10, 2011 7:14 AM
SSCommitted


Group: General Forum Members
Last Login: Monday, August 4, 2014 8:10 AM
Points: 1,635, Visits: 1,972
I'm not quite sure I agree that if Facebook lost 1 out of 1,000 posts no one would care. I do agree that with Google most people wouldn't care if they got slightly different search results for the same terms. It's the difference between transitory and persisted data. Facebook posts are essentially persisted data: they get put in and are there forever. Google searches are more transitory: with Google continually crawling the web, if a page gets missed once it will get picked up again later, so it's not a big deal. Neither of these is critical, but it's the nature of how the data is generated and used that makes the difference.
Post #1106078
Posted Tuesday, May 10, 2011 7:24 AM
Forum Newbie


Group: General Forum Members
Last Login: Friday, July 26, 2013 12:06 PM
Points: 7, Visits: 41
A side note on the subject of getting different results from Google: that is the norm. See http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html
Post #1106089
Posted Tuesday, May 10, 2011 8:18 AM
Valued Member


Group: General Forum Members
Last Login: Thursday, July 28, 2011 8:03 AM
Points: 70, Visits: 316
jay holovacs (5/10/2011)
It falls into the 'right tool for the right job' category.

Financials, order entry, etc. need to be exact. By comparison, many of the largest data set operations (such as business/trend analysis) can easily afford to lose a small amount of data (often the noise in the original data is larger than such a loss).

If you use a highly structured data system for non-rigid data, you are probably using too many resources.


I think your point about noise is a key consideration. None of the people entering or interpreting the data are perfect, and incorrect data is often more damaging than missing data.
Post #1106158
Posted Tuesday, May 10, 2011 8:21 AM


SSCommitted


Group: General Forum Members
Last Login: Today @ 1:50 PM
Points: 1,652, Visits: 4,713
cfradenburg (5/10/2011)
I'm not quite sure I agree that if Facebook lost 1 out of 1,000 posts no one would care. I do agree that with Google most people wouldn't care if they got slightly different search results for the same terms. It's the difference between transitory and persisted data. Facebook posts are essentially persisted data: they get put in and are there forever. Google searches are more transitory: with Google continually crawling the web, if a page gets missed once it will get picked up again later, so it's not a big deal. Neither of these is critical, but it's the nature of how the data is generated and used that makes the difference.

I'm not sure how many users monitor their guest book or blog posts closely enough on a daily basis to notice if one entry (out of a couple hundred) from months back suddenly disappeared. I'm sure somebody eventually would, and they'd be really verklempt about it. However, there generally isn't anything like a Service Level Agreement between a social media company and its users. Even if the issue were brought to the company's attention, I doubt they would respond by assigning a DBA the task of digging through backups or transaction logs to locate the missing data.
On the other hand, if a bank were dropping transactions, within a few hours customers would start calling in with complaints about non-posted paychecks or missing daily deposits. It would become news very fast, and the bank would be required by law to fix it.
Regarding where NoSQL databases can properly fit in a corporate enterprise environment, there is a lot of non-transactional stuff like documents, images, reference data, and entity-attribute-value records that could be better offloaded from the RDBMS into NoSQL. I could see the merits of a blended architecture, even in an organization like a bank.
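
As a loose sketch of what that offloading might look like (the entity, attribute names, and values below are invented for illustration), the same product attributes modeled as entity-attribute-value rows in the relational store versus a single document in a document store:

# Hypothetical example only: the same entity as EAV rows vs. one document.

# Relational EAV shape: one row per attribute, pivoted back together at read time.
eav_rows = [
    ("product-1001", "color",    "red"),
    ("product-1001", "weight_g", "350"),
    ("product-1001", "material", "aluminium"),
]

# Document shape: one self-contained record, a natural fit for a document store.
product_doc = {
    "_id": "product-1001",
    "color": "red",
    "weight_g": 350,
    "material": "aluminium",
}

# Rebuilding the entity from EAV rows means pivoting (and re-typing) the values;
# the document is read and written as a unit.
print({attr: value for _, attr, value in eav_rows})
print(product_doc)

In a blended architecture the orders and ledger would stay in the RDBMS, while this kind of sparse, non-transactional attribute data could live in the document store.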
Post #1106163
Posted Tuesday, May 10, 2011 9:00 AM
SSCommitted


Group: General Forum Members
Last Login: Monday, August 4, 2014 8:10 AM
Points: 1,635, Visits: 1,972
Eric M Russell (5/10/2011)
I'm not sure how many users monitor their guest book or blog posts close enough on a daily basis to notice if one (out of a couple hundred) entries from months back suddenly disappeared. I'm sure somebody eventually would, and they'd be really verklempt about it.


If it were an old one, then chances are very slim that anyone would notice. I was coming from the perspective of a failed write, meaning it's a new post rather than an old one. If it's a read that fails and the post shows up after a refresh, no one is going to care whether it's Facebook or a blog. Well, no one should care. If it were a medical record, one bad read could have very, very bad consequences even if the data shows up on a refresh.
Post #1106205
Posted Tuesday, May 10, 2011 9:04 AM


SSC-Dedicated


Group: Administrators
Last Login: Today @ 12:57 PM
Points: 33,206, Visits: 15,361
Donald Bustell (5/10/2011)
A side note on the subject of getting different results from Google: that is the norm. See http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html


I saw that after I wrote this and found it very interesting. A failing of algorithmic learning.

Also: made your link hot







Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #1106209