
Data Vision
Posted Wednesday, June 12, 2013 8:55 AM
SSC-Addicted


Group: General Forum Members
Last Login: Yesterday @ 2:46 PM
Points: 470, Visits: 769
Steve Jones - SSC Editor (6/12/2013)
lshanahan (6/12/2013)
We all make mistakes, and may never catch all our errors in testing, but we damn sure better catch our stupidity. Not checking the input data before processing it is just stupid. We all know it can lead to all kinds of problems, not just incorrect data.


Agreed, though I was going in a slightly different direction. I was also thinking of situations where statistical models and predictive analytics may help catch not only errors (and the stupid stuff) but also alert us to potential issues that are very hard to monitor and don't involve errors per se, so action can be taken ahead of time to minimize or even completely avoid problems.


I agree with that, but I'm also thinking we might find that tools can learn to suggest better options as well, such as looking at new features in the current version when editing an older package, or noting that the volume of data flowing through an SCD task is too large and it's slower than an alternative would be.


Now that is intriguing, because I don't see humans being suited to that task, at least not very efficiently.

I solved a performance/stability issue once by searching source code for instances where we made calls to C++ memory allocation functions. It would be somewhat trivial to produce a tool to handle this, or reconfigure the compiler to look for times when we allocated but did not delete.

Using a tool to do those things which we aren't very efficient at is a great use of resources.
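
To make that concrete, a rough first pass at such a tool might look like the sketch below. It's Python rather than a compiler change, and the file extensions, the plain new/delete word counting, and the "flag files with allocations but no deletes" rule are all assumptions on my part; it would miss smart pointers, placement new, and objects freed in another file, so treat it as a starting point for a manual review rather than a leak detector.

# Rough first-pass scan for C++ allocation/deallocation imbalance.
# Counts occurrences of "new" and "delete" per file; a file with many
# allocations and no deallocations is worth a closer look. This is a
# heuristic, not a leak detector.
import re
from pathlib import Path

NEW_RE = re.compile(r"\bnew\b")
DELETE_RE = re.compile(r"\bdelete\b")

def scan(root="."):
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".cpp", ".cc", ".h", ".hpp"}:
            continue
        text = path.read_text(errors="ignore")
        news = len(NEW_RE.findall(text))
        deletes = len(DELETE_RE.findall(text))
        if news and not deletes:
            # Allocations with no matching deallocations in the same file.
            print(f"{path}: {news} new, {deletes} delete")

if __name__ == "__main__":
    scan()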


Dave
Post #1462681
Posted Thursday, June 13, 2013 9:29 AM
SSCrazy


Group: General Forum Members
Last Login: Today @ 10:53 AM
Points: 2,268, Visits: 1,326
djackson 22568 (6/12/2013)
Now that is intriguing, because I don't see humans being suited to that task, at least not very efficiently.

I solved a performance/stability issue once by searching source code for instances where we made calls to C++ memory allocation functions. It would be somewhat trivial to produce a tool to handle this, or reconfigure the compiler to look for times when we allocated but did not delete.

Using a tool to do those things which we aren't very efficient at is a great use of resources.


Hi Dave, I won't argue with your current train of thought, but I might add something. Not only should we control the memory leaks created by a use-and-leave strategy, we should also monitor the number of calls made to certain routines. Years back, CICS used to track this in the IBM environment, and we could see what was happening even at the system subroutine level. With those counts in hand we could determine whether the system was actually reusing the allocated, mapped instance or was for some reason not sharing an instance but always creating a new one. That left us at times with memory blocks scattered all over the place and a massive cleanup effort going on regularly at the OS level.

When we were able to identify those routines, we determined whether they were really necessary, whether we were abusing a routine unnecessarily, or whether it was simply popular. If it was popular, we could load it early in the CICS region and make it stay resident, almost forcing programs to use the one instance. That saved a lot of execution time.
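
To illustrate the idea outside of CICS, here is a toy Python sketch of call-count monitoring plus a "make it resident" cache. The RoutineRegistry name and the threshold of 100 calls are made up for the example; the point is just that the call counts tell you which routines are popular enough to be worth keeping as a single shared instance.

# Toy illustration: count how often each routine is requested, and keep a
# single shared ("resident") instance for the popular ones instead of
# constructing a new object on every call.
from collections import Counter

class RoutineRegistry:
    def __init__(self, residency_threshold=100):   # threshold is illustrative
        self.call_counts = Counter()
        self.resident = {}          # name -> shared instance
        self.threshold = residency_threshold

    def get(self, name, factory):
        """Return an instance of the routine named `name`.

        `factory` builds a fresh instance; once a routine has been
        requested often enough, one instance is kept and reused.
        """
        self.call_counts[name] += 1
        if name in self.resident:
            return self.resident[name]
        instance = factory()
        if self.call_counts[name] >= self.threshold:
            self.resident[name] = instance   # make it "stay resident"
        return instance

    def report(self):
        # The call counts are the monitoring data: they show which routines
        # are popular enough to be worth preloading.
        for name, count in self.call_counts.most_common():
            print(f"{name}: {count} calls, resident={name in self.resident}")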

M....


Not all gray hairs are Dinosaurs!
Post #1463157
Posted Thursday, June 13, 2013 9:38 AM
SSCrazy


Group: General Forum Members
Last Login: Today @ 10:53 AM
Points: 2,268, Visits: 1,326
Steve, like you I find visualizations of data very helpful, and they are excellent tools for data quality validation and checking as well. Odd spikes in a chart, or locations mapped way out in the water, are very obvious if you take the time to visualize, chart, or map your data.

Visual presentations of data, if they can be prepared quickly, can be a nice tool for spotting trends. This is not news, and I know millions have said it already, but it remains true. If we take the time to do preliminary analysis, even after the data is captured or uploaded, we can identify oddities or trends we might want to investigate. In doing so we might find, for example, that continuous monitoring devices calibrated by a new employee were not set correctly before they went into the field, and the data is off by a couple of decimal places. And the neat thing is that if we use the visualizations in a dashboard, we will quickly see any variations in the data we monitor.
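
As a rough illustration of that kind of dashboard check, here is a small Python sketch that plots a reading over time and flags values roughly an order of magnitude away from the median, which is the pattern you would see with a decimal-place calibration error. The CSV column names and the "more than about 5x from the median" rule are placeholders for the example, not anything from a real system.

# Plot a sensor reading over time and highlight values far enough from the
# median to suggest a decimal-place (calibration) error.
import pandas as pd
import matplotlib.pyplot as plt

def plot_with_outliers(csv_path, value_col="reading", time_col="recorded_at"):
    df = pd.read_csv(csv_path, parse_dates=[time_col])
    median = df[value_col].median()
    # Values an order of magnitude away from the median are suspects.
    suspect = (df[value_col] > median * 5) | (df[value_col] < median / 5)

    ax = df.plot(x=time_col, y=value_col, figsize=(10, 4), legend=False)
    ax.scatter(df.loc[suspect, time_col], df.loc[suspect, value_col],
               color="red", zorder=3, label="possible calibration issue")
    ax.legend()
    plt.tight_layout()
    plt.show()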

I have built OLAP cubes over datasets just to see what is going on, and have been rewarded with information and interesting facts concerning the data collections by doing so.

Loved the post!

M.


Not all gray hairs are Dinosaurs!
Post #1463163
Posted Thursday, June 13, 2013 10:01 AM


SSC-Dedicated


Group: Administrators
Last Login: Today @ 1:35 PM
Points: 33,099, Visits: 15,207
Visualizations are a great place to start, but I'm not sure they're often good for decisions. It's better to dig further into the data itself and verify with the details. However, they do get us to think about data in different ways.






Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #1463185