Object naming

  • This appears to be a purely academic debate. Standards are established within an organization to maintain some kind of order. There is no reason standards cannot change, or new ones be added, given careful thought. But the argument that one can tell what an object is by its call is pure nonsense. Even if that were true, the problem remains that one must have access to the calling process in order to determine the object's type, unless you reference the sysobjects table (a lookup of that sort is sketched below). What if there is personnel turnover, or the code is locked down or otherwise access-restricted? Why make someone's job harder?


    My suggestion is to stick to your guns, but be a little flexible. Most shops I have worked in have had a naming convention much like the one you have identified.


    Good Luck


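    A minimal sketch of the metadata lookup that post alludes to, assuming SQL Server and a hypothetical object name usp_GetCustomers (sys.objects is the modern counterpart of sysobjects):

    ```sql
    -- Determine what kind of object a name refers to without relying on a prefix.
    SELECT name, type, type_desc      -- e.g. U = user table, V = view, P = stored procedure
    FROM sys.objects
    WHERE name = 'usp_GetCustomers';  -- hypothetical object name
    ```
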
  • Standards exist to ease your work. By maintaining them, no one can drag you to their own standards. In addition, standards are always defined within your organization; there is no global set of rules for standards!

    My Blog: http://dineshasanka.spaces.live.com/

  • Well, this article addresses only stored procedure naming conventions, but the posts in response should give you a good indication of where different people draw the line between "acceptable" and "beyond acceptable"!

    **ASCII stupid question, get a stupid ANSI !!!**

  • I for one cannot understand why people want to differentiate between tables and views using prefixes. They are not different! For instance, consider this common scenario:

    You have a table FOOBAR (a, b, c), and there are lots of applications using SELECT a, b, c FROM FOOBAR. Some time later you need to split FOOBAR into FOO and BAR, so that you now have FOO (a, b) and BAR (a, c). To avoid having to modify all the applications, you can simply add a view named FOOBAR that joins FOO and BAR to return the (a, b, c) set just as the old table did. The applications have no idea that there are now really two tables (see the sketch at the end of this post).

    If FOOBAR were called tblFOOBAR, you would now have a view prefixed with tbl, or you would need to modify your apps.

    Prefixes are a thing of the past, a mistake caused by lack of support from tools and unclear thinking.
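
    A minimal T-SQL sketch of the scenario above (the FOO, BAR, and FOOBAR names come from the post; the column types and the join key a are assumptions):

    ```sql
    -- After the split, the data lives in two tables keyed on column a.
    CREATE TABLE FOO (a int PRIMARY KEY, b varchar(50));
    CREATE TABLE BAR (a int PRIMARY KEY, c varchar(50));
    GO

    -- Compatibility view that keeps the old name, so existing applications
    -- can keep running SELECT a, b, c FROM FOOBAR unchanged.
    CREATE VIEW FOOBAR
    AS
    SELECT FOO.a, FOO.b, BAR.c
    FROM FOO
    JOIN BAR ON BAR.a = FOO.a;
    GO
    ```

    Had the original table carried a tbl prefix, this view would either inherit a misleading tblFOOBAR name or force every calling application to change.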

  • Hungarian Notation was invented because C had no type safety, so programmers had to keep track of data types themselves.

    I personally think it is unnecessary and ugly; it isn't so hard to figure out what data type something is with modern IDEs (or the INFORMATION_SCHEMA views in SQL; see the sketch below). And sometimes it doesn't matter which type something is (as in Chris Hedgate's example). And sometimes you change a data type and then have to either rename hundreds or thousands of references to that variable (or whatever it is), or live with the wrong notation prefix everywhere.

    But, different strokes for different folks... If I start on a pre-existing project (or work with a pre-existing group) I follow whatever convention is already in use. It isn't all THAT important.

    -- Stephen Cook
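
    A minimal sketch of the INFORMATION_SCHEMA lookup mentioned above (the table and column names here are hypothetical; the metadata view itself is standard):

    ```sql
    -- Look up a column's declared data type from metadata instead of
    -- encoding the type in the column's name.
    SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'FOOBAR'     -- hypothetical table name
      AND COLUMN_NAME = 'b';        -- hypothetical column name
    ```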

  • "Hungarian Notation was invented because C had no type safety, so programmers had to keep track of data types themselves."

    Actually, Hungarian Notation as it is known today was 'invented' by a documentation writer who did not understand what the inventor really meant. See Joel Spolsky's "Making Wrong Code Look Wrong" for more info. Scroll way down towards the end if you do not wish to read the entire article; it is very good, though, so I highly recommend it.
