Step Error code: 80040078

  • Hi,

    I've recently added a new cube in Analysis Manager - 8.00.2039 - and it's returning this error in the DTS package that refreshes the cubes.

    Step Error Source: ARM_COMMISSIONS_ACTIVE_WEEKLY_BONUSES_CUBE^Client Code

    Step Error Description: Processing error [Source data contains no rows] 'ARM_COMMISSIONS_ACTIVE_WEEKLY_BONUSES_CUBE^Client Code'S'

    Step Error code: 80040078

    Step Error Help File:

    Step Error Help Context ID:1000440

    I have a single DTS package that runs a series of other packages to load the data into my reporting DB - SS2000 SP4 - with the last sub-package refreshing all the cubes, i.e. once all the new data has been loaded. The problem I can't work out is that when I refresh this cube manually through AM I get no errors. I've tried this remotely from my desktop and directly on the server with no problems.

    The cube is really simple and uses 4 independent dimensions generated from the fact table. Assuming that I've read this error correctly, I have checked the fact table and can't find a Client Code 'S' - the client codes are a mixture of 5-digit numeric and alphanumeric codes stored as Char(5) in the source table.
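
    The check I mean was along these lines - the table and column names here are placeholders, not the real schema:

        -- Look for the literal code 'S' and for anything that isn't the
        -- expected 5-character shape. Placeholder names throughout.
        SELECT client_code, COUNT(*) AS row_count
        FROM fact_bonuses
        WHERE client_code = 'S'
           OR LEN(client_code) <> 5
        GROUP BY client_code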

    Any ideas gratefully received.

    K.

     

    It's been a while, but the way I read the error message is that the Client Code dimension of the cube (ARM_Comm....) has no source data.  Have you tried manually reprocessing the dimensions individually?

    Just on a general design point, a little pre-processing of the data to create true dimension tables is often considered a better design - see the sketch below.  There are exceptions to this, such as when your fact table is extremely short (i.e. few rows).  Otherwise, each of your four dimension builds results in a table scan of the fact table, and then the fact/partition build will scan it again.  That is 5 full reads of your (usually) largest table, where the first 4 (normally) return a comparatively small number of unique elements/rows.  Just a thought.
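
    By way of illustration, something like this builds a real dimension table once per load (all names here are made up - adjust to your schema):

        -- Rebuild a small dimension table from the fact once per load,
        -- so the dimension process reads a handful of rows instead of
        -- rescanning the whole fact table. Placeholder names.
        IF OBJECT_ID('dim_client') IS NOT NULL
            DROP TABLE dim_client

        SELECT DISTINCT client_code
        INTO dim_client
        FROM fact_bonuses

    You would then point the Client Code dimension at dim_client in Analysis Manager rather than at the fact table.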

     

    Steve.

  • Hi,

    Thanks for the reply. In this case the fact table is < 100 rows and the cube processes very quickly, so I'm not too worried - but it is worth bearing in mind.

    As for the error: as I said, the really odd thing is that there is data in the fact table, and the cube will process manually after the package has failed.

    There are other cubes and dimensions that use data from the same source tables as this cube, and they are processed without any problems.

    I'll build the other dimensions I need separately and try it that way to see if it makes any difference.

    Thanks

    Chris

    Have you got processing logging turned on on the AS server?  It may give you a little more detail than what DTS is capturing.  Have you also checked the event logs to see if there are any app or permissions issues at the time of failure?

     

    Steve.

  • Hi Steve,

    I've turned it on now. However, checking the SQL Server log, there is a deadlock notification I'd missed that corresponds to around the time this cube is being refreshed.

    Transaction (Process ID 13) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

    One explanation could be that the view this cube uses as its data source is a union select using the same table twice, which I suppose could cause a deadlock - I'm not very convinced here.
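
    The view is shaped roughly like this - I don't have the real definition to hand, so the names and predicates are made up:

        -- The data source view reads the same base table twice and
        -- unions the results. Illustrative shape only, not the real
        -- definition.
        CREATE VIEW vw_weekly_bonuses AS
        SELECT client_code, bonus_amt, tran_date
        FROM bonuses
        WHERE bonus_type = 'WEEKLY'
        UNION ALL
        SELECT client_code, adjustment_amt, tran_date
        FROM bonuses
        WHERE bonus_type = 'ADJUSTMENT'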

    Another explanation could be derived from the following: the metadata for the cubes shows the process start times are all the same. Can I assume from this that the cubes are being processed in parallel rather than in series? If so, that would certainly explain why a deadlock is happening, as the tables would be used by a number of cubes at the same time - although I'm more than a little surprised it hasn't happened before. I've set up one DTS package to reprocess the whole OLAP server - dimensions first, then cubes - so I guess I could do one package to deal with the dimensions and then do the cubes one at a time?

    The third explanation is that something other than the OLAP server has a lock on one of the tables the cube is trying to use.

    I can discount the idea that this is happening before the cube is processed, as the tables are truncated and then reloaded before the cubes are processed. If the tables were locked when the table-reloading DTS packages ran, then either the package would fail - which I would see in the package log - or the deadlock victim would be the mystery locker, in which case the cube wouldn't have a problem. That leaves something locking the table after it has been reloaded, and the only thing that could be is the OLAP server, as no one is using the data at that time of night.

    Please feel free to shoot me down in flames

    K.

    I know it's a pain, but I would be tempted to run Profiler (or tool of choice) during processing to try and catch the exact culprits (holder of the lock and deadlock victim).  Knowing this could help in determining the course of action.
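
    A lighter-weight option, if running Profiler overnight is awkward: SQL Server 2000's deadlock trace flags will write the detail to the error log for you. A minimal sketch:

        -- Trace flag 1204 writes deadlock chain detail (participants,
        -- resources, victim); 3605 routes that output to the SQL Server
        -- error log; -1 applies the flags to all connections.
        DBCC TRACEON (1204, 3605, -1)

        -- And/or poll for blocking while the overnight job runs:
        SELECT spid, blocked, waittype, lastwaittype, cmd, loginame
        FROM master..sysprocesses
        WHERE blocked <> 0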

    If you truncate the tables, do you reapply/create the indexes (are there indexes?)?

    If the cubes (and dims) are all building at the same time - and, I guess, because you're effectively doing a 'select *' from the fact table - the SQL engine could be giving the first cube/partition a full table lock.  I feel like this is a bit of a straw to clutch at, though.  Again, if you can identify exactly which party is which, it should be easier to determine how to fix it.

     

    Steve.

  • Hi Steve,

    I drop the table indexes, truncate the table, import the data and re-apply the indexes afterwards - I believe that is the correct way to do this; it's certainly the quickest way I found when I first started to build and test this system.
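
    For reference, the pattern looks like this (table and index names below are placeholders, not my real schema):

        -- Drop the index so the bulk load doesn't pay index maintenance
        -- row by row. Placeholder names throughout.
        DROP INDEX fact_bonuses.ix_fact_bonuses_client

        -- TRUNCATE deallocates the pages and is minimally logged,
        -- unlike a DELETE.
        TRUNCATE TABLE fact_bonuses

        -- ... the DTS import step runs here ...

        -- Rebuild the index in one pass over the fully loaded table.
        CREATE INDEX ix_fact_bonuses_client ON fact_bonuses (client_code)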

    I've had to drop the cube, as I wasn't around over the weekend and couldn't afford to have it messing everything up. I'll put the cube back on tomorrow and run Profiler overnight, so I should have some results on Wednesday a.m. UK time.

    Thanks for all your ideas.

    K.


    Gotcha!

    When I first set up my DB, I had all the imports running at 11pm to avoid conflicting with our ERM DB backups.  The cube refreshes would run at ~1am the following day. So when I designed the queries for this process, I needed to select records with dates that would appear as 12/01/2007 when the query was running on 13/01/2007, necessitating the use of dateadd(d,-1,...) in the query.

    As the ERM backups began to take longer due to the volume of data, I switched my imports to start at 8pm, which meant the cubes were being processed around 10pm... and I forgot to change this query. Consequently, every night this cube failed to process as no data matched the date criteria. The following morning I would do a manual refresh and of course it would work fine.
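
    In sketch form, the trap was this (names are illustrative - the real view is more involved):

        -- Old filter: assumes the query runs AFTER midnight, so
        -- "yesterday" is the day that was just loaded. Run at 10pm the
        -- same day, DATEADD points at a day with no rows. Placeholder
        -- table/column names.
        SELECT client_code, bonus_amt
        FROM fact_bonuses
        WHERE tran_date = CONVERT(char(8), DATEADD(d, -1, GETDATE()), 112)

        -- A run-time-independent alternative: key off the data itself
        -- rather than the clock.
        SELECT client_code, bonus_amt
        FROM fact_bonuses
        WHERE tran_date = (SELECT MAX(tran_date) FROM fact_bonuses)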

    The lock problem was actually related to another cube and has been resolved.

    I've now re-written the query in a way that avoids this completely.

    Thanks for all your help.

    K.

