
Thank God I Chose SQL Server part I: The Tribulations of a DB2 Trial


My last column, Misadventures in TDE: How to Restore an Encrypted SQL Server Database Completely Wrong, described a comedy of errors of my own making. This week’s mistutorial likewise teaches through bad example, on the same heuristic principle that experiencing what can go wrong with a task can sometimes be just as educational as doing it the right way, by the book. The difference in this case is that the errors were not of my own making and could have led to tragedy in a production environment. In my next two columns I will discuss a series of fatal flaws that wrecked my experience with trial software from some of SQL Server’s competitors, which taught me some valuable lessons. First, I gained a new appreciation for SQL Server and came away believing more strongly than ever that when I chose years ago to focus mainly on this Relational Database Management System (RDBMS), I had chosen well. I’m certain a DB2 or Oracle guru could point out some things I did wrong in the installations of these trials, but any errors on my part were not self-evident, while the mistakes embedded in the installers were glaring. This led to a second realization: everyone who writes or uses software of any kind puts implicit trust in the developers, not just in the obvious sense of trusting that their software will not have security holes, but also in trusting that it will be manageable and won’t waste your time or degrade your operating system or network. The DB2 trial sadly failed on both counts and made me question why I should trust IBM with enterprise data if it could not get the job done with simple test data.

I initially went into my trial of IBM’s DB2 software over the winter with high hopes, since I was eager in particular to try out their version of MDX, the multidimensional data access language that originated with SQL Server. The idea was to write a column later on about the contrasts between the DB2 and SQL Server methods of handling multidimensional data, since that is supposed to be the primary focus of this blog. Sad to say, the trial went so badly that I never even got around to trying InfoSphere Warehouse or IBM’s version of MDX, because so many other problems cropped up during mundane installation tasks, both for the multidimensional tools and for the regular transactional components of DB2 that they depend on. I envisioned including side-by-side code samples of the IBM and SQL Server versions of MDX, for example, but now I have to write a column of an entirely different kind. DB2 and Oracle might install and operate more smoothly on other platforms, such as Linux, which SQL Server is not compatible with; all I can comment on are the shortcomings that were immediately apparent when installing and operating them on my development machine at home, which runs Windows 7 Professional on a six-core AMD processor. The fact that I couldn’t even download the DB2 trial from IBM’s own site on the same computer was an omen of things to come.

How IBM Sold Me on SQL Server

                This column would have been written earlier, had I not been forced to put off downloading the trial for more than a month due to a serious Java applet error on IBM’s website, which made me question whether the Java-based and web components of DB2 would work in a production environment. Businesses typically make their most serious mistakes in the products they’ve sold, not in the sales process itself, so if a big corporate software company can’t even get the software it uses to sell you a product right, it makes one wonder how good the software it sells you will be. Simply identifying how much DB2 software costs was difficult in comparison to SQL Server, whether through a search engine or on IBM’s own website. Nor could I even download the trial software, because of the Java error on IBM’s part, which looked bad. Their tech support page asked for the URL where the problem occurred and left me a small space to describe the problem, which was actually fairly simple to put in words: “When attempting to download DB2 for Windows 64-bit, a window with the title ‘Applet Loading’ pops up. Then my Internet Explorer version 10 browser freezes up. The window never changes or disappears and the download never begins. I have already tried adding the webpage to my Trusted Sites list but that didn’t work.” In return, I received the following run-around from IBM tech support:

“Dear Steve,

Thank you for contacting IBM. We are in receipt of your e-mail regarding the errors and difficulties you are facing while accessing the IBM Web site. We apologize for any inconvenience that you may have experienced while accessing the IBM Web site. Please note that we have visited the Web site and it is functioning at this time.  Please clear the cache and cookies of your browser and retry accessing the IBM Web site. We have been able to navigate and download the Software from the following URL: https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_us&source=swg-dm-db297trial&S_CMP=web_dw_rt_swd

Please copy and paste the entire URL as a single line into the address bar of your browser. If you continue to encounter errors, please send us another e-mail with the following information:

- links and/or files you are accessing or experiencing problems with

- error messages you receive

- steps to recreate the issue

Thank you for visiting our site and contacting IBM.”

Quite obviously, this support specialist didn’t read a word I wrote and was just dismissing the case as quickly as they possibly could, with the usual trite, useless advice that is the bane of entry-level tech support across the planet: just clear your cookies and everything will be fine. I had already sent them the link I was experiencing trouble with, for a file I couldn’t download, with the steps to recreate an issue that produced no error messages. I didn’t report that the web address was inaccessible, only that the download process didn’t work on my computer, so the fact that an employee could access the URL and download the software themselves had no bearing on the issue at all. The experience made me question whether IBM’s tech support would be just as clueless if I were to put DB2 into a production environment, where hundreds of users might depend on them immediately knowing the right answer to highly technical questions. Clearing your cookies might not cut it then. Downloading patches might not be feasible either if their URLs also feature badly written Java applets. Since my other development machine was down, I fixed the problem the old fashioned way: I went to a relative’s house for Thanksgiving and downloaded the trial software on their computer, which obviously didn’t have the same unidentified compatibility issue between Java and Internet Explorer that I experienced. At that point, downloading the software had gone way down on my list of priorities, so I didn’t mind waiting for the holiday season. Because I had to resort to that workaround for a simple trial, I was already beginning to lean toward the ultimate workaround: permanently sticking with SQL Server instead of switching to DB2, at least in a Windows environment. I began to lean even further in that direction after my trial software was up and running.

Suboptimal Development Studio

                The setup process wasn’t as bad as the cataclysm I experienced with my Oracle trial, which I will discuss in my next column, but it wasn’t anywhere near as smooth as any SQL Server installation I’ve ever done, even when I was a complete Microsoft newbie years ago. I installed as much IBM database evaluation software as I could all at once so that the trials would run concurrently and they could interoperate, including DB2; DB2 Recovery Expert for LUW; IBM Data Studio; IBM Database Add-ins for Visual Studio; IBM InfoSphere; IBM InfoSphere Warehouse; IBM Installation Manager; IBM WebSphere and IBM Optim Development Studio. After all of these installation processes were complete, I had the following products listed under Programs and Features when the trial ended and the time to uninstall came: DB2 Advanced Enterprise Server Edition; DB2 Enterprise Server Edition; IBM Database Add-Ins for Visual Studio; DB2 Net Search Extender; IBM Optim; IBM Infosphere; IBM Data Studio; IBM Installation Manager; IBM DB2 Merge Backup; IBM Websphere Application Server; IBM Infosphere Warehouse; IBM DB2 Recovery Expert and DB2 Accessories Suite 9.7.0.4. Some of these had subordinate components, such as InfoSphere Warehouse, which includes such functionality as DB2 Enterprise Server Edition Intelligent Miner, DB2 Query Patroller, InfoSphere Federation Server Relational Wrappers, InfoSphere Warehouse application server; Administration Console and Workload Manager; SQL Warehousing administration; Cubing Services administration; Intelligent Mining administration; Workload Manager; Unstructured Text Analysis; IBM Data Server Client; WebSphere Application Server; Cubing Services cube server; Mining Blox; InfoSphere Warehouse client; Design Studio; SQL Warehousing (SQW) Tool; Cubing Services modeling; Intelligent Mining tool; Unstructured Text Analysis tools; Mining Blox tools; IBM Data Server client; DB2 Query Patroller Center; Intelligent Miner Visualization; Cubing Services client and Administration Console Command Line Client. I was enthusiastic about exploring all of this functionality, especially the data mining stuff, but I ran into so many problems that my trials expired first. At first I intended to try it all, but began prioritizing by cutting out products that were only tangentially related to data warehousing, like Infosphere Data Architect, the database model and diagram generation program that is roughly equivalent to Visio. Eventually, I had to abandon most of the remainder.

On the first day, however, I focused primarily on getting some of my SQL Server databases ported over to DB2 so I could play with some familiar data, including some warehouses I built. I tried to accomplish this through IBM Data Studio, which combines the three GUI tools that were previously needed to access DB2: Optim Development Studio, Optim Database Administrator and Data Studio Health Monitor. As a Visual Basic developer since the days of VB 5.0, I had little interest in the Development Studio functionality, which is roughly equivalent to Visual Studio but dedicated to Java programming. The Database Administrator functionality is designed to provide essentially the same things SQL Server Management Studio does, except that it lacks many of the bells and whistles. Worse yet, many of the bells and whistles it does have are off-key, in the sense that they don’t work as well as their SSMS counterparts. For example, running scripts in Data Studio was a bit clunky; the results pane was detached from the script pane, making it harder to organize and distinguish the results of different scripts when multiple windows were open. Connection details I didn’t need to be conscious of (such as the host name) stayed visible at the top of the script pane. Notification that a script succeeded took up an unnecessary amount of valuable white space and crowded the results of the queries down to an even narrower section of the screen. All in all, every aspect of the user interface felt like what it was: a tool written in Java and adapted to Windows, rather than one written specifically with Windows users in mind. Data Studio was a definite disappointment in the sense that I expected to be able to get tasks finished as quickly as in SSMS, but that wasn’t the case; the interface was inferior to such an extent that it slowed down my workflow from the beginning of the trial to the end. Naturally, there are probably ways to optimize the development environment that I didn’t stumble across in my sloppy examination, but the point is that they weren’t intuitive or easily found. In that sense, Optim Development Studio and Optim Database Administrator aren’t exactly optimal. Perhaps the functionality exists, but I never found any reference or typing aids corresponding to Intellisense and, worse yet, no way to directly edit data by typing into table query results. The latter seemed like a minor detail until I discovered just how difficult it would be to port my existing practice data to DB2; for a while, I thought I would end up having to type some of it in by hand, just for something to practice DB2’s SQL syntax on.

The next problem I ran into was in creating and populating my first practice database. The CREATE DATABASE statement in DB2 is not that much different from its SQL Server counterpart, but incorrect syntax isn’t what gave me an arcane DB2_CREATE_DB_ON_PATH error that had to be researched online, on my first attempt at a routine task. After some digging, I discovered that this is also IBM’s fault: in order to create a database on anything except the root of a drive letter, you must either set DB2_CREATE_DB_ON_PATH = YES in the registry at HKEY_LOCAL_MACHINE\SOFTWARE\IBM\DB2\InstalledCopies\MyDB2InstallationName\PROFILES\ followed by the server’s name, or stop the server, type db2set DB2_CREATE_DB_ON_PATH=YES at a command prompt and then start the server again.[1]
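
For anyone who runs into the same wall, here is a minimal sketch of the command-prompt version of that fix, assuming a default DB2 instance and made-up database and folder names; the registry edit accomplishes the same thing, and either way the instance has to be restarted for the setting to take effect.

    REM Run from a DB2 command window; stop the instance before changing the registry variable
    db2stop
    db2set DB2_CREATE_DB_ON_PATH=YES
    db2start

    REM With the variable set, CREATE DATABASE will accept a directory path instead of a bare drive letter
    db2 CREATE DATABASE MYTESTDB ON C:\DB2Databases

MYTESTDB and C:\DB2Databases are placeholders; substitute your own database name and folder.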

After this I was able to create some schemas and tables without much incident, until I came to the next logical step, importing my practice data into the new DB2 database from SQL Server. What I experienced next made me resolve to never swear at the SQL Server Import and Export Wizard ever again, even in the event of one of those irritating data type conversion errors that drive us all crazy. I was shocked to discover that importing my data would be nearly impossible, since the DB2 equivalent of that tool seemed to be limited to just three types of source data: ASCII, Worksheet format (WSF) and Integrated Exchange Format (IXF). The only one of those I was familiar with was ASCII, but I was not going to be reduced to importing simple text files just yet. Perhaps DB2’s utility couldn’t import anything but text, but SQL Server’s Import and Export Wizard might be able to export it in the other direction, since it has a DB2 driver. I was able to connect to the DB2 server from SQL Server, but unfortunately couldn’t export the data directly thanks to an error with the following message: “cannot get the supported data types from the database connection.” So I swallowed my pride and simply exported the data to a .txt file from SQL Server, thinking that importing it into DB2 would at least be fairly straightforward. Once again I realized my expectations were too high. My Import statements looked something like this: IMPORT FROM "C:\ThePathOfMyData.txt" OF DEL MODIFIED BY COLDEL0x09 METHOD P (1, 2, 3, 4) MESSAGES ON SERVER INSERT INTO "MySchemaName"."MyTableName" (ID, "ColumnName1", "ColumnName2", "ColumnName3"). They validated correctly, but failed with the most useful error message I have ever seen: null:null. To make matters worse, retyping a simple Import command with the Validate Database Object References option checked made the caret spin and blink for several seconds each time I did any editing at all. As with some of my other scripts, Data Studio sometimes underlined the text with red squiggles, which made me suspect a syntax or other error, but hovering over the underlined text provided no messages of the kind you would expect with Intellisense. Everything seemed to be missing or broken right out of the box in Data Studio: there was no Intellisense; right off the bat I was forced to research online how to edit the registry just to create my first database; worse yet, the routine task of importing data ended in an error with no message at all; and the user interface was not designed with efficient workflows in mind. What I found most laughable (or perhaps sad) was the reliance of a disproportionate share of the IBM software on the ancient technique of command prompts, in both the setup programs and the software they installed. Until my DB2 and Oracle trials over the winter, I hadn’t seen this many command windows since Bill Clinton and Slobodan Milosevic were still household names. DB2 didn’t even have a modern import/export tool, nor could I enter data directly into tables, so basically I was up the creek without a paddle. Perhaps I couldn’t import my usual practice data in any meaningful time frame, but I thought I could still learn some of DB2’s SQL and MDX syntax by using IBM’s sample data.
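
For readers who want to attempt the same round trip, here is roughly what the two halves looked like, laid out more readably. The bcp export is an alternative I could have used in place of the Import and Export Wizard, and every object name and path below is a placeholder from my practice data, so treat this as a sketch of the syntax rather than a tested script.

    REM SQL Server side: dump the table to a tab-delimited text file (-c defaults to tab-separated character data)
    bcp "MyDatabase.MySchemaName.MyTableName" out "C:\ThePathOfMyData.txt" -c -S MyServerName -T

    -- DB2 side: the import I ran from Data Studio; COLDEL0x09 declares the tab as the column delimiter
    IMPORT FROM 'C:\ThePathOfMyData.txt' OF DEL
        MODIFIED BY COLDEL0x09
        METHOD P (1, 2, 3, 4)
        MESSAGES ON SERVER
        INSERT INTO "MySchemaName"."MyTableName" ("ID", "ColumnName1", "ColumnName2", "ColumnName3");

In my case the statement validated and then died with null:null, so I can vouch only for the shape of the command, not the result.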

GSDB or Misadventureworks? The Fictional Company with a Plot Twist

                Just as Data Studio doesn’t hold a candle to SSMS, so too does IBM’s sample OLTP database, GSDB, fail to measure up to the Microsoft equivalent, AdventureWorks. To be fair, until SQL Server 2008 R2, installing AdventureWorks could sometimes be an adventure in itself; in one release (perhaps 2005 or 2008) there was an uproar in some Web forums about some really bone-headed mistakes Microsoft made in its installers, which went uncorrected for a long time. I remember editing the installation scripts for SQL Server 2008 by hand to get them to work (although I can’t remember if that was part of the same incident or not), but in R2, the setup process for the samples was much easier. GSDB has a lot in common with AdventureWorks, in that they both provide practice data for fictional companies selling sporting goods, but they differ in that GSDB’s install process turned out to be just as fictional as the company itself, with one of those surprise endings that leaves readers scratching their heads.

The first problem I encountered may have been due to my own misreading of a tutorial on IBM’s website for Setting up the GSDB Sample Database. I felt the following might have been written in an imprecise way, but that is a matter of aesthetics I suppose:

“This product includes a sample database called GSDB. The GSDB database contains sales data for a fictional company that sells outdoor equipment.”

“The sample database is contained in a compressed file. To set up the sample database, you must expand the file, customize a configuration file, and run a script. Alternatively, if the sample database is installed remotely, you can connect to it from your local computer.”

“Before you begin: Make sure that the following conditions are met: You installed the compressed file for the sample database. This file is installed when you select the Samples option in the installation program. The script for Windows is in the samples/GSDB/win directory. The script for UNIX operating systems is in the samples/GSDB/unix directory.”[2]

After wasting some more time, I discovered that the correct directory was actually C:\Program Files\IBM\ISWarehouse\samples\GSDB. I was confused by the wording of the tutorial, which actually meant that the samples\GSDB\win path is created after you open the compressed file I found at that location. I wrote off that lost time as a mutual misunderstanding at worst. Then I navigated to the folder and found the setupGSDB.bat file, which of course had to be opened with yet another command prompt. My next instinct was to scream, “A command prompt installer? In this day and age?” But I suppressed the urge, drawing upon the reserve of restraint I’ve had to acquire after seeing countless rave reviews the last couple of years about everything that can be done in PowerShell, without any recognition that all of it can be done much more efficiently in SMO. To each his own, I suppose; but it’s harder to make a case for doing anything in DOS-era command prompts, which aren’t even in the same league as PowerShell. My worst fears were confirmed: the installer was not merely started by a command prompt, but required the user to type perfectly, by hand, the usual unintelligible DOS-style commands that come with such programs. Some IT professionals have a preference for such programs even when GUI tools are available, but there is no question that they are horribly inefficient in terms of data entry. In this case, I had to waste about five minutes typing in such installation parameters as my Windows ID and password, which I normally could have done in five seconds. In the meantime, I watched, waited and wondered why IBM wouldn’t simply allow us to download the database and attach it.

This throwback to the old ways had another drawback that became painfully apparent, once it was too late to do anything about it. The GSDB setup program reported in the command prompt window that it was performing such steps as creating the database and the tables, then moved on to “Loading Data” and got stuck there for five minutes. Then it suddenly spit out another insanely useless error message: “Error loading tables – Quitting (4).” Unlike in a modern Windows Installer program, I could not interact with the setup program, which simply quit, nor could I check any error message or logs to see what went wrong. Many installers written today in Visual Studio, including the setup programs for SQL Server, will reverse a failed installation and may even give you a file-by-file, blow-by-blow account of what’s being added to and removed from your machine. With setupGSDB.bat, I never knew what files were installed, but I strongly suspected that whatever they were, they were never uninstalled at all, let alone in a modern, professional manner. I can’t remember how many attempts I made at re-running the installer, but if I did manage to create the GSDB database, I never did succeed in getting it populated with data. By now I was simply dumbstruck and decided to get to the point of the entire trial by moving directly on to the data warehouse components of DB2, but I was skeptical that they would work any better. MDX and multidimensional data operate at a higher level of abstraction than OLTP systems, which sometimes serve as their foundation – but if IBM couldn’t get the OLTP stuff right, I doubted that their data warehousing tools would be any match for Analysis Services. They might call those tools Infosphere, but if they couldn’t even manage their own information – handling critical errors with messages like “null:null” and “Error loading tables – Quitting (4)” doesn’t exactly inspire confidence – then I wouldn’t be trusting them with my Info.

Lack of Info Sphere

                Analysis Services beat Infosphere hands down in the only benchmark that counted in my trial: installation time. Before the trial I assumed I might have a chance to see just how fast DB2 could perform OLTP and OLAP operations, but it never performed any; it might have taken me an hour or two to get an SSAS server up and running, but it took days before I finally gave up on Infosphere, in which time my SSAS server could have performed billions of calculations. I don’t have much experience in setting up benchmark tests and was looking forward to getting a little practice, but once again installation and manageability issues got in the way. Factors like these really can’t be overlooked in a contest between any database servers, for the same reason that you can’t have a fair race between two sprinters if one of them is stuck back at the starting line, trying to tie their shoes. I don’t doubt that there are Infosphere gurus out there who can push their systems as fast as SSAS, but I can say for certain that Microsoft has trounced IBM in the manageability category on their OLAP systems, just as they have in OLTP.

At this point I expected a parade of errors, which commenced as soon as I tried to log in to Infosphere Warehouse for the first time. According to IBM’s documentation, I should have been able to connect through http://localhost:9080/ibm/warehouse or by substituting the name of my computer for localhost, but it didn’t work. I am a networking dunce, so I was quite proud of the sneaky workaround I devised: I copied the address for DB2 Recovery Expert from my Internet Explorer Favorites, which used my IP address, and edited it so that it resembled http://192.168.254.1:9080/ibm/warehouse, which redirected me to http://192.168.254.1:9080/ibm/login/login.jsp. I at last had the right page, but couldn’t log in under my DB2Admin credentials. Luckily, my Windows user name and password did the trick.

The next problem came in the Infosphere Warehouse Administration Console, where I immediately received the following message: “DWE72000E – Error with Loading Service – Ensure that the WebSphere Application server is running and that you have a connection to the database.” What made this problem frustrating was a user interface mistake that left me unable to click the OK button to get out of the modal dialog: when I clicked the button, nothing would happen. Eventually, I was fortunate to spot the right edge of a second, almost completely hidden error box jutting out of the no man’s land at the top left corner of my screen. After dragging it to the middle of the screen where it should have been, I was able to read the messages saying there was a “Grid Error,” plus a non-specific HTTP Request Error and an “Error with Loading Service.” Once they were in the center of my screen where they belonged, I could click the OK buttons and escape from the whole mess.

The next logical course of action seemed to be to click Start the Server under the Websphere Application Server entry in the Start menu, which immediately sent a shiver up my spine when it opened another command prompt. Yet another common drawback of these ancient DOS methods from 1995 manifested itself right away, as the window simply spit out a mass of text I couldn’t read, then disappeared. I ran it a couple of times and was able to pick out the words Access Denied. So I decided to start it through Services under Microsoft Management Console, which told me it was already started, contrary to what Infosphere reported. Not being one to look a gift horse in the mouth, I went back to Infosphere and started perusing the controls, looking for a way to create and populate my first warehouse. That would require a connection of some kind, of course, but the Manage Connections tab was cumbersome to use. It makes the user look up the host name, port number and other information for a particular database, some of which is hidden under the hood in Analysis Services; as a result, it took far too long to enter the data for a simple connection string. When I finally got the connection information typed in correctly, I received a message saying that my connection to the Administration Console itself had timed out, forcing me to log back in, then type everything over from scratch.
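
In hindsight, a quicker way to find out what Windows thought the application server was doing would have been to query the service directly from an elevated command prompt, rather than bouncing between the Start menu shortcut and the Services console. The service name below is only a guess at the pattern IBM uses, so look up the exact name in Services first.

    REM List installed services whose names mention WebSphere, along with their current state
    sc query state= all | findstr /I "websphere"

    REM Then query the one you find by its exact service name (this one is hypothetical)
    sc query "IBMWAS70Service - MyNode01"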

The Manage Logs tab pointed me to the DS_System.log, which actually had some information in it, such as several references to an error message along the lines of “User ID revoked: The system cannot obtain an SQL connection for the metadata database for the following reason,” accompanied by code -4214 and SQLState 28000. The Configuration tab listed no services for me to configure, while the SQL Warehousing and Cubing Services tabs were completely blank, without controls or text of any kind. The Mining tab had options to Manage Mining Models and Manage Cached Mining Models, as well as to Enable Database, although no databases appeared in the list provided. Much of this was of course not out of the ordinary, since I hadn’t yet added any warehouses, let alone mining models. Until I could create one, there wasn’t much I could do. Since all of the installation processes associated with DB2 seemed so prone to error, I decided to find a tutorial and follow it to the letter, promising myself that I wouldn’t swerve an inch from the directions, to eliminate the possibility of personal error. Once again, however, IBM’s own directions were either outright erroneous or inapplicable to the real-world situation I found myself in. The tutorial I found directed me to “Open the Design Studio: Click Start > Programs > IBM InfoSphere Warehouse > ISWCOPY01 > Design Studio. ISWCOPY01 represents the name of the instance that you specified when you installed InfoSphere™ Warehouse.” Unfortunately, Design Studio wasn’t listed where IBM said it would be. Instead, I navigated to Start > Programs > IBM InfoSphere and started Data Architect. The next direction in the tutorial was to “Open the New Project wizard: In the Design Studio, click File > New > Data Warehousing Project.” Of course, I was in Data Architect rather than Design Studio, so that may have accounted for why I couldn’t find an option there to create a new Data Warehousing Project. I searched Data Studio and Optim Database Administrator in vain for the same thing. At this point, I called it quits, with my 15-day trial of Optim Development Studio and Optim Database Administrator about to expire – nonsensically, before the longer trial periods of some of the other DB2 software that required Optim or Data Studio to access them. Even on its way out, DB2 reported incorrectly that I had zero days left in my Optim trial, when in fact I had another day before it actually expired.

The Database Software of the Undead

                Yet like a zombie, DB2 never really expired; part of it is still on my machine and it just won’t die. I’m tempted to use the brute-force method of simply deleting the remaining files by hand in Explorer, but am hesitant, because DB2 has already come back to haunt me once after it was uninstalled, kind of like Michael Myers at the end of a Halloween movie.

I had reached a dead end with the problem of creating a new warehouse, which I probably could have fixed if I had not also reached my wit’s end at the same time as the expiration notice. After all this, I just wanted every trace of it off my main development machine at home, which is all I had at the time, with my backup being down for the count. I went to Programs and Features and began uninstalling all of the IBM software, some of which disappeared without a peep. Some of it wouldn’t uninstall without a little coaxing, like Infosphere Warehouse, which reported that “The Websphere Application Server profile that was installed with the product is running. You must stop this profile before you continue.” I tried to start and stop Websphere through the Start menu and got the Access Denied message in the command prompt windows both times, I guess for old time’s sake. So I stopped Websphere in Services, tried to uninstall Infosphere again and got the same message, that Websphere was still running. I finally got out of this logjam by running uninstall.exe from the Websphere/AppServer/uninstall folder, which informed me that two java.exe processes were running and interfering with the uninstall, so I quit them in Task Manager. After that, DB2 finally began to die – or so it seemed.
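
For anyone stuck in the same loop, the Task Manager step can also be done from an elevated command prompt, which saves time if the uninstaller makes you repeat it; a minimal sketch:

    REM List any running Java processes that might be holding on to the WebSphere folders
    tasklist /FI "IMAGENAME eq java.exe"

    REM Forcibly end them, after making sure nothing else on the machine depends on those JVMs
    taskkill /F /IM java.exe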

To this day, DB2 Advanced Enterprise Server Edition is still listed under my Programs and Features, which reports that it still occupies 1.3 gigs of disk space. Together, the IBM folders under Program Files and Program Files (x86) still take up about a gig. I tried to uninstall it all repeatedly, but starting the process through Programs and Features always initiates a Windows Installer (msiexec) package, which I can watch quit almost immediately in Task Manager. Whenever I start the uninstall process, a message in the Windows Application Log reports that “Product: DB2 Advanced Enterprise Server Edition — Removal failed.” Really? The Application Log entry is usually accompanied by another message a few minutes earlier: “Windows Installer removed the product. Product Name: DB2 Advanced Enterprise Server Edition. Product Version: 9.7.400.501. Product Language: 1033. Manufacturer: IBM. Removal success or error status: 1603.” I assume that 1603 is an error code, but the first sentence seems to say it was removed, which is in turn contradicted by the Removal failed message. Once again, none of this gives a clue as to why it failed.
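
For what it’s worth, 1603 is the Windows Installer’s generic “fatal error during installation” status, which by itself explains nothing. If I ever work up the nerve to try again, the plan is to rerun the removal by hand with verbose logging turned on, so that there is at least a log to read afterward; the GUID below is a placeholder that has to be looked up first.

    REM Look up the product code for the stubborn DB2 entry (at an interactive prompt; double the % in a .bat file)
    wmic product where "Name like 'DB2%'" get Name, IdentifyingNumber

    REM Rerun the removal with full verbose logging written to a file
    msiexec /x {00000000-0000-0000-0000-000000000000} /l*v "%TEMP%\db2_uninstall.log"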

I would have deleted the files manually by now, except I now wonder what unforeseen side effects that might have. The whole caper turned from a simple waste of time into an ordeal only after I had uninstalled most of DB2. Several days after I finished uninstalling everything except DB2 Advanced Enterprise Server Edition, I suddenly suffered catastrophic errors in Visual Studio. One of my Dataset Designer files began disappearing intermittently, forcing me to restore it from backup. This was followed by unresolved reference errors on every user class and data object in one of my projects, which I noticed occurred shortly after System.IO.FileNotFoundException errors in Visual Studio. In the fine print I found a mysterious culprit: “Could not load file or assembly ‘IBM.Data.DB2, Version=9.7.4.4, Culture=neutral’” etc. Later, I received a related error message in Visual Studio: “Custom tool error: An unexpected error occurred in the IBM DB2 and IDS Data Provider for .NET Framework. Please contact the provider vendor to resolve this problem.” I traced it all back to IBM Database Add-Ins for Visual Studio, which I hadn’t yet uninstalled along with the rest of the DB2 software. Running the uninstaller reported that “the operation could not be completed. the requested operation requires elevation,” yet it apparently succeeded, since it disappeared from Programs and Features and Visual Studio started working again. The particular project I received catastrophic errors on accessed a SQL Server back end, but in no way, shape or form referenced anything remotely related to DB2. Other projects I edited at the same time also accessed the same SQL Server instance, but did not suffer catastrophic reference errors. If you’ve ever had to edit Visual Studio datasets and code-behind by hand in order to fix corrupt references, you’ll understand the terror that hundreds of sudden, unexplained reference errors can strike into the heart of a programmer.
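
The lesson seems to be that the provider registration outlived the product. If you suspect the same thing on your own machine, a couple of quick checks can confirm whether the IBM ADO.NET provider is still registered; the paths below assume .NET 4.0 on 64-bit Windows and gacutil from the Windows SDK, so adjust them for your setup.

    REM See whether the provider assembly is still in the Global Assembly Cache
    gacutil /l IBM.Data.DB2

    REM See whether machine.config still lists the provider under DbProviderFactories
    findstr /I "IBM.Data.DB2" "%windir%\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config"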

I suppose that if I had been able to successfully install and manage DB2 with the same ease as SQL Server, I could report more good features about it. That was originally my plan, to spend a few hours trying out their versions of SQL and MDX and to post here on any contrasts in the ways they handle multidimensional data; after all, anyone who appreciates MDX enough to include it in their own OLAP software can’t be half bad. I did see a few things I liked, such as sequences, which IBM apparently implemented before Microsoft included them in SQL Server 2012 Release Candidate 0. Tablespaces seem to be a good way of conceptually organizing catalogs, users, temporary space and the like. I think I liked the DB2 Memory Visualizer more than any other component I tried. There are probably many more good features that I didn’t get a chance to try, but that’s the point – I wasn’t able to evaluate all of the strengths of IBM’s software because they got a lot of the basics wrong. As I pointed out before, all the OLTP and OLAP benchmarks in the world are useless if you can’t get the software running, which I suspect indicates that it also can’t be trusted to stay running in a production environment. I would not want to be responsible for a DB2 database on the job, knowing that it would take me far longer to troubleshoot it in an emergency. This is more than just a matter of personal preference, or of having more experience with SQL Server; there are serious problems with DB2 that I never expected to find, which go much deeper than my own inexperience.
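
Since sequences were one of the few bright spots, a quick side-by-side seems only fair. The syntax turned out to be nearly identical once SQL Server 2012 added the feature; the object names below are made up, but both statements follow the documented CREATE SEQUENCE and NEXT VALUE FOR pattern.

    -- DB2 9.7: create a sequence and fetch the next value
    CREATE SEQUENCE OrderNumberSeq START WITH 1 INCREMENT BY 1;
    VALUES NEXT VALUE FOR OrderNumberSeq;

    -- SQL Server 2012: the same idea, with SELECT in place of VALUES
    CREATE SEQUENCE dbo.OrderNumberSeq START WITH 1 INCREMENT BY 1;
    SELECT NEXT VALUE FOR dbo.OrderNumberSeq;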

What I did gain is a new appreciation for SQL Server. I like to complain about its jumpy version of Intellisense, the lack of axis support for multidimensional data in the GUI and the usual data type errors that come with the Import and Export Wizard, but that’s kid stuff. The grass isn’t always greener on the other side; in fact, in some places it’s brown. I’m sure a DB2 wizard could teach me many things about a lot of cool features I’m missing out on, but the problem is that it will take longer to teach them and to put them into practice than it ought to because of DB2’s manageability problems. Highly experienced users of DB2 and SQL Server can make all kinds of valid points about the strengths and weaknesses of both programs, but the point is that an inexperienced user like me can’t find out how to address problems with DB2 as quickly as they can with SQL Server. Problems with installation, usability and manageability are apparently something DB2 has in common with Oracle, judging from similar difficulties I had with a contemporaneous trial of their software. At least I was able to evaluate parts of IBM’s software, but the same can’t be said for Oracle, which gave me even more insoluble errors much earlier in the installation process. Both companies have apparently deteriorated in this area, because I was able to install and use much earlier versions of DB2 and Oracle on Windows machines almost a decade ago. Judging from that experience, I never expected to be writing this week’s mistutorial, or next week’s: Thank God I Chose SQL Server part II: How to Improperly Install Oracle 11g.


[1] I found this solution at this page on DB2Rookie’s WordPress Blog. See the post by the user named szando.

[2] This is a verbatim quote, but I adjusted the headings and line breaks from the original website to fit into the format I usually use for direct quotations.
