From Manager to Developer – A Necessary Diversion

About three years ago, I became re-enamored with software test automation. Oh, I had been using various packaged tool sets through the years with moderate success and barely acceptable support. But with the gradual adoption of Agile development, the open source community responded with superior tools and, better yet, superior community support.

We needed automation in order to take our QA team from good to great. We were wasting so much time and energy on regression testing that, in some cases, management was choosing to forgo regression in favor of a deadline. Any software development professional knows that regression is where the really bad bugs are. Modifying a top-level class and then skipping regression testing is tantamount to assuming that your parachute will deploy properly without checking the pack…since it worked last time. Anyway…I digress.

I dove in as deep as I could, but soon found out that building an automated test framework is nothing less than a full-time development effort, and it could not be done while attending three to five meetings per day, managing personnel and filling out performance reviews. There was but one thing to do. I asked our CIO to allow me to work on the test automation framework full-time. This was a risky decision that required me to give up a fairly cushy management job. But I have never been one to back down from what I think needs to be done and what is best for the organization.

Without much hesitation, I selected Ruby, Watir and Rasta to build our infrastructure. Details about that architecture can be found here in my blog. So on down the road I went, coding and enjoying life outside of the conference room. Within months, the first part of the framework was complete. We could run thousands of regression tests on our Web applications with the touch of a button (and not even that once I configured it to run inside of Hudson).

Then something strange happened at work. A large group of our Java developers, including the manager, left in a matter of a few months. It was nothing that we were doing or were not doing as a company. Each of them had a great opportunity and had to go. A few months before this happened, I had indicated that maybe I could move over to the development group to finish building out my test automation framework, act as the “build guy” since I was now the resident Hudson expert, and work my way into a Java development role. After the exodus, I was officially asked to join the development team as a Java developer. I would not be working on my test framework or anything else I was familiar with. No, I would be working on our main-line products. I would need to learn not only much more Java than I currently knew (specifically enterprise-specific features), but also JSP / Struts, JSF, C#, Hibernate and other packages that our multiple systems require. Was I getting lucky or was I getting set up to crash and burn? My confidence was suspect, but I took the role because that is where my company needed me and they had faith in my skills and ability to learn.

But the whole time I am thinking, “…but I am supposed to be a ‘manager’, right? I mean, that is where I have the most experience and training. Is this the right thing for me to do right now? How will this affect my ‘leadership’ prospects?” After thinking through this decision, I realized I would be a fool to pass up such an opportunity.

First off, I’ve got the management deal down. Between my experience, degrees and certifications, and moreover the fact that I have successfully managed bands of insane, flaky and usually addicted musicians and software testers for years, I am more than confident in my ability to lead all sorts of teams. But there was always this nagging question: “What do you REALLY know about the process of building and writing the code you or your team is testing?” While I have written many lines of Perl, Ruby, SQL, batch files and other utilities to make life easier for me as a tester, I had never been a part of the “creative” side of software development. I had never had a bug written on me! That’s like being an art critic who has never picked up a brush, or a music critic who can’t play a D chord on a six-string (and since most of them can’t, I don’t trust them as far as I can throw them).

So here I am, three months after I made the jump. It has been a very enlightening experience, to say the least. Besides knocking my ego down about ten notches, I go home with a sore brain. But it feels good…like I actually created or fixed something instead of just whining about it. Besides doing my music thing and hanging with the fam on the weekends, I love to work on my car and on projects to fix up the house. I crave that feeling of getting something tangible completed, whether it is a shelf in the laundry room or tiling a floor. I get that same satisfaction after fixing a bug or adding a new feature to one of the many products we support. The learning curve has been steep, but I can now hammer nails. Soon, I will be able to set four-by-fours in concrete, tape and float drywall and put in new bathtub fixtures. Growth is good.

So to wrap up, I would recommend to all of those who are in software quality assurance: give yourself a chance to be a developer. I don’t mean writing code on the side or even test automation. I mean an honest-to-goodness developer with a development manager, deadlines, and code reviews. You will become a better tester. And when you make the move, you will already have a leg up on development because you will probably be the only developer on your team to actually test your fixes thoroughly before checking them into the trunk! Actually, I take that back. Now that I am a developer, I can see that the problem is that devs are typically focused on a very narrow scope of functionality. This fact reinforces my belief that developers shouldn’t also be responsible for testing, if money and time allow.

I don’t know how long I will wallow around over here on the “dark side”. Given my natural ability to become bored after only a single repetition of anything, I would imagine that the first time I get chewed out for not documenting my code, or get stuck interpreting requirements into technical specifications, I might start thinking that maybe those all-day meetings really weren’t that bad, especially when they ordered in those egg-salad sandwiches from Jason’s Deli. I could eat a hundred of those.

The Intuitive Manager

For my first post on Yowsbrain pertaining to management, which I think is also applicable to project management, I thought I would pick a topic close to home. A few years ago, our company invested in some “profiling” services in an effort to increase the effectiveness of new hires by determining our current team’s “work style” makeup. The purpose of this exercise was to help us identify our individual conflict styles, decision-making styles and how these change under stress. The first goal was to see how our team was currently balanced. We could then, theoretically, use this information to hire people who would either fit into, or fill gaps in, our team, thereby increasing our effectiveness and reducing turnover.

I won’t get into how that whole experiment went. But I did learn a few things about myself along the way that were confirmed in another round of testing that I put myself through with the help of a career specialist here in Austin named Dr. David Litton. Specifically, I learned that I have been blessed (or cursed) with the gift of intuition. My natural tendency when making decisions is to quickly gather information, quickly analyze it, and then go with my gut. The “I-Speak Your Language” model of defining behavior styles, which is based on the Myers-Briggs Type Indicator (MBTI), tags me primarily as an “Intuitor”. I should also mention that the tests revealed that my secondary behavior style is that of a “Thinker”. This is a good secondary trait to have, but since my primary is Intuitor, I want to focus this blog entry on that type of manager.

The decision-making process used by an intuitor relies heavily on past experience and gut feelings. I suppose that this justifies, for me anyway, the high salaries of many upper-level managers. Their “gut” is their value-add. This style of decision making contrasts with that of leaders who rely exclusively on data and the current state of affairs. These are folks with a primary “Thinker” behavior style. Which style of manager is best? From my perspective, it depends on the types of decisions being made by that particular manager, paying heed to the age, size, purpose and culture of a given organization. For a young organization working with cutting-edge technology, where gobs of data, models and experience are unavailable to the decision-maker, an intuitive manager would be a wise choice. In a more established organization with years of data, facts, models, trends and other information available to the decision-maker, one might want to look at a “Thinker”.

For any type of manager making a critical decision, I feel it is wise to collaborate with, or at least get the opinion of, another stakeholder or stakeholders who may have a different behavior style. According to my graduate professors at the University of Texas, diversity is key in successful teams. They were not talking about “ethnic” diversity. In general, they were referring to “style” diversity. I don’t want to go too heavily into teams in this blog entry, however. My point is simply that since one style is better under some circumstances than others, one should leverage the value of style diversity as input when weighing a decision.

For example, let’s say I have a rather important hiring decision to make. I have a pool of candidates to choose from that I have selected using my “intuitive” style of decision making. It is very efficient for me to use my “gift” (in this case) to quickly narrow down my choices. At this point, however, I need to confirm that the data I used and the gut feeling that I trusted were both valid. For this, I usually ask someone who is a “Thinker” or even a “Sensor” (someone who is more sensitive to feelings and emotions than I am) to evaluate the candidates. The very best scenario involves each type of behavior style weighing in on the candidate.

So what are the positive contributions that an intuitor can make as a leader in their organization? The ability to solve problems quickly and creatively ranks high on the list. As this “idea-oriented” behavior pattern would indicate, an intuitor has faith that the best solution will inevitably present itself, and that their job is to be there to recognize it, implement it and move on to the next problem. As a software QA manager, I am naturally drawn to automation, virtualization and cutting-edge testing techniques like Exploratory or Session-Based testing. I can’t stand to see my fellow team-mates or subordinates doing things the “hard way”. If I have to do something more than once, it gets automated, period. If I have to wait a week to get a server configured, I am volunteering to run the project team dedicated to virtualizing our test environments, so that we can remove this dependency on our operations team and do our job without being held up next time. I’ll take the risk that the solution might fail, because the status quo is simply not acceptable. Rarely do these solutions fail, however, because my intuition guides me through uncharted waters that would drive a “Thinker” or “Feeler” insane. I’m comfortable in the unknown because I trust my instincts. The other behavior styles rely primarily on feedback and data…neither of which is always available in the high-tech arena.

Another strength of the intuitor is the ability to see the future. I call this my “soothsayer” sense. It’s not so much that an intuitor can see the future as that he or she can take facts and project them out further than most people. Their vision is not normally clouded by the fog created by tight deadlines, fuzzy data or being “in the weeds”. Intuitors can see how things will fit together down the line in spite of these distractions. Sometimes, being under stress even helps the intuitor focus on creative solutions with more zeal and determination than a less severe situation would. I have been able to help steer our organization away from bad decisions, bad technology and poor hires because of this skill. But intuitors beware: this “asset” can make other people on your management staff uncomfortable at times. I would advise all intuitors to keep a journal. It will help your credibility in the future if you can show that your “gut” is something your peers can trust and on which decisions can be made.

Intuitive managers should use facts when they can. But sometimes, technology is just too new, or new roads are being paved, and there just isn’t much to go on. In this case, having an intuitor on your staff is invaluable. Intuitors should also be called upon to assist in long-range planning. As I mention above, getting an unclouded view of the future is difficult for most leaders, especially those who are constantly distracted by the latest stock forecast or budgetary constraints. And if you are an entrepreneur looking to grow your business, you couldn’t ask for a better partner than an intuitor. They will see what you can’t. They will be objective and tell you what you may not want to hear. But you need to hear it. Heed their warnings! Take comfort in their direction. And so I’ll end this little exposé with a message to my fellow intuitive managers. Then again, you already know what I was going to say…right?

“Gotchas” to Avoid When Using Hudson with VSS

While most of the hot-shot software companies have charged into source control management for the twenty-first century by using systems such as Git and Subversion, many of us “back-orifice” workers, whether we are part of an IT staff supporting a legacy system, or part of a development team that can’t seem to make time for an upgrade, are stuck using antiquated source control systems like VSS or even StarTeam. Thankfully, some of the other new-fangled tools that boost software quality and team productivity, and reduce development cycles, have kept us “grey suits” in their sprint planning. Either that, or they have made their platforms extensible enough that other saps like me can write plugins to adapt to our situation. One of these systems is Sun’s fantastically simple-to-use continuous integration tool, written by Kohsuke Kawaguchi, known simply as “Hudson”.

There are blog posts a-plenty on installing and configuring Hudson…even in a (gasp!) Windows environment. Again, not my choice, but one that I have to live with if I expect our IT staff to back up and maintain my servers. When I went looking for more information on running Hudson with VSS, however, I came up empty. The point of this brief post is to go into some of the details and “Gotcha!”s that I came across while setting this up for our nightly integration build and test processes.

A quick glance at my setup will reveal Hudson being served up on the Winstone Web server (which comes bundled with Hudson) running on Microsoft Windows 2003. I have VSS installed as a component of Visual Studio, so it lives under “C:\Program Files\Microsoft Visual Studio\VSS\win32”. This is important, as most of your VSS work, whether performed by Ant, Hudson, or from the command window, will be done by the command-line-based “ss.exe”, not the VSS client. I have Hudson running at the root of “C:\”, and the various required versions of Java, Ant and other tools and compilers installed in every other crack and crevice on this machine. Did I mention this is a “virtual” machine? Indeed, this machine is a guest VM on a VMware® ESXi server. So far, we have not had any major issues with this setup.

This VM is currently building all of our applications. In a future blog entry, I will discuss our plans for virtualizing each of the build “slaves”, which will eventually free up resources on this VM and allow us to perform both simultaneous builds and real-time integration testing. But for now, storage is at a premium, since all of our products live in separate repositories. Each repository has to be checked out at the root level in order to do a build. This takes up lots of space and is a very inefficient way to handle source code…but VSS has limitations that we can’t seem to work around. Specifically, its weak merging and tagging capabilities render it useless for these purposes. Because of this, the build jobs are forced to modify environment variables, among other things, as a precursor to any build being executed.

I also have the VSS plugin installed and configured to “talk” with our repositories. This plugin has proven to work very well for polling the repository and monitoring usage. It is important to note that this plugin does not handle “checkouts” or “gets” from VSS. It will, however, check for Ant files in the root of your repository. If you have Ant tasks using <vssget> to pull your source down, you can simply call into that from a Hudson task and get your source. We don’t have those tasks written, so I “manually” call out to ss.exe directly from a Hudson command-line script like this:

set SSDIR=\\vsshost\VSS\Product\Product
ss get $/Project -RQ

I will get to why I pass the -Q switch in a minute…although it doesn’t do exactly what I want. This seems to work, as long as you are aware of the “gotchas”. These gotchas may also occur with an Ant task, since Ant uses ss.exe under the covers. Now, on to the “Gotchas”:

Gotcha #1 – Environment

The first thing you need to check, if you have a setup similar to mine, is your environment variables. As you can see above, ss.exe depends on “SSDIR”. If you are doing multiple builds on a single machine, it will be necessary to modify this environment variable each time you do a checkout from a different repository. Hopefully, your development team doesn’t create new physical repositories when they “branch”. But ours does. The other biggie is simply to make sure “ss.exe” is in your system path (especially if you are building on a slave). It’s a minor detail, but it got me once or twice.
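Since the rest of our framework is Ruby anyway, here is a minimal sketch (paths, project names and the helper itself are illustrative, not our real setup) of pairing each get with the SSDIR it needs, so a job can never run ss.exe against the wrong repository:

```ruby
# Hypothetical helper: pair every "get" with the SSDIR it requires.
# ss.exe reads the repository location from the SSDIR environment
# variable, so each build job must point it at its own repository
# before pulling source.
def vss_get(repo_unc_path, project = '$/Project')
  env = { 'SSDIR' => repo_unc_path }   # e.g. \\vsshost\VSS\Product\Product
  cmd = "ss get #{project} -R -Q"      # -R recurse, -Q quiet
  [env, cmd]
end

# One call per build job; if your team branches into new physical
# repositories, each branch simply gets its own SSDIR here.
env, cmd = vss_get('\\\\vsshost\\VSS\\Product\\Product')
# ENV.update(env)   # uncomment on the build box...
# system(cmd)       # ...where ss.exe is on the PATH
```

Keeping the environment and the command together in one place also makes it obvious, when a build hangs, which repository the job thought it was talking to.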

Gotcha #2 – Checkout

VSS gets hung up during the build process if the \workspace directory has read-only properties. The workaround is to use the Windows “attrib” utility to clear the read-only attribute on this directory. The biggest problem with this and some of the other gotchas is that there is no feedback in the Hudson console view. The build will just hang on “getting latest source from vss…”. If you are not blowing away your workspace directory with each new build, you can set this permission manually, and Windows should retain permissions on all subsequently checked-out directories, files and sub-directories.
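If you do blow away the workspace each build, the attrib call can be scripted as a pre-checkout step. A tiny sketch (the workspace path is an example, and the helper is hypothetical):

```ruby
# Hypothetical pre-checkout step: build the attrib command that clears
# the read-only bit on the job's workspace so ss.exe doesn't hang
# silently. Substitute your own job's workspace path.
def clear_read_only(workspace)
  # -R clears read-only; /S recurses into files, /D includes directories
  "attrib -R \"#{workspace}\\*\" /S /D"
end

cmd = clear_read_only('C:\hudson\jobs\nightly\workspace')
# system(cmd)   # Windows-only, so left commented out here
```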

Gotcha # 3 – Quiet Checkout

Whenever you set up a new build job and do a VSS checkout, you may get prompted to re-set your working directory. Again, there is no notification in the Hudson console that this is the problem. But if you try to do a checkout manually using ss.exe from your \workspace directory, you will see the prompt. Once you answer it (say “Yes”), the builds will not stop on this step again. Even still, I tried to find a “quiet” parameter for ss.exe. There is one, but it doesn’t seem to suppress this prompt. I still get prompted.

Gotcha # 4 – Multiple Confirmation Questions

This gotcha follows closely behind #3 because it got me even though I was aware of #3. In fact, sometimes when you change repositories, VSS will prompt you twice to confirm that you indeed want to change your working directory. I am not really sure why Microsoft thinks I can’t make up my mind about where I want my source to go, but they really want to make sure I have the safety off before pulling the trigger.

I don’t know if these few tips will help anyone else in the community of Hudson users. But I think there are more software shops than we would like to admit who are still using VSS. Hopefully, these tips will help get you past the common problems with using VSS with Hudson.

A Data-Driven Automated Test Design Using Watir and Rasta

I originally wrote this as a white paper, but have decided to go ahead and put it here in my left-brain blog as well. Watir and Rasta work very well together and I hope this brief explanation of how I implemented my solution is useful for some folks. Anyway, here goes…

In order to remain flexible to change and provide the scalability necessary to drive hundreds of variations of the same test case through a system under test, test automation which drives software applications from the presentation layer must be designed such that variable data and application object information are separated from the automation code. Because of its flexibility, rich library of modules and enthusiastic user community, Ruby, an open-source, pure object-oriented and interpreted development language, has been adopted by many automation engineers in the Software Quality Assurance industry. Ruby extensions like Watir and Rasta have been created by the open-source community to meet these requirements for test frameworks with minimal effort and zero cost. This paper describes just one way to leverage these open-source Ruby “gems”, as well as the OO properties of Ruby, to quickly create a flexible and scalable automated test suite.

I. Introduction

Given the time and effort many organizations now devote to test automation, software test automation engineers must frequently ask themselves if they are getting the most test coverage possible out of the time spent developing their automated test solutions. Many times, I have worked the “problem” of driving our various AUTs (Applications Under Test) through their “green paths” only to realize that, after successfully accomplishing this task, I had not created very many real tests! Sure, driving the application itself through various high-level use cases is a valuable form of regression. But I tend to get so hung up in making our functional test framework flexible and scalable that sometimes I have to stop and remind myself to take advantage of the opportunities that my whiz-bang test design has provided.

As automated test developers, we need to be digging deeper into the various parts of the AUT whenever it is practical to do so. The flexible tools we create need to be leveraged for all they are worth, not just appreciated for their glorious agility. Neither the developers nor the management team care how elegant and flexible our frameworks are. As a colleague of mine, Jim Mathews, likes to say: “They just want to see lots of ‘green’ cells in a spreadsheet.”

II. Application Under Test

For the remainder of this post, I will discuss various thought processes, decisions and automation solutions as they pertain to a particular Website that I describe in this section.

My company sells insurance policies. Our primary “Web Portal” is used by insurance agents to remotely enter data and receive a quote for an insurance policy customized based on their selections in the application wizard. The returned quote includes an estimated premium and a commission for the agent.

The key feature in the Web application is the quote-generation “wizard”. It is this feature that will enable the strategy outlined in this paper to work.

Many Websites use some sort of a “wizard” approach to finding a solution for their user. Shopping cart applications, store locators and car value estimators are just a few examples of how Web wizards drive customers to their desired information, purchase or answer. It is the uniformity of this Web design pattern that allows the test automation approach described in this paper to have more global application.

III. Initial Solution

Recently, I completed a first pass through our latest Web application. I was quite proud of myself for successfully leveraging a new Ruby-based data-driven test framework called “Rasta” [1]. Rasta was developed for the open-source community by Hugh McGowan and Rafael Torres. The acronym stands for “Ruby Spreadsheet Test Automation”. I used Rasta to assist in driving my company’s Web portal application through the creation of an insurance claim.

Rasta is a FIT-like framework for developing data-driven test cases in Excel. It allows a test developer to extract the data needed to run various test cases from the test code which, in the case of Rasta, is known as a test “fixture”. The test fixture is the code which looks into the spreadsheet in order to determine which test methods will be executed and which data will be used in those test methods.

Test cases are executed based on the organization of the spreadsheet. The execution order is from the top-left of the first worksheet to the bottom-right of the last worksheet in the target workbook. The individual worksheet tab names refer to the classes you develop in your Ruby-based test “fixture”. You can design your individual test procedures any way you wish within a horizontal or vertical construct in Excel. This means that your first design decision is how you want to lay out your test cases. I typically choose the vertical approach, meaning test cases are isolated into columns that get executed from left to right (see Figure 1).


Figure 1 – Test Case Layout

This approach saves me from having to scroll left and right so much whenever I am designing and testing the framework. If you are used to writing your test frameworks from a test/unit perspective (like most Watir and test/unit tutorials describe), using Rasta will force you to think a bit differently about how you organize your test objects in your Ruby code. The test designer needs to look at the spreadsheet to set up the order in which test cases will be executed, instead of the order in which test methods are laid out in test/unit classes. Data is passed from the spreadsheet into the test fixture (which is simply a collection of Ruby classes) via standard Ruby accessors. Methods are invoked by Rasta in the order they are listed in the spreadsheet. Once again, organization in the spreadsheet matters because it reflects the order in which the test methods that act on the AUT will be executed. If you don’t get this right, you will be trying to hit a “Pay” button before logging in!
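To make the worksheet-to-class mapping concrete, here is a stripped-down sketch of the kind of fixture class that might sit behind a “Login” tab. The class, accessor and method names are illustrative, not our production fixture, and the real methods drive the browser with Watir:

```ruby
# Illustrative Rasta-style fixture. A worksheet tab named "Login" maps
# to this class: spreadsheet rows labeled "username" and "password" set
# the accessors, and a row labeled "login" invokes the method -- in the
# order they appear on the sheet.
class Login
  attr_accessor :username, :password

  def login
    # The real fixture would drive the AUT here via Watir, e.g.
    #   @browser.text_field(:name, 'user').set(@username)
    #   @browser.button(:name, 'submit').click
    # For this sketch, just record whether we had data to log in with.
    @logged_in = !(@username.nil? || @password.nil?)
  end

  def logged_in?
    !!@logged_in
  end
end
```

Because Rasta fires the accessors and methods in sheet order, a sheet that listed “login” above the data rows would invoke the method with nil data — exactly the ordering trap described above.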

One of the neatest features about using Rasta, and this goes for any data-driven test framework that uses Excel, is that the test developer can leverage all of the built-in features of Excel to help simplify test case creation. This comes in handy when handing the test framework off to a non-developer for maintenance and additional test case creation.

For example, let’s say a form in your AUT contains several drop-down selection boxes. These selection options are, by design, limited by the configuration defined in the AUT (i.e. a user can only select “Texas”, “Montana” or “Oregon” within the “State” selection box). When designing the test scenario in Rasta, these values can be linked via Excel’s built-in cell validation feature so that only valid option values can be selected as test case attributes in that particular cell in the spreadsheet. Figure 2 shows how this “lookup” table may be laid out.


Figure 2 – Select Options

The column headers represent the “name” which is given to the data set listed below. This name is defined in Excel and is associated with this set of cells. Excel has pretty good instructions in the built-in help for how to create these cell range “names”. Once a name (cell range) is defined, the Excel cell validation feature can then be used in the cells of your test scenarios (Figure 1) to restrict the data in your test scenario to only those values that are valid in the AUT.

Even though this approach takes more up-front effort, once developed it allows a tester to simply copy the whole test scenario over to a new column, which generates a new, unique test case in a matter of seconds by merely toggling one or more of the pre-defined cell options. In this way, the test developer can generate test cases without even having access to the AUT. One could put even more logic here so that the allowable values are further restricted based on other selections made in that particular test case. This, too, could be done using Excel formulas for cell validation.

IV. Design Improvements

In the Introduction section of this discussion, I described a problem pattern that many test automators, myself included, seem prone to repeating over and over again. That pattern is to develop an automation framework where we drive our AUT with a broad use case in mind. We are interested mostly in driving the app with “no hands” instead of designing our frameworks to dig deep into each particular feature.

Lucky for me, I am privileged to have the opportunity to work with colleagues who dwarf my test experience by many years. These are true testers at heart, whereas I tend to get carried away by the exercise of test development. When I presented my framework to fellow tester Barton Layne, a past author of articles in Better Software Magazine among others, he exposed my lack of vision for testing the AUT and my selfish desire to automate rather than test.

I knew that I had fallen into the trap I describe above. I was carried away with the “cool” factor and didn’t see the opportunity to “dig deep” that was staring me squarely in the face. But I’m not one to run away from a challenge. So I went back to think more about how I could make the test framework more “vertical”.

The first problem that quickly became apparent to me was the way I had implemented my tests in the spreadsheet. In my initial design, each use case is represented by a tab (worksheet) in the spreadsheet. Rasta looks at the name of each worksheet to determine which test fixture (class) to look in to find the methods called in that worksheet. I had initially decided that with this approach I would end up with fewer spreadsheets (possibly even one master spreadsheet), with one use case per tab. I could then add variation to each use case in the form of test cases. Each use case would correspond to a single Ruby class in the fixture files.

I knew something was wrong when my first class ended up with 50 data objects, each requiring its own accessor and a corresponding definition in column two of my spreadsheet. It was obvious that the ten or so test methods required for executing this use case were trying to do too much within a single fixture. It was difficult to read and manipulate. It also seemed fragile. It was true, however, that each test case did a lot of work in the AUT. It cranked through our entire process to create an insurance quote. I could easily copy each column into a new test case, change a bit here or there with a drop-down linked to my value list, and another path down that use case could be created.

But what if I just wanted to run through five new failure conditions at Step 3? With the current architecture, this scenario would leave much of my test fixture unused. This isn’t a huge deal, but I wanted to be able to more easily identify and isolate where in the fixture I would make code changes based on future changes in the AUT. My first approach would send me back to the massive test fixture, where I would surely fat-finger a random character somewhere else in the class and break the whole thing.

Instead of defining each worksheet (and subsequently each class) in my framework as a use case, I felt that I could go much deeper using the same basic concept if I backed out a bit and defined the use case at the workbook level instead. This approach would allow me to build test fixtures that correspond to each state or step in the process of creating my insurance quote (see Figure 4). I could then drill down at each step in the process instead of being tethered to the overall use case. Plus, it would be much easier to add detailed test cases at this level. In other words, I could “go deep”. With this new design approach, the parameters for all test cases on a given worksheet/fixture are the same, which makes it easier for me to see and debug problems.

The critical flaw in my original design was my assumption, for some reason, that everything should be run from a single spreadsheet. Why? Who says? As a developer, one can become overly concerned with avoiding repetition, associating it with duplication of code. Since Rasta ties the design to the spreadsheets, I assumed that duplicating the inputs and outputs in separate spreadsheets, even though each spreadsheet would execute an entirely different use case, would be undesirable. In reality, data-driven tests by their very nature are designed to be executed this way. My thinking is validated in the book Lessons Learned in Software Testing [2], where the authors write, “After you’ve put together a data-driven test procedure, you can use it again and again to execute new tests. The technique is most powerful with product workflows that have lots of different data options.” That is exactly this situation: there are few flows, but lots of data options. Breaking these options out into logical test fixtures just makes sense to me.

My next step was to break the test fixtures out into the various states of the quote creation process so they would correspond to my new spreadsheet layout. These states are (for simplicity’s sake): Login -> Enter High-Level Info (Step 1) -> Enter Detailed Item Data (Step 2) -> Rate Quote (Step 3) -> Check Return Values (Step 4). I simply had to break my “once through” fixture into its smaller “states”.

It soon became apparent that by the time I reached Step 1, I still needed all of the objects I had created in the Login fixture, so I could execute a positive path through Login to get to Step 1. What to do? I figured now was as good a time as any to leverage some of the object-oriented (OO) features with which, since falling in love with Ruby, I have become obsessed. In a nutshell, I had each test fixture inherit from the fixture performing the test steps immediately before it. In this way, I was able to isolate each step into its own class/fixture, and at every step all the methods are available to: 1) get to that step, and 2) execute tests on it. The nice thing is that each fixture contains only those objects needed for executing tests at that specific step. All other objects called from Rasta to “get you there” in the AUT live in the superclass from which each fixture inherits. See Figure 5 for an illustration of how this is done.

class TestFixture2 < TestFixture1

  def initialize
    super               # run the parent fixture's setup first
    @@gui = GuiMap.new  # initialize the GUI map object
  end

end
Figure 5 – Setup Inheritance and Create GUI Map Object
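To make the chaining concrete, here is a minimal sketch (the class and method names are hypothetical, and the Watir calls that would drive the AUT are stubbed out with strings) of how each step’s fixture inherits everything needed to reach it:

```ruby
# Hypothetical step fixtures: each one inherits from the fixture for the
# step immediately before it, so every "get me here" method is available
# further down the chain.
class LoginFixture
  def login(user, password)
    # the real fixture drives the AUT with Watir here
    "logged in as #{user}"
  end
end

class Step1Fixture < LoginFixture
  def enter_high_level_info(info)
    "step 1 complete"
  end
end

class Step2Fixture < Step1Fixture
  def enter_item_data(items)
    "step 2 complete"
  end
end

# A Step 2 test case can still call everything needed to reach Step 2:
fixture = Step2Fixture.new
fixture.login("qa_user", "secret")
fixture.enter_high_level_info(:minimal_quote)
fixture.enter_item_data([:auto, :home])
```

Each fixture defines only its own step’s methods; everything upstream comes along for free through inheritance.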

The “super” keyword ensures that the parent fixture’s setup runs as part of each child’s. I have not needed to yet, but any inherited method can still be overridden if necessary. Rasta includes the FIT-like before_all, before_each, after_all and after_each methods to handle the setup and teardown of each test case. I do all of the setup and teardown in the “Login” fixture, since it is always a parent class. I question this design decision and may eventually pull it out into its own namespace; still, it has not proven to be a problem after a few hundred test cases. I also initialize my global variables here. Using global variables is generally bad programming practice, but in test automation it is sometimes a necessary evil.
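A rough sketch of what that parent “Login” fixture’s setup and teardown looks like. The hook names follow Rasta’s FIT-like convention; the browser startup and the global variable are stand-ins here, not my actual values:

```ruby
class LoginFixture
  def before_all
    # runs once before the worksheet's test cases; in the real fixture
    # this would be something like: @@browser = Watir::Browser.new
    @@browser = :browser_stub
    $base_url = "http://qa.example.com"  # a "necessary evil" global
  end

  def before_each
    # navigate back to the login page before every test case
  end

  def after_each
    # log out / reset AUT state after every test case
  end

  def after_all
    # close the browser once the worksheet is finished
    @@browser = nil
  end
end
```

Because every step fixture ultimately inherits from this class, the setup and teardown never have to be repeated downstream.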

Since Rasta is tied to the class objects in the test fixture, you cannot simply include your other test fixtures at the top of your “child” fixtures and expect Rasta to recognize them. This concerned me a bit when I first thought it through, because it made me wonder whether setting up this object model had been unnecessary after all. I was relieved when I commented out the “&lt;” and everything broke!

Another design decision I struggled with was whether to use a “GUI map”. I wanted one place where all of my GUI objects would be defined with their respective Watir identification data. According to Watir’s author, Bret Pettichord, “Watir is an open-source library for automating web browsers. It allows you to write tests that are easy to read and maintain. It is simple and flexible.” [5] By keeping most of the Watir calls in one place, I could more easily absorb changes to the AUT.

I decided to lift another strategy from one of my colleagues, Jim Mathews, and put the GUI map in a separate class (this could also be done as a module). All buttons, fields, menus, tables, etc. are defined as methods in this class, and the class is instantiated at the beginning of each child class (see Figure 5). When you need to act on one of these objects, simply call the Watir method you want on the object itself. This approach follows the “Façade” design pattern: an interface class to the GUI elements that abstracts away the actual Watir code defining them. I used a naming convention that identifies the object type (e.g. btn_xxx, tbl_xxx, mnu_xxx), which allows reuse of objects that have already been defined. For example, I was able to reuse the “Next” button definition in each fixture that defined a step in the Wizard process. If our development team decides to change it to a hyperlink, or even to retitle the button “Next Step”, all that has to change is the single reference in the GUI map.
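A minimal sketch of that facade, using hypothetical element names: each method simply returns the underlying Watir element, so callers can invoke whatever Watir method they need on it.

```ruby
# GUI map: the single place where AUT elements are identified.
class GuiMap
  def initialize(browser)
    @b = browser  # a Watir browser object in real use
  end

  # Naming convention identifies the object type: btn_, txt_, mnu_, ...
  def btn_next
    @b.button(:value, "Next")
  end

  def txt_quote_name
    @b.text_field(:id, "quote_name")
  end
end

# In a fixture:
#   @@gui = GuiMap.new(@@browser)
#   @@gui.txt_quote_name.set("Q-123")
#   @@gui.btn_next.click
```

If the AUT changes, only the one method body in GuiMap changes; every fixture that uses the element is untouched.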

After a while, however, the GUI map class does become large. A good IDE like NetBeans [3] or Eclipse [4] makes navigating the class a breeze, and RDoc comments could be added to document it. I plan to use some creative metaprogramming in the future to slim it down even further, so that the GUI map methods can be self-creating.
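One possible shape for that future metaprogramming (a sketch, not something I have implemented yet): declare the elements as data and generate the accessor methods with define_method. The element names and locators here are hypothetical.

```ruby
class GuiMap
  # Element declarations: name => [watir method, how, what]
  ELEMENTS = {
    :btn_next       => [:button,     :value, "Next"],
    :txt_quote_name => [:text_field, :id,    "quote_name"]
  }

  def initialize(browser)
    @b = browser
  end

  # Generate one accessor method per declared element.
  ELEMENTS.each do |name, (kind, how, what)|
    define_method(name) { @b.send(kind, how, what) }
  end
end
```

Adding a new GUI object then becomes a one-line change to the ELEMENTS table instead of a new method definition.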

III. Conclusion

This is just one of a number of ways to handle this particular problem. The nice thing is that I can now easily identify where to make my changes in the framework whenever changes to the AUT occur. This approach also makes debugging my fixtures and test cases much simpler. As an added bonus, I can do much more targeted regression testing: if a change is deployed in just one of these steps (and that is how they usually come down), I know exactly where to add tests or test methods, and I can run that regression in isolation. I am certain there are many things I have overlooked, or achieved the “long way”. But it works. And it allows me to hand my boss a spreadsheet with many more “colored” cells on it!


[1] Rasta. 2008.

[2] Kaner, C., Bach, J., Pettichord, B. Lessons Learned in Software Testing. New York: Wiley, 2002.

[3] NetBeans. 2008.

[4] Eclipse. 2008.

[5] Watir. 2008.


Welcome to yowsbrain.left! My name is Gregg Yows, and I am a software developer and automated test engineer. This part of my Website is dedicated to the stuff that exercises the left half of my brain (check out Yowsbrain Right for the stuff that keeps me alive). Specifically, this left-brain “stuff” includes software development, software testing, home construction projects, auto repair, gardening, beer-making (questionably a left-brained exercise), ammunition reloading and all other technical challenges I desire to address in my day-to-day life. In both work-related and non-work-related technical activities, I strive for excellence. Why do things “half-ass”? … as we say here in Texas.

This site was created as a place for me to store my technical “notes”. I hope it will serve both the software community and the Internet community as a whole with information that helps folks become better software testers, developers, mechanics, gardeners, reloaders or brew-masters. As I grow and learn, I want to help others do the same. Please feel free to contact me with any comments, questions or suggestions at