A Data-Driven Automated Test Design Using Watir and Rasta

I originally wrote this as a white paper, but have decided to go ahead and put it here in my left-brain blog as well. Watir and Rasta work very well together and I hope this brief explanation of how I implemented my solution is useful for some folks. Anyway, here goes…

To remain flexible to change and to provide the scalability necessary to drive hundreds of variations of the same test case through a system under test, test automation that drives software applications from the presentation layer must be designed so that variable data and application object information are separated from the automation code. Because of its flexibility, rich library of modules and enthusiastic user community, Ruby, an open-source, pure object-oriented, interpreted language, has been adopted by many automation engineers in the software quality assurance industry. Ruby extensions like Watir and Rasta have been created by the open-source community to meet these requirements for test frameworks with minimal effort and zero cost. This paper describes just one way to leverage these open-source Ruby “gems”, as well as the OO properties of Ruby, to quickly create a flexible and scalable automated test suite.

I. Introduction

Given the time and effort many organizations now devote to test automation, software test automation engineers must frequently ask themselves whether they are getting the most test coverage possible out of the time spent developing their automated test solutions. Many times, I have worked the “problem” of driving our various AUTs (Applications Under Test) through their “green path” only to realize that, after successfully accomplishing this task, I had not created very many real tests! Sure, driving the application itself through various high-level use cases is a valuable form of regression. But I tend to get so hung up on making our functional test framework flexible and scalable that sometimes I have to stop and remind myself to take advantage of the opportunities that my whiz-bang test design has provided.

As automated test developers, we need to be digging deeper into the various parts of the AUT whenever it is practical to do so. The flexible tools we create need to be leveraged for all they are worth, not just appreciated for their glorious agility. Neither the developers nor the management team care how elegant and flexible our frameworks are. As a colleague of mine, Jim Mathews, likes to say, “They just want to see lots of ‘green’ cells in a spreadsheet.”

II. Application Under Test

For the remainder of this post, I will discuss various thought processes, decisions and automation solutions as they pertain to a particular Website that I describe in this section.

My company sells insurance policies. Our primary “Web Portal” is used by insurance agents to enter data remotely and receive a quote for an insurance policy customized according to their selections in the application wizard. The returned quote includes an estimated premium and commission for the agent.

The key feature in the Web application is the quote-generation “wizard”. It is this feature that will enable the strategy outlined in this paper to work.

Many Websites use some sort of “wizard” approach to guide their users to a solution. Shopping cart applications (www.amazon.com), store locators (www.pizzahut.com) and car value estimators (www.kbb.com) are just a few examples of how Web wizards drive customers to their desired information, purchase or answer. It is the uniformity of this Web design pattern that gives the test automation approach described in this paper a more global application.

III. Initial Solution

Recently, I completed a first pass through our latest Web application. I was quite proud of myself for successfully leveraging a new Ruby-based data-driven test framework called “Rasta” [1]. Rasta was developed for the open-source community by Hugh McGowan and Rafael Torres; the name stands for “Ruby Spreadsheet Test Automation”. I used Rasta to help drive my company’s Web portal application through the creation of an insurance quote.

Rasta is a FIT-like framework for developing data-driven test cases in Excel. It allows a test developer to separate the data needed to run various test cases from the test code, which, in the case of Rasta, is known as a test “fixture”. The test fixture is the code the spreadsheet refers to: Rasta reads the spreadsheet to determine which of the fixture’s test methods will be executed and which data will be used in those test methods.

Test cases are executed based on the organization of the spreadsheet. The execution order runs from the top-left of the first worksheet to the bottom-right of the last worksheet in the target workbook. The individual worksheet tab names refer to the classes you develop in your Ruby-based test “fixture”. You can design your individual test procedures any way you wish within a horizontal or vertical construct in Excel, which means your first design decision is how to lay out your test cases. I typically choose a vertical approach, meaning test cases are isolated into columns that are executed from left to right (see Figure 1).


Figure 1 – Test Case Layout

This approach saves me from having to scroll left and right so much whenever I am designing and testing the framework. If you are used to writing your test frameworks from a test/unit perspective (like most Watir and test/unit tutorials describe), using Rasta will force you to think a bit differently about how you organize your test objects in your Ruby code. The test designer looks to the spreadsheet to set the order in which test cases will be executed, instead of the order in which test methods are laid out in test/unit classes. Data is passed from the spreadsheet into the test fixture (which is simply a collection of Ruby classes) via standard Ruby accessors, and methods are invoked by Rasta in the order they are listed on the spreadsheet. Once again, organization in the spreadsheet matters because it reflects the order in which the test methods that act on the AUT will be executed. If you don’t get this right, you will be trying to hit a “Pay” button before logging in!
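
To make that concrete, here is a minimal sketch of what such a fixture might look like. Everything in it is hypothetical, for illustration only (the class name, accessors, URL and element locators are not my production code); the point is simply that Rasta assigns the accessors from spreadsheet cells and then calls the methods named in the sheet.

require 'watir'

# Hypothetical Rasta fixture. Rasta sets each accessor from a spreadsheet
# cell, then invokes the methods listed in the worksheet, in order.
class LoginFixture
  attr_accessor :username, :password  # row labels in column two of the sheet

  def login
    browser = Watir::IE.new                          # Watir of this vintage drives IE
    browser.goto('http://portal.example.com/login')  # placeholder URL
    browser.text_field(:name, 'username').set(@username)
    browser.text_field(:name, 'password').set(@password)
    browser.button(:value, 'Log In').click
  end
end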

One of the neatest features about using Rasta, and this goes for any data-driven test framework that uses Excel, is that the test developer can leverage all of the built-in features of Excel to help simplify test case creation. This comes in handy when handing the test framework off to a non-developer for maintenance and additional test case creation.

For example, let’s say a form in your AUT contains several drop-down selection boxes. These selection options are, by design, limited by the configuration defined in the AUT (i.e. a user can only select “Texas”, “Montana” or “Oregon” within the “State” selection box). When designing the test scenario in Rasta, these values can be linked via Excel’s built-in cell validation feature so that only valid option values can be selected as test case attributes in that particular cell in the spreadsheet. Figure 2 shows how this “lookup” table may be laid out.


Figure 2 – Select Options

The column headers represent the “name” given to the data set listed below them. This name is defined in Excel and associated with that set of cells; Excel’s built-in help has pretty good instructions for creating these cell-range “names”. Once a name (cell range) is defined, the cell validation feature can then be used in the cells of your test scenarios (Figure 1) to restrict the data in your test scenario to only those values that are valid in the AUT.

Even though this approach takes more up-front effort, once developed, it allows a tester to simply copy a whole test scenario over to a new column, generating a new, unique test case in a matter of seconds merely by toggling one or more of the pre-defined cell options. In this way, the test developer can generate test cases without even having access to the AUT. One can add even more logic here so that the allowable values are further restricted based on other selections made in that particular test case; this, too, could be done using Excel formulas for cell validation.

IV. Design Improvements

In the Introduction, I described a problem pattern that many test automators, myself included, seem prone to repeating: we develop an automation framework that drives the AUT with a broad use case in mind. We are interested mostly in driving the app with “no hands” instead of designing our frameworks to dig deep into each particular feature.

Lucky for me, I am privileged to work with colleagues whose testing experience dwarfs mine by many years. These are true testers at heart, whereas I tend to get carried away by the exercise of test development. When I presented my framework to Barton Layne, a fellow tester and past contributor to Better Software Magazine among other publications, he exposed my lack of vision for testing the AUT and my selfish desire to automate rather than test.

I knew that I had fallen into the trap described above. I was carried away with the “cool” factor and didn’t see the opportunity to “dig deep” that was staring me squarely in the face. I’m not one to run away from a challenge, so I went back to think about how I could make the test framework more “vertical”.

The first problem that quickly became apparent to me was the way I had implemented my tests in the spreadsheet. In my initial design, each use case was represented by a tab (worksheet) in the spreadsheet; Rasta uses the name of each worksheet to determine which test fixture (class) contains the methods called in that worksheet. I had initially decided that, with this approach, I would end up with fewer spreadsheets (possibly even one master spreadsheet) with one use case per tab. I could then add variation to each use case in the form of test cases, and each use case would correspond to a single Ruby class in the fixture files.

I knew something was wrong when my first class ended up with 50 data objects, each requiring its own accessor and a corresponding definition in column two of my spreadsheet. It was obvious that the ten or so test methods required to execute this use case were trying to do too much within a single fixture. It was difficult to read and manipulate, and it seemed fragile. It was true, however, that each test case did a lot of work in the AUT: it cranked through our entire process to create an insurance quote. I could easily copy each column into a new test case, change a bit here or there with a drop-down linked to my value list, and another path through that use case would be created.

But what if I just wanted to run through five new failure conditions at step 3? With this architecture, that scenario would leave much of my test fixture unused. This isn’t a huge deal, but I wanted to be able to identify and isolate more easily where in the fixture I would make code changes in response to future changes in the AUT. My first approach would send me back to the massive test fixture, where I would surely fat-finger a random character somewhere else in the class and break the whole thing.

Instead of defining each worksheet (and consequently each class) in my framework as a use case, I felt I could go much deeper using the same basic concept if I backed out a bit and defined the use case at the workbook level instead. This approach would allow me to build test fixtures that correspond to each state, or step, in the process of creating my insurance quote (see Figure 4). I could then drill down at each step in the process instead of being tethered to the overall use case. Plus, it would be much easier to add detailed test cases at this level. In other words, I could “go deep”. With this new design, the parameters for all test cases on a given worksheet/fixture are the same, which makes it easier for me to see and debug problems.

The critical flaw in my original design was my thinking, for some reason, that everything should be run from a single spreadsheet. Why? Who says? As a developer, one can become overly concerned with avoiding repetition, associating it with duplication of code. Since Rasta ties the design to the spreadsheets, I assumed that duplicating the inputs and outputs in separate spreadsheets, even though each spreadsheet would be executing an entirely different use case, would be an undesirable approach. In reality, data-driven tests by their very nature are designed to be executed in this fashion. My thoughts are validated in the book “Lessons Learned in Software Testing” [2], where the authors write, “After you’ve put together a data-driven test procedure, you can use it again and again to execute new tests. The technique is most powerful with product workflows that have lots of different data options.” That is exactly the situation here: there are few flows, but lots of data options. Breaking out these options into logical test fixtures just makes sense to me.

My next step was to break out the test fixtures into the various states of the quote creation process so they would correspond to my new spreadsheet layout. These states are (for simplicity’s sake): Login -> Enter High-Level Info (Step 1) -> Enter Detailed Item Data (Step 2) -> Rate Quote (Step 3) -> Check Return Values (Step 4). I simply had to break my “once through” fixture into all of its smaller “states”.

It soon became apparent that by the time I reached Step 1, I would still need all of the objects created in the Login fixture in order to execute a positive path through Login to get me to Step 1. What to do? I figured now was as good a time as any to leverage some of the power of the object-oriented (OO) features with which, since falling in love with Ruby, I have become obsessed. In a nutshell, I had each test fixture inherit all of the objects from the test fixture performing the test steps immediately before it. In this way, I was able to isolate each step into its own class/fixture while keeping every method available at each step to: 1) get to the step and 2) execute tests on that step. The nice thing is that each fixture contains only those objects needed for executing tests at that specific step; all other objects called from Rasta to “get you there” in the AUT live in the superclass from which each fixture inherits its attributes. See Figure 5 for an illustration of how this is done.

class TestFixture < TestFixture1
  def initialize
    super
    @@gui = ApGuiMap.new  # initialize GUI map
  end
end

Figure 5 – Setup Inheritance and Create GUI Map Object
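
Extended across every step of the wizard, the inheritance chain might look something like the sketch below. The class names here are illustrative, not my production fixtures; each worksheet tab would name one of these classes.

class LoginFixture                                 # setup/teardown lives here
end

class HighLevelInfoFixture < LoginFixture          # Step 1
end

class DetailedItemFixture < HighLevelInfoFixture   # Step 2
end

class RateQuoteFixture < DetailedItemFixture       # Step 3
end

class ReturnValuesFixture < RateQuoteFixture       # Step 4
end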

The “<” in the class definition sets up the parent/child hierarchy, and the call to “super” runs the parent’s initialize method. I have not had to yet, but any inherited method can still be overridden, if necessary. Rasta includes the FIT-like before_all, before_each, after_all and after_each methods to handle the setup and teardown of each test case. I do all of the setup and teardown in the “Login” fixture, since it is always a parent class. I question this design decision and may eventually pull this out into its own namespace; still, it has not proven to be a problem after a few hundred test cases. I also initialize my global variables here. While using global variables is generally bad programming practice, in test automation it is sometimes a necessary evil.
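
As a rough sketch of what that parent fixture might contain (the URL is a placeholder and $browser is just one example of the globals mentioned above; the hook names are the ones Rasta provides):

require 'watir'

# Setup and teardown live in the parent Login fixture, so every child
# fixture inherits them.
class LoginFixture
  def before_all
    $browser = Watir::IE.new                    # one browser session per run
    $browser.goto('http://portal.example.com')  # placeholder URL
  end

  def after_all
    $browser.close
  end
end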

Since Rasta is tied to the class objects in the test fixture, it is not possible to simply require your other test fixtures at the top of your “child” fixtures and expect Rasta to recognize them. This concerned me a bit when I started thinking about it, because I began to suspect that setting up this object model had been completely unnecessary after all. I was relieved when I commented out the “<” and everything broke!

Another design decision that I struggled with was whether or not to use a “GUI map”. I wanted to have one place where all of my GUI objects would be defined with their respective Watir identification data. According to the author of Watir, Bret Pettichord, “Watir is an open-source library for automating web browsers. It allows you to write tests that are easy to read and maintain. It is simple and flexible.” [5] By keeping most of the Watir calls in one place, I could more easily maintain changes to the AUT.

I decided to lift another strategy from one of my colleagues, Jim Mathews, and put the GUI map in a separate class (this could also be done as a module). All buttons, fields, menus, tables, etc. are defined as methods in this class, and the class is instantiated at the beginning of each child fixture and assigned to the @@gui class variable (see Figure 5). When you need to act on these objects, simply call the Watir method you want on the object itself. This approach follows the “Façade” design pattern by creating an interface class to the GUI elements and abstracting out the actual Watir code that defines those elements. I used a naming convention that identifies the objects (e.g. btn_xxx, tbl_xxx, mnu_xxx, etc.), which allows for reuse of objects that have already been defined. For example, I was able to reuse the “Next” button definition in each fixture that defined a step in the wizard process. If our development team decides to change this to a hyperlink, or even to change the title of the button to “Next Step”, all that has to be done is change the single reference in the GUI map.
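
A trimmed-down sketch of what such a GUI map might look like is below. The element locators here are hypothetical; the real class defines every button, field, menu and table in the AUT.

# Facade over the Watir element definitions. A change in the AUT means
# editing exactly one method here and nothing in the fixtures.
class ApGuiMap
  def btn_next
    $browser.button(:value, 'Next')       # reused by every wizard-step fixture
  end

  def mnu_state
    $browser.select_list(:name, 'state')
  end

  def tbl_quote
    $browser.table(:id, 'quoteSummary')
  end
end

A fixture then acts on the mapped element directly, e.g. @@gui.btn_next.click or @@gui.mnu_state.select('Texas').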

After a while, however, the GUI map class does become large. Using a good IDE like NetBeans [3] or Eclipse [4] can make navigating the GUI map class a breeze, and one could also add RDoc comments to it. I plan to use some creative meta-programming in the future to slim this down even more, so that the GUI map methods can be self-generating.
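
One hedged sketch of that idea: declare the elements as data and let Ruby define the map methods at load time (again, the names and locators here are hypothetical).

# Generate the GUI map methods from a declaration table instead of
# hand-writing one method per element.
class ApGuiMap
  ELEMENTS = {
    :btn_next  => [:button,      :value, 'Next'],
    :mnu_state => [:select_list, :name,  'state'],
    :tbl_quote => [:table,       :id,    'quoteSummary']
  }

  ELEMENTS.each do |name, (type, how, what)|
    define_method(name) { $browser.send(type, how, what) }
  end
end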

V. Conclusion

This is just one of a number of ways to handle this particular problem. The nice thing is that I can now easily identify where to make changes in the framework whenever the AUT changes, and this approach also makes debugging my fixtures and test cases much simpler. As an added bonus, I can do much more specific regression testing: if a change is deployed in just one of these steps (and that is how they usually come down), I know exactly where to go to add tests or test methods, and I can run that regression in isolation. I am certain there are many things I have overlooked or taken the “long way” to achieve. But it works. And it allows me to hand a spreadsheet to my boss with many more “colored” cells on it!

References

[1] Rasta. http://rasta.rubyforge.org/. 2008.

[2] Kaner, Bach, Pettichord. Lessons Learned in Software Testing. New York: Wiley, 2002.

[3] NetBeans. http://www.netbeans.org/. 2008.

[4] Eclipse. http://www.eclipse.org/. 2008.

[5] Watir. http://wtr.rubyforge.org/. 2008.
