Software Testing at TrustBearer

Hello, I’m Charles, and I’m the new Quality Engineer at TrustBearer. TrustBearer brought me on to keep the company on track to meet its quality goals. Some of these goals were already being implemented when I got here: unit tests, code reviews, good defect tracking, and code documentation. For my part, I tend to focus on system-level testing, including functional and non-functional testing (security, performance, etc.), as well as developing a more solid and repeatable testing process. With this in mind, I’m going to discuss the testing and quality assurance work we do at TrustBearer to ensure that our products work well and are secure. I’ll also touch on some of my personal philosophy with regard to testing.

My philosophy boils down to the idea that no program can be fully tested, and that a tester, or testing team, should focus on the return on investment (ROI) of their time. A lot of this philosophy has been developed through discussions with, and readings from, other professionals in the field such as Cem Kaner, James Bach, and Michael Kelly, including the idea of Context-Driven Testing. Some of my ideas also come from the two organizations I belong to, the Association for Software Testing (AST) and the Indianapolis Workshops on Software Testing (IWST).

Now, how does this philosophy apply to TrustBearer products? Well, if we look at the TrustBearer Desktop page on our website, we list our compatibility with various smart cards and OSes, and mention other technologies we support. Just looking at that page tells me that there are a large number of combinations of OSes, browsers, smart cards, and mail programs that need to be tested, and that ignores any external factors (another USB device used by the customer causing problems with our system, for example).
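To make that explosion concrete, here’s a minimal Python sketch. The platform lists below are purely illustrative placeholders, not our actual support matrix, but the arithmetic is the same for any matrix:

```python
from itertools import product

# Hypothetical support matrix -- illustrative values, not TrustBearer's real lists
oses = ["Windows XP", "Windows Vista", "Windows 7", "Mac OS X 10.5", "Mac OS X 10.6"]
browsers = ["Internet Explorer 7", "Internet Explorer 8", "Firefox 3.5", "Safari 4"]
cards = ["PIV", "CAC", "PKCS#15"]
mail_clients = ["Outlook 2003", "Outlook 2007", "Thunderbird"]

# Every full combination the matrix implies:
combinations = list(product(oses, browsers, cards, mail_clients))
print(len(combinations))  # 5 * 4 * 3 * 3 = 180 configurations
```

Even this toy matrix yields 180 configurations, and each new axis (mail program versions, plugin versions, other USB devices) multiplies the total again.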

So if it’s impractical to test everything all of the time, what is tested? There is no one correct answer; however, a good tactic is to start by defining the problem space. For me, this involves finding the typical configurations first. When I say a typical configuration for TrustBearer products, I mean this in regard to what our software interacts with. We can never fully replicate what our customers will have in terms of hardware and software, but we can have a reasonable approximation. For instance, if most of our customers are using PIV cards with Windows 7 as their OS, Firefox 3.5.6 as their web browser, and Outlook 2007 as their mail program, then my tests run primarily against that base configuration.
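One simple way to encode this idea is to treat the typical setup as a base configuration and generate test configurations that each vary a single axis away from it. This is only a sketch, and the variant platforms below are made up for illustration:

```python
# The "typical" customer setup, tested first and most often
base = {"os": "Windows 7", "browser": "Firefox 3.5.6",
        "card": "PIV", "mail": "Outlook 2007"}

# Hypothetical alternates for a few axes (illustrative values only)
variants = {
    "os": ["Windows Vista", "Mac OS X 10.6"],
    "browser": ["Internet Explorer 8"],
}

def one_axis_configs(base, variants):
    """Yield the base config, then configs differing from it in one field each."""
    yield dict(base)
    for field, values in variants.items():
        for value in values:
            cfg = dict(base)
            cfg[field] = value
            yield cfg

configs = list(one_axis_configs(base, variants))
print(len(configs))  # base + 3 single-axis variants = 4
```

Instead of 180 combinations, a run like this covers the configuration most customers actually have, plus the most likely deviations from it.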

Another good way to focus what is tested is to look at which features are used the most, and in what ways. For a good example of this, our software products facilitate the use of hardware and software tokens for things like Windows logon, email signing and encryption, signing Word and PDF documents, and interacting with various websites. While we test all of these features, initial testing focuses on what our customers most typically use our software for.

Another generally good method of testing is focusing on high-risk areas. For some applications, this might be the billing system or the login system (neither of which you want serious defects in). For TrustBearer, this focus tends to fall on security testing. For instance, sometimes we develop web pages for customers that work with our TrustBearer Live Plugin, and we run tests simulating SQL injection, phishing, and man-in-the-middle attacks to make sure that our customers’ data can’t be exposed to anyone untrustworthy.
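As a self-contained illustration of what a SQL injection check verifies (using Python’s built-in sqlite3 module rather than any of our actual web infrastructure), a classic payload slips past string-concatenated SQL but is neutralized by a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "alice' OR '1'='1"  # classic injection attempt

# Vulnerable: string concatenation lets the payload rewrite the query logic
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe: a parameterized query treats the payload as literal data
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()

print(len(vulnerable), len(safe))  # 1 0 -- injection leaks a row; parameters do not
```

A security test suite automates exactly this kind of probe against the real application, asserting that hostile inputs never change a query’s meaning.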

Using these techniques as a base, we go on to test more and more features and platform combinations. The goal is to have confidence that any bugs we have not found are minor, obscure, and will not cause problems for our customers. As with everything else in software, this is never a finished process, but with a good philosophy and a dedicated team, quality improves with every revision.

While that isn’t all there is to testing, I hope the above gives you a little glimpse into the process. In future posts, I hope to share more about how we test, the tools we use, and how we determine when we’re ‘finished’.

- Charles


5 responses to “Software Testing at TrustBearer”

  1. Great article. My QA team runs into the same issues testing our web-based consumer application. It just isn’t feasible to expect to cover all supported OS & browser combinations (stupid IE6).

    “As with everything else in software, this is never a finished process…”

    Completely agree! My QA team deals with this conundrum through something they call Focus Iterative Testing. So in addition to testing the most-used features on the most common configurations, they focus on what has changed from release to release. Not sure if you are familiar with this methodology, but it has helped our short-staffed QA team get the best possible ROI on their time.

    • Russ, thank you for the comment. I went ahead and googled your method and I think I have heard of it (the FIT acronym sounds very familiar).

      As for my own testing, I also tend to focus on changed/added/removed content from release to release.

      Judging from the first link I found detailing the method (here, by the way: http://www.priorartdatabase.com/IPCOM/000126438/), it sounds like this has a heavy focus on automation. Considering you’re testing web-based applications, would you happen to be using Watir or Selenium? If not, might I ask what you recommend? I’m always interested in finding new tools.

      -Charles

  2. Hi,

    This Software Testing article is very useful for me. I would like to introduce another good Software Testing blog which has free ebooks and technical content. Have a look.

    http://qualitypoint.blogspot.com/2009/12/released-two-ebooks-for-learning.html

  3. Chris,

    Your system testing strategy is sound and makes sense; however, you are the Quality Engineer, and as such I presume your focus would be on all aspects of the development and testing process.

    The good news is you have good things in place – unit tests, code reviews, good defect tracking, code documentation.

    I would suggest you also focus on improving the code review, unit test, defect tracking and documentation processes. The more defects you can uncover in these areas, before system testing, the more efficient your system testing will be, thus making your ROI better also.

    • Well, assuming you meant Charles, I’ll reply. :)

      Thank you for the general praise; it’s always good for me personally to hear from other testers that my thinking is sound (nothing like peer review!).

      I do agree with you; I plan to become more active in those processes, if not take them over entirely. The downside is that there’s only so much time in the day, and my personal experience has been focused on system testing.

      In addition to the above, we have a few developers here who are very focused on quality and have made it a point to emphasize those things in their work. So at the moment, I defer to their experience while I absorb knowledge from them.
