
Software Testing at TrustBearer

Hello, I’m Charles and I’m the new Quality Engineer at TrustBearer. TrustBearer brought me on to keep the company on track to meet its quality goals. Some of these goals were already being pursued when I got here: unit tests, code reviews, good defect tracking, and code documentation. For my part, I tend to focus on system-level testing, including functional and non-functional testing (security, performance, etc.), as well as developing a more solid and repeatable testing process. With this in mind, I’m going to discuss the testing and quality assurance work we do at TrustBearer to ensure that our products work well and are secure. I’ll also touch on some of my personal philosophy regarding testing.

My philosophy boils down to the idea that no program can be fully tested, and that a tester, or testing team, should focus on the return on investment for their time. Much of this philosophy, including the idea of Context-Driven Testing, comes from discussions with and readings from other professionals in the field such as Cem Kaner, James Bach, and Michael Kelly. Some of my ideas also come from the two organizations I belong to: the Association for Software Testing (AST) and the Indianapolis Workshops on Software Testing (IWST).

Now, how does this philosophy apply to TrustBearer products? Well, the TrustBearer Desktop page on our website lists the smart cards, operating systems, and other technologies we support. Just looking at that page tells me there is a large number of combinations of OSes, browsers, smart cards, and mail programs that need to be tested, and that ignores any external factors (another USB device the customer uses interfering with our software, for example).
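
To make the combinatorial problem concrete, here is a rough sketch in Python; the platform lists are illustrative stand-ins, not our actual compatibility matrix:

```python
# A rough sketch (not our real support matrix) of how quickly the
# combinations pile up; every list below is purely illustrative.
from itertools import product

operating_systems = ["Windows XP", "Windows Vista", "Windows 7", "Mac OS X", "Linux"]
browsers = ["Internet Explorer", "Firefox", "Safari", "Chrome"]
smart_cards = ["PIV", "CAC", "PKCS#11 token", "OpenPGP card"]
mail_clients = ["Outlook 2003", "Outlook 2007", "Thunderbird", "Apple Mail"]

configurations = list(product(operating_systems, browsers, smart_cards, mail_clients))
print(len(configurations), "combinations before counting versions or external factors")
# 5 x 4 x 4 x 4 = 320 configurations, and that is still a simplification
```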

So if it’s impractical to test everything all of the time, what is tested? There is no one correct answer; however, a good tactic is to start by defining the problem space. For me, this involves finding the typical configurations first. When I say a typical configuration for TrustBearer products, I mean it in terms of what our software interacts with. We can never fully replicate what our customers have in hardware and software, but we can build a reasonable approximation. For instance, if most of our customers are using PIV cards with Windows 7 as their OS, Firefox 3.5.6 as their web browser, and Outlook 2007 as their mail program, then my tests run primarily against that base configuration.
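
As a rough illustration of that idea (the configurations and usage estimates below are made up for this post, not real customer data), you can think of it as a base configuration plus the most common departures from it, tested in order of how many customers they represent:

```python
# Illustrative only: a base configuration plus variations ordered by a
# hypothetical share of customers running them.
base_configuration = {
    "card": "PIV",
    "os": "Windows 7",
    "browser": "Firefox 3.5.6",
    "mail": "Outlook 2007",
}

variations = [
    ({"browser": "Internet Explorer 8"}, 0.30),              # made-up usage estimates
    ({"os": "Mac OS X 10.6", "mail": "Apple Mail"}, 0.15),
    ({"card": "CAC"}, 0.10),
]

# Test the base configuration first, then the most common departures from it.
test_plan = [base_configuration] + [
    dict(base_configuration, **delta)
    for delta, usage in sorted(variations, key=lambda v: v[1], reverse=True)
]
for config in test_plan:
    print(config)
```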

Another good way to focus what is tested is to look at which features are used the most, and in what ways. For example, our software products facilitate the use of hardware and software tokens for things like Windows logon, email signing and encryption, signing Word and PDF documents, and interacting with various websites. While we test all of these features, initial testing focuses on what our customers typically use our software for most.

Another generally good method is focusing on high-risk areas. For some applications, this might be the billing system or the login system, neither of which you want to find any serious defects in. For TrustBearer, this focus tends to fall on security testing. For instance, sometimes we develop web pages for customers that work with our TrustBearer Live Plugin, and we run tests simulating SQL injection, phishing, and man-in-the-middle attacks to make sure that our customers’ data can’t be exposed to anyone untrustworthy.
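
As a sketch of the simplest form such a check can take (the URL, parameter name, payloads, and error strings below are hypothetical, and a real security assessment goes much further), a script might probe a form field for tell-tale database errors:

```python
# Hypothetical example of a very basic SQL injection smoke test.
# The target URL, parameter, payloads, and error signatures are assumptions
# for illustration; they are not taken from an actual TrustBearer test suite.
import requests

SUSPICIOUS_PAYLOADS = ["' OR '1'='1", "'; DROP TABLE users; --"]
ERROR_SIGNATURES = ["sql syntax", "odbc", "unterminated quoted string"]

def looks_vulnerable(url, param):
    """Return True if any payload provokes a database error in the response."""
    for payload in SUSPICIOUS_PAYLOADS:
        response = requests.get(url, params={param: payload}, timeout=10)
        body = response.text.lower()
        if any(signature in body for signature in ERROR_SIGNATURES):
            return True  # a database error leaked back to the client
    return False

if __name__ == "__main__":
    # Run against a staging server we control, never a production system.
    print(looks_vulnerable("https://staging.example.com/login", "username"))
```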

Using these techniques as a base, we go on to test more and more features and platform combinations. The goal is to build confidence that any bugs we have not found are minor, obscure, and will not cause problems for our customers. As with everything else in software, this is never a finished process, but with a good philosophy and a dedicated team, quality improves with every revision.

While that isn’t all there is to testing, I hope the above gives you a little glimpse into the process. In future posts I hope to share more about how we test, the tools we use, and how we decide when we’re ‘finished’.

– Charles
