Internationalization in TrustBearer Products


Here at TrustBearer, we’re reworking the underlying pieces of our software to provide the services required for a full treatment of internationalization issues, from cultural sensitivity to language translation.  Fitting such an infrastructure onto a vast existing system of hardcoded, interlinked modules, spanning multiple programming languages and techniques on several platforms, is a challenge, but we have come a long way toward the goal of true international support.

Issues of Internationalization

The most obvious issue of internationalization is the language in which the user interface is presented: if your audience speaks only French, or perhaps just as importantly prefers to speak French, then you need to offer a user interface in French.  Secondary issues can carry much more subconscious weight, however.  As individual cultures have arisen and diverged from one another across the planet, even in the age of international commerce and communication that the Internet conveys, specific communities still carry important symbols, ideas, and values which are intrinsically and uniquely their own.

The meaning of colors varies from one society to another, as does the importance of iconography.  For example, the dagger (†) bears similarity to the Christian cross (as it should, since that was part of the original meaning conveyed in its use) and should be used carefully, even in a purely typographical role, to avoid unintentional religious connotations.  This sort of care is often given nary a second thought by those responsible for internationalization strategy, and it is important to address it just as evenly as one addresses language.  On the subject of translation, many things carry over with the baggage of verbal communication: formality, dialect, and the hints and subtleties that individual words carry both inside and outside of specific contexts.  All of these come into play when deciding on a technique for building the infrastructure that provides international support for a software system.


There are four fundamental systems in TrustBearer’s overall product ecosystem: the web plugin with dynamic device support, the browser, static web content, and the server.  We have chosen to implement three components which satisfy the translation needs of all four pieces.

Plugin, Device Support, and JavaScript

We implement dynamic translation features in the plugin as a module, which is loaded at run-time just like device support is.  In this way we offer the ability to switch languages in real-time.  It also gives us the luxury to download translations from a server where they can be modified on the fly without the need for any kind of recompilation or redistribution.  Because our plugin and the methods of its modules, which have been marked with the appropriate access level, are accessible from JavaScript running in the browser, we can extend such dynamic translation support to the browser as well.

Translations are conducted using a very simple macro language inspired syntactically by TeX.  It allows placement of separately translated arguments in arbitrary order (to facilitate support of languages such as German which have different or more flexible word-order structures) and automatic construction of quotation marks (to handle nested quotations built from separate translations substituted as in the previous remark).  New extensions can be added without complicating the writing of a translation module: the goal is to allow our customers to modify these translations, or substitute their own language, if they see fit.
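As a rough illustration of the idea only (the actual macro syntax is TrustBearer’s own and not shown here), a TeX-style numbered-argument scheme can be sketched in a few lines of Python:

```python
import re

# Hypothetical sketch of a TeX-inspired translation macro: numbered
# placeholders (#1, #2, ...) let each translation place its arguments
# in whatever order that language requires. Syntax is illustrative.
def translate(template, *args):
    """Substitute each #N marker with the corresponding argument."""
    return re.sub(r"#(\d+)", lambda m: args[int(m.group(1)) - 1], template)

en = "Insert the #1 into the #2."
de = "Stecken Sie #1 in #2 ein."  # same arguments, order chosen by the translator

print(translate(en, "card", "reader"))
# Insert the card into the reader.
```

Because the arguments are themselves separately translated strings, the same mechanism composes naturally for the nested-quotation case described above.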

Static Web Content

The exact same translation module drives a compile-time tool to create static versions of web pages, style sheets, and so forth which support individual cultures.  This step occurs automatically using pkgBuilder, the tool used throughout TrustBearer for building all systems we produce.  The result is a collection of HTML files, one for each supported culture.  Customers are then free to select which of the available languages to deploy, and to use their webserver’s rewriting capabilities to send, for example, their Norwegian users to a Norwegian version rather than to index.en.html.  This strikes a balance between computation time and flexibility: while TrustBearer’s clients can no longer retranslate pages without any extra work, their servers will not be constantly working to produce the latest translated version of content that is unlikely to change with any notable frequency.
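The build step can be pictured as a small loop over the supported cultures.  This is purely an illustrative sketch (pkgBuilder’s real input format and file naming are not shown here):

```python
# Illustrative sketch only -- not TrustBearer's actual pkgBuilder tool.
# At build time, render one static page per supported culture, producing
# index.<lang>.html files a webserver can route to by language.
translations = {
    "en": {"title": "Welcome"},
    "no": {"title": "Velkommen"},
}
template = "<html><head><title>{title}</title></head><body></body></html>"

for lang, strings in translations.items():
    with open(f"index.{lang}.html", "w", encoding="utf-8") as f:
        f.write(template.format(**strings))
```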


The Server

The core TrustBearer server, which we simply call “the daemon,” rarely provides information directly to the user.  Rather, it contains a lookup system that checks incoming error attributes against a database and returns a corresponding message which is meaningful to the humans who will diagnose or work around problems.  For quite some time, the daemon has used the client’s language (as provided by their browser) to look up these human-readable messages.  Little to no extension of this system is necessary to continue providing error and status messages to users in the languages in which they are accustomed to communicating.
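A minimal sketch of such a lookup, assuming a hypothetical error code and the browser’s Accept-Language header as input (the daemon’s real message schema is not shown here):

```python
# Sketch of a language-keyed error-message lookup with an English
# fallback; the table contents and error codes are illustrative.
MESSAGES = {
    ("CARD_LOCKED", "en"): "The card is locked. Contact your administrator.",
    ("CARD_LOCKED", "no"): "Kortet er låst. Kontakt administratoren din.",
}

def lookup(code, accept_language):
    """Pick the first language from the browser's Accept-Language
    header that has a translation; fall back to English."""
    for entry in accept_language.split(","):
        lang = entry.split(";")[0].strip().split("-")[0]
        msg = MESSAGES.get((code, lang))
        if msg:
            return msg
    return MESSAGES[(code, "en")]

print(lookup("CARD_LOCKED", "no,en;q=0.8"))
```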

The Road Ahead

While the architecture of the internationalization system is fairly set, the work of retrofitting it into the existing system is just beginning.  Over the next few months, we will be externalizing strings and incorporating the new methods of translating messages dynamically.  TrustBearer’s head engineer, Eirik Herskedal, is from Norway and will be providing a pilot translation into his native tongue for us to begin testing with.  The testing and quality assurance process will take the longest, but features in the translation system (for example, XXX’ing out strings for which no translation exists) will make this process smoother.  After several months, we expect these features to enter beta use by our most prestigious customers.  Then, with consideration of customer feedback, these features will be added to all of our future deployments.

Software systems that work with the cultural complexities and sensitivities of their users, rather than against them, are in woefully short supply.  At TrustBearer, we value the differences in our users, and wish to better cater to the multi-faceted nature of their individual societies.  By starting this long work to implement internationalization features in TrustBearer products, we begin down the road of providing our system in a way that works best for you, whoever and wherever you may be.

Bureaucrats with Badges

There was a peculiar piece in the American Spectator online last week, a “Special Report” by Mark Hyman. The author lists a number of unfortunate circumstances by which harmless passengers, many times military personnel, have been delayed or hassled by TSA and airport security protocols. He blames these anecdotal mishaps on “government bureaucrats armed with ‘rules, policies and procedures’ and employing no commonsense.”

He goes on to question a number of security and procedural policies in government and military institutions, which he thinks are unnecessary and demeaning to the personnel at these institutions. As a primary example, Hyman makes the case that the rules for issuing and renewing CACs (Common Access Cards) are unneeded and absurd.

He is miffed because he did not renew his CAC before it expired and he had to go through a bureaucratic process to straighten this out:

“My CAC had expired days earlier so I contacted an issuing office to get a replacement. A clerk in the ID card office informed me that all appointments had to be made online using the intranet. Yet, my expired CAC prevented me from using the intranet system. In spite of my predicament the clerk told me, “Our policy requires all appointments to be scheduled online. If you are unable to use the intranet, then there is nothing more I can do.” It sounded like the beginning of an Abbott and Costello routine.”

“Rather than fight this particular battle, I decided to renew my CAC at another issuing office. While there, I was asked to produce a picture ID. I showed my state driver’s license. I was then asked for a second form of ID and was told the CAC was not acceptable since it expired five days earlier. A week earlier it would have been valid, but on this day it was deemed worthless. So I showed the clerk my company-issued ID card that looked as though it was made on an office computer and laminated at the local Kinko’s. As a matter of fact, that was exactly how that ID was manufactured. But it was good enough. The clerk accepted the flimsy company ID over the just-expired military CAC.”

Hyman concludes,

“What makes this episode even sadder is that the military CAC is generally not accepted as a valid form of identification for use by visitors to the Pentagon. Visitors must also have a Pentagon-issued ID or another form of identification such as a state driver’s license. The reason, according to a security officer, is that at least one machine that manufactures CACs and several hundred blank CACs are missing and presumed to have been stolen. Security officials do not know which CAC is valid and which is a forgery.”

The latter claim is nonsensical and shows that the security officials Hyman chats with are misinforming him about how his CAC works. This, too, expresses a common misconception: that possession of the card is the only thing that verifies identity.

To his point about the pains of standing in line to renew something only to find that you don’t have the right materials: I can empathize with this, but I cannot gather what rules Hyman thinks are silly, and which are reasonable. Is he arguing that he shouldn’t have to have a CAC, or that he should be able to use his expired CAC, by itself, for renewal? And what does this have to do with policy created by top-level military and government officials?

What is clear from reading the piece is that he doesn’t like the rules much because he doesn’t understand why they are in place. He wanted an exception so he could use his expired CAC. Similarly, in another of his examples, he complains that his wife couldn’t renew her own CAC using an expired passport.

There are two fundamental questions that would help Hyman better appreciate these rules: Why are identification badges, such as CAC cards, used? And, how is the true identity of a badge-holder verified? In other words, what is a CAC good for anyways?

The military provides several resources for answering these questions. In fact, had Hyman consulted these, or even unofficial resources, at any time before his CAC expired, he would have had less of a hassle renewing it.

Identity, and the privileges we associate with it, is an abstract thing that is difficult to verify. The best way for a large institution to verify a person’s identity is to gather the various artifacts of identity, such as a state driver’s license, and grade both the validity of each item and the authority of the institution that issued it. The bureaucratic pronouncements on this process (i.e. presidential directives and policies) say that the best way to verify the identity and authorization of millions of people is to create a system of rules that makes the procedures repeatable, reliable, and safe. (One such rule may reason that an expired identity artifact should not be considered valid, even if it was valid yesterday.)

Now, the process of using a CAC card is not as simple as it could be. Systems that use badges for the identification of people and the verification of people’s permissions and authority are complex and imperfect, but this is not a problem of bureaucracy. It’s more a matter of improving these systems for most users and reminding users, like Hyman, why they were given a badge to begin with.

Interview with Eugene Spafford

TrustBearer is located in Indiana and, as might be expected, several members of the company, including its founder, are graduates of Purdue University in West Lafayette, IN. A couple of TrustBearer’s Purdue alums studied at the Center for Education and Research in Information Assurance and Security (CERIAS), under the direction of Gene Spafford.

Gene Spafford is a well-known programmer, researcher, and educator in the field of computer and information security. He is perhaps best known for his analysis of the first internet-distributed worm, the 1988 Morris worm. He also has a knack for punchy security metaphors.

Spafford was recently interviewed by Tom Field of the Information Security Media Group. It’s a thoughtful, thorough interview, which is well worth sharing. The subject of the interview concerns information assurance education. Spafford was asked about the current state of information assurance:

SPAFFORD: Well, it is still rather chaotic. There are a range of issues and priorities within the field where education can be directed; some of the education is directed towards people who are practitioners, who are going to be on the front lines running systems. Some are oriented towards management-type positions that are setting policies and ensuring compliance. And there still is a community focused on the research aspects, more how to solve problems that are just emerging.

We don’t really have a common curriculum that runs across these, although there are a couple of efforts underway to try to define parts of it, and it isn’t really certain what the best practices are, what the background expertise should be for these positions. So it is still an area that is evolving quite rapidly.

In the interview, Spafford also talks about how information assurance and security have changed over the past couple of decades:

When I really saw the start of this field in the late 80’s and early 90’s, most of the people who were involved had a deep understanding of issues of machine architecture, encoding, network protocols, and really understood the systems at a low-level. What we see now for many educational institutions is they are focusing on high-level applications, web security, JAVA, running prepackaged firewalls and IDS systems, and many of the people going to that educational path are not exposed to those low-level details, even though some of the attackers are exploiting those low-level details. So we have seen a split off of that kind of expertise in two areas, both the research arena and also some in the forensics arena.

But of course the field has also grown; the level of threat has changed significantly. If we go from the late 80’s/early 90’s, there wasn’t any commercial use of the Internet, and it didn’t have the global reach it does now. So the issues of social engineering, fraud, phishing, many of the other kinds of false information presentation and mailed-around exploits didn’t exist back then. So we have seen a huge evolution in the threat picture, in the target set, and in the overall understanding of what security in computing is all about.

Related Links

Spafford Interview on C-SPAN: An expansive (30 min) general-interest interview with Spafford on the state of internet security and identity protection.

Do We Need a New Internet?: A New York Times piece on the future of internet security, including a discussion of Spafford’s work.

Healthcare PKI in Denmark

In this post, I muse on Denmark’s implementation of a country-wide system for secure, up-to-date sharing of EMRs and patient identity federation. But I primarily want to share some links for those interested in what they are doing:

A Cute Introduction

Last week Barack Obama visited Copenhagen to support his home city of Chicago’s bid to host the 2016 Olympics. Later this year the U.S. President will meet with international leaders in Copenhagen for a UN summit, negotiating the successor to the Kyoto Protocol.

In U.S. political news, the international happenings in Denmark have offered a nice break from the ongoing, rancorous national debate over reforming the U.S. health care system. Political events have stirred a broader conversation about the overall state of American health care, such as the cost and effectiveness of the current system. In a moment of free association, the events in Denmark reminded me of some interesting things about that nation’s health care system: the Danes are rather progressive (no, not because they’ve socialized; I’ll leave that matter entirely aside) in regard to their health care IT infrastructure.

What is Denmark Doing?

Denmark’s system is interesting, so I’ll share what I’ve learned of the nation’s overall approach to health care IT and, in greater detail, discuss their implementation of PKI.

There are many Danish organizations involved with the reform of health care IT. Foremost are MedCom, the Danish Centre for Health Telematics, which is the coordinating organization for health care in Denmark and manager of the Danish Health Data Network; the National Board of Health for Denmark, which developed the data model and terminology server for the system and leads the country’s overall health IT strategy; and the Ministry of Science, Technology and Innovation (MVTU) in Denmark, which develops most of Denmark’s technical standards and recommended a standard for Service-Oriented Architecture (SOA) identity federation to be used in various Danish systems.

The National Board of Health’s stated goal for the reformed system was “to provide a connected health care sector in which health professionals have access to all relevant EHR data regardless of where citizens seek treatment and no matter where or when this information was registered.” Lofty, indeed 1. Unlike most countries, though, Denmark has robust broadband access in most of the country. And most general practices and hospitals already use electronic medical records (EMRs). The National Board of Health knew it would need to implement a nationwide SOA for the secure web sharing of data.

Implementation of PKI

Denmark built its PKI on top of its existing virtual private network (VPN) architecture, which is available to all health care providers in the country and was already in use by many for remote collaboration. At the behest of MVTU, SAML was selected as the framework for identity federation and the exchange of authentication assertions. Health care professionals are issued a DanID, an X.509 certificate from the Danish OCES CA. The following steps explain how authentication is performed between Danish health systems 2:

  1. User authenticates as part of login to the local EHR system, and a digitally signed SAML assertion is created.
    – this is a SAML security token, referred to as a virtual health professional identity card.
  2. A direct request is made to a central security token service (STS), which checks the validity of the local system’s digital signature, the user’s signature, certificate validity and revocation status, and core certificate attributes3.
  3. STS signs the SAML token and sends a response to the local system.
  4. The SAML security token can be used until it expires (after 24 hours).
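The steps above can be sketched as follows, with HMACs standing in for the XML signatures a real SAML STS would verify; all keys, field names, and formats here are illustrative:

```python
import hmac, hashlib, time

# Simplified model of the four steps above. Keys and formats are
# illustrative; real deployments use X.509 certificates and XML-DSig.
LOCAL_KEY, STS_KEY = b"local-system-key", b"sts-key"

def sign(key, payload):
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# 1. Local EHR login produces a signed assertion (the "virtual health
#    professional identity card").
assertion = b"user=dr.jensen;issued=%d" % int(time.time())
token = {"assertion": assertion, "sig": sign(LOCAL_KEY, assertion)}

# 2-3. The STS checks the local system's signature, then countersigns.
assert hmac.compare_digest(token["sig"], sign(LOCAL_KEY, token["assertion"]))
token["sts_sig"] = sign(STS_KEY, token["assertion"])

# 4. Relying systems accept the token until it expires (24 h here).
issued = int(token["assertion"].split(b"issued=")[1])
valid = time.time() - issued < 24 * 3600
print(valid)
```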

Denmark PKI

I’m not sure what plans Denmark has for the authentication of everyday citizens to health care services and portals4. The foundations are certainly in place. The infrastructure for the clinical exchange of medical records, which utilizes the Danish Central Person Registry (CPR) number, provides a unique identifier for all national patients. There is a public portal for Danish citizens where patients can access some of their health information, receive online consultations, schedule health services, and renew prescriptions and treatments. While Denmark does not issue electronic ID cards, each citizen is given a digital certificate, which is automatically derived from that citizen’s CPR number. With a combination of these parts, each Danish citizen could use their digital certificate for authentication to the portal and for the signing of health documents.

Lesson from Denmark’s System?

What can be learned from Denmark? Well, one could try to point out the things Denmark has done right, as Gartner did in their study, with findings that may sound either obvious or made up: Denmark used a “[g]radual approach with realistic time frames”; they gave “incentives to vendors”; they used a “project-based approach”; they “[kept] an appropriate balance between central coordination and local leadership”; and the country has a “culture of consensus.”

As all observers have pointed out, it’s too early to tell what improvements the reformed IT changes have made. What Denmark seems to have done right is to start with a basic but sound architecture that makes use of existing infrastructure and technologies. Similarly, they have worked to make the systems simple, affordable, and feasible for all of the country’s health providers, using open standards and technologies.

Beyond the broader success of the program, I was interested to understand how adoption and use of the PKI has gone. But it seems too early to ascertain the problems with the reformed system or understand the parts of the system that will need to be improved. From TrustBearer’s perspective, we are interested in problems experienced while deploying and using PKI: issues such as interoperability between relying systems, certificate policies, certificate validation and renewal, distinguishing between levels of identity assurance, and usability for end-users. I could not find much information in regard to these issues in the Danish system, so this will be a topic left for future blog posts. One thing of note was that developers involved in the Danish project found some things lacking in the SAML/XML schema, because it was not possible to express certain types of requirements and policies as part of an authentication/authorization assertion5. (This is related, rather loosely, to a problem TrustBearer was trying to solve in another context, signifying the strength of an authentication method in the OpenID Provider Authentication Policy Extension.)

1. A Federation of Web Services for Danish Health Care

2. As outlined in A Federation of Web Services for Danish Health Care.

3. Exchange of tokens over SOAP.

4. There is at least one pilot of software-certificate-based PKI access for outpatients.


Recap of the August 2009 Government Smart Card IAB Meeting

It’s been a while. I’ve had a few posts queued up to write, and this was one of the first. I try to attend the IAB meetings when possible, but this past August meeting was the first that I’ve been to since March. As most people in our niche identity-plus-smart-card government industry know, these meetings are a good opportunity to catch up with colleagues and hear updates about progress at various agencies. I think all future IAB meetings should be hosted at the American Institute of Architects’ second-floor conference room. The concentric-circle seating layout with desks is excellent.

Slides and audio from the afternoon’s presentations are available from FIPS (thanks, Avisian).

This August’s meeting kicked off with a wholehearted update from USDA’s Owen Unangst. Owen started with a historical overview of USDA’s Identity and Access Management vision. I liked his inverted-pyramid diagram that outlined this vision. Everything begins at the base with Identity. Identity existed long before HSPD-12 was announced. HSPD-12 mainly addressed the second layer, Credentials, with the PIV specification. USDA has been busy implementing the Accounts, Authorization, and Access Control layers atop credentials. It’s impressive that they’ve made progress with both logical and physical access systems, and that these two historically disparate systems appear to be tightly integrated. I also found it interesting that the topmost layer, Application Integration, is only in the earliest planning stages. Implementation is far from complete.

My biggest takeaway from Owen’s presentation was the method and time that went into each of their strategic planning cycles. While a lot of this is classic PMP stuff, what was interesting is that Owen said one of these cycles (Business Requirements Analysis, Gap Analysis, Portfolio Selection, and Portfolio Assessment) should never take more than three months. The output of a cycle is a business case, roadmap, architecture, and project plan. Three months should be more than enough time to make a decision and lay out a plan to execute it. In my mind, this is why USDA will be successful in implementing their Identity and Access Management vision.

Tim Baldridge of NASA followed Owen’s talk. Tim gave a brief presentation about how a single PIV card will eventually be trusted across multiple domains and agencies, not just by the agency that issued the card. Today, some individuals are being issued cards from each domain to which they need access. Tim uses an example of a doctor from HHS who has both an HHS and DoD PIV card. Federal PKI Trust Anchors such as the Common Policy (Federal Root CA) and Federal Bridge CA (FBCA) will provide the technical infrastructure that will enable this to happen. There is still quite a bit of work to be done here, but it’s good to see that it is on the radar of leaders in the HSPD-12 space. By the October IAB meeting, Tim hopes to be able to demonstrate certificate interoperability in person. I’m looking forward to seeing this demonstration.

Bill MacGregor, an IAB regular from NIST, gave status updates on a few projects and publications. The NIST SP 800-73-3 draft is open for public comment (actually, it looks like comments were due on 13 September). This isn’t a huge update to 800-73. It looks like there are some good things for PIV-Interoperable cards (a UUID definition consistent with the NFI spec), plus some card-lifecycle items such as “on-card retention of historical keys”. The important news is that this update should have no impact on already-issued cards.

NIST also has released an Interagency Report on the Use of ISO/IEC 24727 (NIST IR 7611). ISO/IEC 24727 helps desktop applications discover and talk to the growing number of types of identity smart cards. In the introduction of NIST IR 7611, the Transportation Worker Identification Credential is noted as similar to, but technically different from, a PIV smart card. “The ISO/IEC 24727 framework allows any client-application to communicate with any card-application.” The document describes the structure of the NIST-developed 24727 middleware stack that was built to communicate with PIV cards. Much of this work was based on the existing NIST PIV middleware demo. If possible, the source code will be released. What does “if possible” mean, Bill? Interesting stuff for folks like us.

Bill then offered a reminder in light of some recent press about RFID skimming: “Are they talking about PIV?” Probably not. Not all RFID skimming is the same; not everything can be skimmed. He suggests re-reading SP 800-116, Section 4, Appendix A, to help decide on the right authentication assurance level. For example, if you’re just reading the CHUID from a card, you don’t have very high assurance that the card was not copied. Verify the CHUID signature, or even better, perform a mutual challenge-response with the card for much higher assurance that it was not copied.
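To illustrate Bill’s point with a toy model (not PIV’s actual protocol): a static identifier can be replayed by a cloned card, while a challenge-response bound to a key held on the genuine card cannot. Here an HMAC stands in for the card’s cryptographic operation, and all names are made up for the sketch:

```python
import hmac, hashlib, os

# Toy model: the genuine card holds a secret key; a clone does not.
CARD_KEY = os.urandom(32)          # known only to the genuine card

def card_respond(key, challenge):
    """What a card computes when the reader issues a challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(expected_key, challenge, response):
    """Reader recomputes the expected response and compares."""
    return hmac.compare_digest(card_respond(expected_key, challenge), response)

challenge = os.urandom(16)
genuine = card_respond(CARD_KEY, challenge)
cloned = card_respond(os.urandom(32), challenge)   # clone lacks the key

print(reader_verify(CARD_KEY, challenge, genuine))  # True
print(reader_verify(CARD_KEY, challenge, cloned))   # False
```

A copied CHUID is like the cloned card here: it reproduces the static data but cannot answer a fresh challenge.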

Chris Lounden finished the day by giving an update on what GSA’s ICAM group has been up to. He spoke about ICAM’s goals of making government more transparent to citizens by making it easier to access government websites and by leveraging various Web 2.0 technologies. Chris made it very clear that ICAM’s current focus is on portable identity for non-PKI use at OMB Level of Assurance 3 and below. The concepts introduced are not attempting to redefine or replace what PIV or CAC provide. When implemented correctly, applications can reach Level of Assurance 4 with PIV or CAC.

ICAM’s approach to allowing portable citizen identities in the federal government is to “Adopt technologies in use by industry (Scheme Adoption)” and “Adopt industry Trust Models (Trust Framework Adoption)”. To assist in Scheme Adoption, ICAM is developing identity scheme profiles for OpenID, Information Card, and SAML (WS-Federation is to follow). To assist in Trust Framework Adoption, ICAM has published the “Federal ICAM Trust Framework Provider Adoption Process”. The big deal here is that participation is expected from InCommon, the OpenID Foundation, the Information Card Foundation, and Liberty Alliance / Kantara.

This is old news now, but ICAM and the participating bodies mentioned above made a big splash about this earlier in the month at the Gov 2.0 conference. I’ll be sharing my thoughts on Open Identity for Open Government in an upcoming blog post. As mentioned at the beginning of the post, all of the presentations (from which I’ve referenced heavily) and audio recordings from the IAB meeting are available from FIPS. I’ll look forward to seeing more familiar faces next month at the in-person IAB meeting at the Smart Card Alliance’s Smart Cards in Government conference (Oct. 27-30).

Zittrain on Privacy and Security in the Chrome OS

What happens to consumers’ privacy and security when the web is their operating system?

In the New York Times this week, Jonathan Zittrain, professor of Internet Law at Harvard Law School, co-director of the effort, and author of The Future of the Internet— and How to Stop It, offers a forward-looking, non-technical review of the Google Chrome OS, which was officially announced last week.

Many people consider this development to be as sensible and inevitable as the move from answering machines to voicemail. With your stuff in the cloud, it’s not a catastrophe to lose your laptop, any more than losing your glasses would permanently destroy your vision. In addition, as more and more of our information is gathered from and shared with others — through Facebook, MySpace or Twitter — having it all online can make a lot of sense.

The cloud, however, comes with real dangers.

Zittrain's 2008 book about the transformation of PCs to portable, web appliances.

The danger Zittrain foresees is manifold. As he expressed in The Future of the Internet, Zittrain worries that we-as-users will hastily adopt portable, ‘connected’ computers, like Apple’s iPhone, potentially forgoing much of the software and services offered by today’s Internet.

To further facilitate glitch-free operation, devices are built to allow no one but the vendor to change them. Users are also now able to ask for the appliancization of their own PCs, in the process forfeiting the ability to easily install new code themselves. In a development reminiscent of the old days of AOL and CompuServe, it is increasingly possible to use a PC as a mere dumb terminal to access Web sites with interactivity but with little room for tinkering. (“Web 2.0” is a new buzzword that celebrates this migration of applications traditionally found on the PC onto the Internet. Confusingly, the term also refers to the separate phenomenon of increased user-generated content and indices on the Web—such as relying on user-provided tags to label photographs.) New information appliances that are tethered to their makers, including PCs and Web sites refashioned in this mold, are tempting solutions for frustrated consumers and businesses.

But we have to expect that the Chrome OS will be a fundamentally open system, allowing users to install any software and get pretty much anywhere on the web. The danger with the Chrome OS, then, in Zittrain’s view, is more an issue with the current state of the internet: the Internet was not designed with privacy and security in mind:

Some [dangers] are in plain view. If you entrust your data to others, they can let you down or outright betray you. For example, if your favorite music is rented or authorized from an online subscription service rather than freely in your custody as a compact disc or an MP3 file on your hard drive, you can lose your music if you fall behind on your payments — or if the vendor goes bankrupt or loses interest in the service. Last week Amazon apparently conveyed a publisher’s change-of-heart to owners of its Kindle e-book reader: some purchasers of Orwell’s “1984” found it removed from their devices, with nothing to show for their purchase other than a refund. (Orwell would be amused.)

Worse, data stored online has less privacy protection both in practice and under the law. A hacker recently guessed the password to the personal e-mail account of a Twitter employee, and was thus able to extract the employee’s Google password. That in turn compromised a trove of Twitter’s corporate documents stored too conveniently in the cloud. Before, the bad guys usually needed to get their hands on people’s computers to see their secrets; in today’s cloud all you need is a password.

Thanks in part to the Patriot Act, the federal government has been able to demand some details of your online activities from service providers — and not to tell you about it. There have been thousands of such requests lodged since the law was passed, and the F.B.I.’s own audits have shown that there can be plenty of overreach — perhaps wholly inadvertent — in requests like these.

Now, Zittrain points out that consumer laws can regulate many of these sorts of problems. But he’s arguing that the gatekeepers of the net (i.e., Microsoft, Amazon, Google) will improve security and privacy for only select applications, leaving the rest of the web in the dust.

Emerging Technology Presentation from CTST

Earlier this week I gave a presentation on an Emerging Technology panel at Card Tech Secure Tech (CTST) in New Orleans. Much of the content was taken from the Virginia Security Summit presentation given a week prior, but I elaborated on using smart cards for strong authentication. A couple of the slides got into using digital certificates to prove someone’s “real” identity to a relying party using OpenID extensions and digital certificate path discovery & validation.

Where do we go from here? I would like to see some of the identity verification concepts that I touched on in the presentation be tested in a pilot. There are also opportunities here to evolve the OpenID specs and extensions, such as PAPE. TrustBearer would like to continue this discussion and explore some pilot ideas. Contact me if you are interested.