Exploring Certificate Transparency – Part 2

[Update 2/23/2014: The previous version of CTPyClient monitor.py was not accurately parsing subjectAltName extension data from retrieved certificates. This has been fixed in the latest commit.]

Since my last post, I’ve finished up the first version of my CTPyClient utilities. My primary focus was to answer the basic question: how can I determine if a certificate has been issued by a CA other than the one I purchased my certificate from? Recall from past CA mis-issuance incidents (ComodoHacker, DigiNotar) that the goal of CT is to provide a means for site owners to determine whether a CA has, accidentally or as a result of a compromise, issued an unauthorized certificate for their domains.

The /ct/v1/get-entries log client method is used to fetch log entries from a CT log server. Each log entry has two parts: a MerkleTreeLeaf structure and an extra_data structure that contains a certificate chain. The MerkleTreeLeaf is a composite structure containing a timestamp, which indicates when the certificate was submitted to the log server, as well as a copy of the end entity certificate that was submitted. The structure is delivered as a Base64-encoded byte string. Within this byte string, the actual certificate is encoded in DER format, using the same structure required by the TLS specification (RFC 5246). The extra_data structure consists of the certificates that were used to issue the end entity certificate, with the exception of the root certificate. These certificates are also encoded using the format specified by TLS.
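To make that layout concrete, here is a rough sketch of how the MerkleTreeLeaf for an x509_entry can be unpacked in Python 3 (field offsets follow RFC 6962; this ignores precert entries and the trailing extensions field, and is not the exact code from CTPyClient):

```python
import base64
import struct

def parse_merkle_tree_leaf(leaf_b64):
    """Parse the MerkleTreeLeaf structure from a get-entries response.

    Layout (RFC 6962): 1-byte version, 1-byte leaf type, 8-byte
    timestamp (milliseconds since the Unix epoch), 2-byte entry type,
    then for an x509_entry a 3-byte length prefix followed by the DER
    encoded certificate.
    """
    raw = base64.b64decode(leaf_b64)
    version, leaf_type = raw[0], raw[1]
    timestamp, entry_type = struct.unpack(">QH", raw[2:12])
    der_cert = None
    if entry_type == 0:  # x509_entry (precert entries are laid out differently)
        cert_len = int.from_bytes(raw[12:15], "big")
        der_cert = raw[15:15 + cert_len]
    return {
        "version": version,
        "leaf_type": leaf_type,
        "timestamp": timestamp,
        "entry_type": entry_type,
        "certificate": der_cert,  # DER bytes, or None for precert entries
    }
```

From here, the DER bytes can be handed to any X.509 parser to pull out fields like subjectAltName.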

The get-entries method does not provide a ‘request/response’ mechanism to determine if a given certificate is present in the log. Rather, the get-entries method expects the client to specify a range of log entries to fetch. It is the client’s job to parse the results of get-entries to look for certificates of interest. This design decision makes sense, since the log server is designed to be just that, a log. The protocol is designed for efficient data retrieval, not for supporting queries. It is expected that as CT’s use grows, other interested parties will build out databases that contain issued certificates, using one or more CT log servers as input.

In order to specify a range of log entries to request, you first have to ask the log server how many log entries are present. This can be accomplished using the /ct/v1/get-sth method, which returns the log’s current signed tree head (STH); the STH includes the number of entries in the log (the tree size) at the moment it was generated. You can then specify the range of log entries to retrieve based on an offset from this number.
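In rough Python, the flow looks something like the following (the function names are mine, not CTPyClient’s; error handling omitted, and the batch size is an assumption since each log caps how many entries one get-entries request may return):

```python
import json
from urllib.request import urlopen

def get_tree_size(log_url):
    """Fetch the signed tree head and return the current tree size.

    `log_url` is the base URL of a CT log server.
    """
    with urlopen(log_url.rstrip("/") + "/ct/v1/get-sth") as resp:
        return json.load(resp)["tree_size"]

def entry_ranges(tree_size, lookback, batch=1000):
    """Yield inclusive (start, end) index pairs covering the most
    recent `lookback` entries, split into batches for get-entries.
    """
    start = max(tree_size - lookback, 0)
    while start < tree_size:
        end = min(start + batch, tree_size) - 1
        yield (start, end)
        start = end + 1
```

Each (start, end) pair then becomes one /ct/v1/get-entries?start=…&end=… request.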

The monitor.py utility that is part of CTPyClient implements this flow. Feel free to download the utility and kick the tires!

Next time, I’ll explore some of the other CT log client functions and discuss how they can be used to further detect mis-issuance events.

Exploring Certificate Transparency

Greetings to Securism readers and apologies for the delay in posting fresh content. As usual, my annual goal is to increase the frequency of posts to Securism.

As I mentioned in my previous entry on Certificate Transparency, I’m a big fan of this approach for validating that SSL certificates are not being mis-issued by potentially compromised or unethical public Certificate Authorities. Since that last post, RFC 6962, formally defining Certificate Transparency, has been issued, and several log servers are currently in operation. Google recently announced that a new log server has been brought online (https://groups.google.com/forum/#!topic/certificate-transparency/I9czVN5LWps).

To explore CT in a little more detail, I’ve started working on a few Python client utilities to query log servers and locate certificates that have been issued for a particular domain; in my case I administer https://vivaciousmezzo.com, so I want to ensure that only the CA that issued my certificate (StartCom) appears in the CT log. I expect that this type of check will become an important part of maintaining publicly trusted HTTPS websites.

To follow along with my CT work, feel free to fork my Github repo.

Integrating two-factor authentication with existing directory services

When I’m considering how best to deploy a two factor authentication solution in a client’s network, one key question that is frequently not considered carefully is whether the solution should be integrated with an existing identity store, such as an LDAP or Active Directory instance, or deployed independently with its own dedicated identity store, such as an internal database or standalone directory server.

Generally speaking, when you re-use an existing identity store that is well maintained, your new two factor solution can take advantage of the existing processes that are in place to manage user identities. For example, if a user account in Active Directory is disabled or deleted, the two factor solution should be able to detect this and prevent the terminated user account from successfully authenticating. Similarly, when a new user is to be assigned a two factor authentication token, if the user account is already provisioned in Active Directory, then there is no need to create a separate identity in the two factor authentication system. When using outsourced providers for your two factor authentication solution, if the providers use your internal identity store, you retain an additional level of control in the event that you need to disable a user’s two factor authentication.
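As a concrete illustration, a two factor solution that reads accounts from AD over LDAP can honor the disabled flag by checking the ACCOUNTDISABLE bit of the userAccountControl attribute. The attribute and bit value are standard AD; the helper itself is just a sketch:

```python
# AD stores account status as bit flags in the userAccountControl
# attribute; bit 0x2 (ACCOUNTDISABLE) marks a disabled account.
ACCOUNTDISABLE = 0x0002

def is_disabled(user_account_control):
    """Return True if this userAccountControl value marks the AD
    account as disabled, meaning the 2FA system should refuse it."""
    return bool(int(user_account_control) & ACCOUNTDISABLE)
```

A deleted account is even simpler: the LDAP search for the user returns no entry at all, which the two factor solution should treat as a hard authentication failure.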

However, there are other factors that should be considered when integrating with an existing identity store. First, the overall security of the user accounts in the two factor solution is essentially reduced to the security of the identity store; i.e., if an attacker compromises AD and adds new user accounts, it may be possible for the attacker to request and successfully obtain new tokens. Second, account provisioning workflows become slightly more complicated; one possible scenario is that a user needs a token but does not have an AD account. Finally, the availability of the two factor solution is reduced to the availability of the identity store. This is usually not an issue with enterprise-wide corporate directory servers, but it should be considered nonetheless.


All security folks should try the Matasano Crypto Challenges

So after seeing the HN article on the Matasano Crypto Challenges, I decided to tackle them myself. These days I don’t write much code, as most of my client-facing work is design and implementation. After spending a week and getting through just over half of the first problem set, I realized that spending some time focusing on cryptography basics at the bit level is incredibly valuable. Every developer and security pro knows that it’s almost always a mistake to roll your own crypto code, so we rely on popular crypto libraries instead, like OpenSSL, BouncyCastle and the like. However, those libraries generally have very few safeguards to protect you from doing something incredibly stupid, like choosing the wrong type of encryption algorithm for your application.

The Crypto Challenges help point out how easy it is to break poorly implemented cryptography while also forcing you to consider how things work at the bit level. Even if you don’t write code as part of your day-to-day work, understanding the basics of how crypto code works is incredibly useful. Plus, as a bonus, you can use your new skills in more practical day-to-day work, like writing analysis tools that dissect logs, packet captures, etc.
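If you want a taste before committing, one of the early Set 1 problems boils down to breaking single-byte XOR by scoring candidate plaintexts for how English-like they look. A quick sketch (my own deliberately crude scorer, not a model solution; the challenges reward building a better frequency model):

```python
def break_single_byte_xor(ciphertext):
    """Try all 256 single-byte keys against the ciphertext and return
    the (key, plaintext) pair whose plaintext scores most English-like.
    """
    def score(text):
        # Crude frequency heuristic: count the most common English
        # letters and spaces ("etaoin shrdlu").
        return sum(c in b"etaoin shrdlu" for c in text)

    candidates = (
        (key, bytes(b ^ key for b in ciphertext)) for key in range(256)
    )
    return max(candidates, key=lambda pair: score(pair[1]))
```

Ten lines of code, and suddenly the difference between a keystream and a repeated key byte is very real to you.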

I hope to see the number of folks working on the level 0 challenges grow much higher in the next few weeks!

QuickTip – pam_radius_auth proxy to SecurID

This blog post is a bit different from my typical entry, mainly because I simply need to record some troubleshooting steps I performed to get RADIUS authentication working for pam_radius_auth against a SecurID RADIUS server backend.
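For context, the PAM change itself is a one-liner in /etc/pam.d/sshd, roughly like the following (module options such as retry counts vary by setup, and the RADIUS server address and shared secret live in pam_radius_auth’s own config file, so treat this as a sketch rather than my exact config):

```
auth    sufficient    pam_radius_auth.so
```

With that in place, sshd hands the password off to the RADIUS server before falling through to the remaining auth modules.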

My basic sshd config file in /etc/pam.d was modified to add pam_radius_auth.so as a sufficient authentication method. But, by default, the Access-Challenge messages generated by the RADIUS server were not being passed back to my ssh login screen, and authentication was failing silently. It turns out that if you want to actually view responses from the RADIUS server, so you can do things like enter new PINs when your token is reset in new PIN mode, you need to modify your sshd_config to enable challenge-response authentication:

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication yes

Once this setting was enabled, I was able to view messages generated by the SecurID RADIUS server and was then able to complete the authentication sequence as expected.

Hope this helps someone else who ends up googling for the same problem!


Using an HSM doesn’t automatically make you more ‘secure’

Reading the recent disclosure by Bit9 and some of the subsequent comments on that blog reminded me of my experience using hardware security modules (HSM) for key storage. I discussed how HSMs work in one of my previous posts, but today I want to discuss how HSMs may or may not necessarily make your environment more secure.

HSMs are great at storing key material in such a way that direct exfiltration of key material from the HSM is, for all intents and purposes, completely impractical. HSM vendors go to great lengths to have their devices fully FIPS certified, so direct attacks against the hardware, side channel attacks, etc. are extremely unlikely to succeed. All HSM devices I’ve worked with implement ‘self destruct’ functionality as a requirement of FIPS 140-2 Level 3 compliance, which ensures that the HSM will zeroize itself before giving up key material.

To make key material within the HSM usable to applications, most HSMs ship with client libraries, typically implementing a PKCS#11 interface, that can be used to perform cryptographic operations like data encryption, digital signature creation and verification, etc. This interface is protected, typically with a password, to ensure that only authorized applications can establish connections to the HSM. In order to enable applications to access key material, HSMs typically enforce a process that requires the presence of multiple witnesses to ‘unlock’ the key material. Keys stored on HSMs that are unlocked and available to applications are generally referred to as ‘online’ keys, while keys that are still locked on the HSM and unavailable to applications are referred to as ‘offline’ keys. Now you know what CAs mean when they say their root keys are stored ‘offline’! :)

However, just like any other security control, HSMs are laughably ineffective if they are not operated in a secure fashion. First, it is generally true that offline keys stored in HSMs are inaccessible unless the HSM itself is somehow breached. However, online keys are accessible to any applications that have access to the credentials needed to authenticate to the HSM itself. In other words, practically speaking, access to key material stored in HSMs is reduced to the level of security applied to storing the credentials that applications use to access the HSM. After all, if you can get the HSM to digitally sign arbitrary data, such as malware or fraudulent certificate requests, by compromising applications that hold HSM credentials, why bother attacking the HSM itself?

If you truly want to ensure that your key material is stored in a secure fashion, HSMs can certainly help but you MUST be willing to invest in the operational procedures to use them securely, such as:

  • Configure all applications which use HSMs to require user input to enter HSM credentials when services are restarted as opposed to storing those credentials on disk.
  • Ensure that HSMs are configured to require strong authentication for client connections. Most HSMs I’ve worked with use mutual SSL authentication in addition to HSM specific credentials. Make sure that these credentials are adequately protected.
  • Segregate application servers with direct access to the HSM from other application servers in your network and carefully monitor them for unusual activity.
  • Monitor actions performed by the HSM on behalf of applications to detect abnormal activity, such as connection attempts from unauthorized hosts.
  • Implement 2FA for HSM administrative access and require the presence of witnesses to transition key material from ‘offline’ to ‘online’ states.
  • Regularly review audit logs from HSMs to look for unexpected actions such as key generation requests, backup/restore operations etc.

Finally, when thinking through the ramifications of operating an HSM securely, consider whether the risks that the HSM mitigates are truly worth the cost. After all, if you design your application carefully and have strong processes in place to protect credentials used to access HSMs, could those same processes suffice for storing the key material directly? Your HSM budget could potentially be more wisely spent elsewhere.

MiTM or not?

Welcome back Securism readers! It’s been a few months since my last post; hopefully this year I’ll be able to keep up a better regular cadence of posts.

This morning, I saw a HN article provocatively titled ‘For sale: Trusted root SSL CA signing certificate’! I wasn’t sure what this meant at first, since my first reaction was that a CA was going out of business and selling its keys! However, the real story is quite a bit more mundane. The story points out a CA that sells subordinate CA certificates to organizations, with the intent being that those organizations will issue certificates for their internal use.

Such a service offering is quite appealing to both the CA and the organization. For the CA, it offloads the responsibility of managing a high volume of certificate enrollment requests to the organization that operates the sub CA. The organization that operates the sub CA can issue certificates within the organization that are trusted by all relying parties without having to manually add its own CA certificate to the trust stores of all relying party software and devices (far from a trivial problem to solve, in my experience!).

So, what is the problem with this approach? The immediate issue that the HN crowd jumped on is MiTM. The fear is that such subordinate certificates could be used for MiTM purposes by the organization that operates the sub CA; that is, the organization could use its sub CA certificate to issue an SSL certificate for a domain that it does not control, such as *.google.com or another high value domain.

However, RFC 5280 (and in fact the older RFC 3280), which specifies the format of X.509 v3 certificates, defines an extension called Name Constraints. This extension can be used to restrict the set of domains for which a CA certificate may issue certificates. So in theory, as long as the root CA includes a Name Constraints extension in the organization’s sub CA certificate, the organization can only issue certificates for domains that it owns.
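The matching rule itself is straightforward. Here is an illustrative sketch of the dNSName check a validator performs against a permitted subtree (simplified; real validators must also handle excluded subtrees, IP address and directory name constraints):

```python
def satisfies_dns_constraint(dns_name, constraint):
    """Check a dNSName against a permitted name constraint.

    Per RFC 5280, a DNS name satisfies a constraint if it equals the
    constraint or can be formed by prepending labels to it: the
    constraint 'example.com' permits 'www.example.com' but not
    'bigexample.com'.
    """
    dns_name = dns_name.lower()
    constraint = constraint.lower()
    return dns_name == constraint or dns_name.endswith("." + constraint)
```

So a sub CA constrained to ‘example.com’ that tried to sign a cert for *.google.com would fail this check in any validator that enforces the extension.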

The issue with the Name Constraints extension is that, sadly, it is not reliably checked by all browser software. Efforts have been made by both browser vendors and CAs to change this, but as of right now you cannot assume that 100% of browsers will reliably enforce the extension.

Beyond the use of the extension, any root CA that offers such a program will of course impose tight audit standards on the organization operating a sub CA to ensure that certificates issued conform to requirements placed on the root CA.

So to revisit the original question, does this practice enable general MiTM attacks for arbitrary domains? I would argue that it does not, since the use of Name Constraints clearly indicates which domains the certificate is to be trusted for. Even though Name Constraints is not universally supported by all browsers, it is supported by enough of the major browsers that a user with one of them would eventually detect the MiTM attack. As Certificate Transparency and DANE become more ubiquitous, these concerns should become a non-issue. As with any other security control, offering a subordinate CA that is properly name constrained has trade-offs and risks that need to be balanced.

How I learned to stop worrying and love CT

Like any other security pro, I love a good technical fistfight, especially when both sides have perfectly valid points of view. Over the last few months, I’ve watched with great interest as the CA / Browser Forum has fundamentally reformed itself, as Mozilla has worked to add a somewhat controversial but hopefully effective mechanism for disclosing subCAs chained to public roots, and as a radically different but very refreshing approach, certificate transparency, has continued to develop.

I’m becoming more of a fan of CT as I see the approach discussed on IETF mail lists. Essentially, CT is a Google-led initiative to develop a public, append-only log of every issued certificate. The idea is that when a certificate is issued by a certificate authority or (and this is the really cool part) when a certificate is installed on a public server by a domain name holder, a cryptographic proof of the certificate issuance event is logged to a public CT server. Relying parties (browsers) can then cross-check the public log when they receive an SSL certificate for a particular domain to verify that the certificate is in fact in the log. In the current draft of the CT proposal, the proof from the public CT log server is delivered directly alongside the certificate. However, it is entirely feasible that clients could contact log servers directly if they wish to validate for themselves that the certificate has been logged.
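For the curious, the ‘cryptographic proof’ rests on a Merkle hash tree. The CT draft defines the tree hash exactly, and a direct (recursive, unoptimized) transcription shows the 0x00/0x01 domain separation between leaves and interior nodes:

```python
import hashlib

def merkle_root(leaves):
    """Compute the Merkle tree hash over a non-empty list of leaf byte
    strings, following the RFC 6962 conventions: leaves are hashed
    with a 0x00 prefix, interior nodes with a 0x01 prefix, and the
    tree splits at the largest power of two smaller than the leaf
    count.
    """
    if len(leaves) == 1:
        return hashlib.sha256(b"\x00" + leaves[0]).digest()
    k = 1
    while k * 2 < len(leaves):
        k *= 2
    left = merkle_root(leaves[:k])
    right = merkle_root(leaves[k:])
    return hashlib.sha256(b"\x01" + left + right).digest()
```

Because the root hash commits to every leaf, a log server cannot quietly drop or rewrite a logged certificate without changing the signed tree head it publishes.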

What makes this concept so cool to me is that it is a powerful tool for domain owners. Domain owners now have a single place that they can consult to detect mis-issued certificates. By regularly auditing public CT logs for certificates issued against their domains, domain owners can take action when mis-issued certificates are discovered.

The sense that I get from watching the discussions on the IETF mail lists is that public CAs are somewhat skeptical of the value of CT, which is completely understandable given the current business models for public CAs. However, I believe that CT does not really undermine the core competency and value that public CAs offer. Public CAs do an excellent job of verifying the identity of people/organizations that request public SSL certificates; domain owners who wish to demonstrate to their visitors that information contained in their public SSL certificates has been thoroughly vetted continue to benefit from public CA certificates. The fundamental issue that public CAs will continue to face is the fact that the current public CA trust model does not have any reliable, technical mechanism in place to prevent publicly trusted, compromised CAs from issuing certificates for any publicly visible domain. Revocation mechanisms unfortunately are largely useless for preventing this behavior in the first place; they are primarily a reactive tool to handle the compromise after the fact.

Google is planning on operating a public CT log; with any luck other large organizations with beefy enough infrastructure (hello Amazon!) and even public CAs will eventually operate their own CT log servers.

Thoughts on using enterprise identity sources for cloud services

A need that I’m starting to see more and more with my clients is the ability to use their own enterprise identity sources (Active Directory, LDAP servers etc.) and authentication solutions with externally hosted cloud services such as Office 365, Citrix GoToMyPC, and numerous other services. The desire to use their own identity sources is perfectly understandable, given that many organizations make significant investments in their identity and access management tools and workflows. Generally, the closer an organization’s employee IAM systems are to HR, management and other administrative personnel, the more accurate the data will be. This is especially important when considering that orphaned accounts of former employees have in the past been juicy attack vectors for malicious hackers.

So, given this desire, how can organizations integrate their internal identity sources with external services? The most common solution I’ve seen is a web enabled identity federation service such as Microsoft’s ADFS 2.0. I’ve also seen other solutions where a cloud based service uses a proprietary protocol to communicate with agents in the enterprise network.

Obviously, exposing internal identity sources to external partners increases the risks to the enterprise, but there are a few things that can be done to help mitigate those risks. First, when using web based identity federation services, be sure to use mutual SSL authentication, preferably with the partner using a client SSL certificate issued by your organization. Web identity federation services usually have some form of endpoint authentication built into the protocol, but mutual SSL authentication is a nice additional layer to have in place.

Second, consider deploying additional directory servers that contain copies of information from your main internal identity source. These servers can be synchronized with your main identity sources in such a way that only users that use the external service will have their accounts present in the directory server. Plus, if the lightweight directory server is compromised, the data in your main identity source is still safe (minus accounts compromised on the lightweight directory server of course!)

Finally, be sure to put some type of monitoring in place on your endpoint. The protocols supported by your identity source’s endpoint are well defined, so you should be able to write SIEM rules to detect unexpected web traffic targeting it. If you use a reverse proxy to terminate the SSL connection from the external source, you can do deep packet inspection on requests before forwarding them to the internal federation service for processing.
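As a toy illustration of the kind of rule I have in mind (the expected path list here is a guess at typical ADFS-style endpoints, not an authoritative inventory, and a real SIEM would express this in its own rule language):

```python
import re

# Paths we expect external partners to hit on a federation endpoint;
# anything else appearing in the access log is worth an alert.
EXPECTED_PATHS = re.compile(
    r"^/(adfs/ls|adfs/services/trust|federationmetadata)(/|$)",
    re.IGNORECASE,
)

def unexpected_requests(log_lines):
    """Given access-log lines of the form 'METHOD /path ...', return
    the lines whose path falls outside the expected federation URLs."""
    flagged = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2 or not EXPECTED_PATHS.match(parts[1]):
            flagged.append(line)
    return flagged
```

The point is simply that a federation endpoint has a very small, well-known URL surface, which makes allowlist-style detection unusually practical.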

While exposing your identity sources and authentication infrastructure to external partners sounds scary, it will become more and more of a requirement as enterprise services continue to migrate to the cloud. Stay ahead of the game and plan your architectures accordingly!

Using AD LDS to create new views of Active Directory data

During a recent engagement, I was integrating a web portal with the client’s Active Directory service to permit end users to access the portal using their Active Directory credentials. The web portal was coded to treat AD as a plain LDAP directory server, meaning that the portal didn’t support any of the native AD authentication methods (NTLM, Kerberos, etc.). Additionally, the portal assumed that user account objects were all stored in a single LDAP container object.

As any experienced consultant knows, assumptions almost never hold when you arrive onsite. In my case, the client’s AD infrastructure scattered user accounts throughout various OUs within the domain. In general, segregating user accounts by placing them in different OUs is a good security practice since it helps split user accounts across different administrative domains.

In order to create a view of the user accounts that satisfied the web portal’s assumptions, I investigated Microsoft’s Active Directory Lightweight Directory Services (AD LDS). Basically, AD LDS is a lightweight LDAP server that ships with a set of utilities permitting it to synchronize with a full blown Active Directory instance. Furthermore, an AD LDS instance can be created with a completely different DN structure, yet be synchronized with objects that reside in another DN structure in an Active Directory instance.

In addition, AD LDS can authenticate users against their domain credentials that are stored in the target Active Directory instance. This is made possible by using the userProxy class to store user accounts in AD LDS. The userProxy object basically permits AD LDS to act as a proxy to the backend AD instance. Thus, when a simple LDAP bind is performed against the AD LDS instance, it authenticates to the backend AD instance using native AD authentication methods.
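A userProxy entry looks roughly like the following LDIF (all of the names here are made up for illustration; the key detail is that the objectSid must match the SID of the corresponding user in the backing AD domain for bind redirection to work):

```ldif
# Hypothetical userProxy object in an AD LDS application partition
dn: CN=jsmith,OU=PortalUsers,DC=portal,DC=local
objectClass: userProxy
cn: jsmith
userPrincipalName: jsmith@corp.example.com
objectSid: <SID of the matching user in the backing AD domain>
```

When the portal performs a simple bind against this entry, AD LDS uses the objectSid to locate the real account in AD and verifies the password there.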

Using AD LDS as a proxy for user account information stored in Active Directory has several important advantages:

  • Attributes required by applications but not included in the AD schema can be added to the AD LDS schema without impacting the Active Directory schema.
  • Custom views of AD data can be created to place AD objects in different locations within the directory structure if needed by applications.
  • Active Directory data can be exposed to applications without exposing the AD server directly to applications.
  • If the backend AD servers aren’t configured to support LDAPS (!!), an AD LDS instance can be set up to provide LDAPS connections to applications.

Synchronizing AD LDS with Active Directory is fairly well documented; the biggest hurdle I had to overcome was identifying the correct set of attributes to include in my AD LDS userProxy objects. The following sites proved invaluable in getting my installation functional:

  • http://technet.microsoft.com/en-us/library/cc794836%28v=ws.10%29
  • http://support.microsoft.com/kb/923835
  • http://blogs.msdn.com/b/jeff/archive/2007/04/01/synchronize-active-directory-to-adam-with-adamsync-step-by-step.aspx
  • http://www.avantgardetechnologies.com.au/2011/06/ldap-error-occured-ldapaddsw-object.html

In summary, consider using AD LDS as a solution to re-use AD credentials for applications.
