UnboundID LDAP SDK for Java 7.0.0

We have just released version 7.0.0 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes for this release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • The LDAP SDK now requires Java 8 or later. Java 7 is no longer supported.
  • We improved the behavior of LDAP connection pools that are configured to invoke a health check when a connection is checked out of the pool. Previously, if a connection was found to be invalid during checkout, the LDAP SDK would create a new connection to replace it, but would continue iterating through other connections in the pool trying to find an existing valid connection. It will now return the newly created connection immediately without checking other existing connections, which can substantially reduce the time needed to check out a connection when many connections have been invalidated (e.g., by a server shutdown). See the sketch after this list for an example of configuring a checkout health check.
  • We added a new compare-ldap-schemas command-line tool that can be used to identify differences between the schemas of two LDAP servers.
  • We improved the behavior that the LDAP SDK uses when authenticating with the GSSAPI SASL mechanism. Previously, if you didn’t explicitly provide a JAAS configuration file to use for the attempt, the LDAP SDK would create a new one for each bind attempt. This would create a lot of temporary files that would need to be cleaned up when the JVM exited, and they might not get cleaned up properly if the JVM exits abnormally (e.g., if it’s killed or crashes). It also required a small amount of additional memory for each bind attempt, since the LDAP SDK had to remember another file to be deleted. Now, the LDAP SDK can reuse the same generated configuration file for all GSSAPI bind requests that use the same JAAS settings, which slightly improves performance and reduces memory and disk space consumption.
  • We added experimental client-side support for the relax rules control as defined in draft-zeilenga-ldap-relax-03. This draft doesn’t specify an OID for the control, but at least a couple of servers (OpenLDAP and ForgeRock OpenDJ) have implemented support for the control with an OID of 1.3.6.1.4.1.4203.666.5.12, so the LDAP SDK uses that OID as well.
  • We added client-side support for a number of proprietary controls used by the ForgeRock OpenDJ directory server. These include:

    • A transaction ID request control, which can be included in an operation request to provide a transaction ID that will appear in the access log message for that operation.
    • A replication repair request control, which can be included in a write request to indicate that the associated change should not be replicated.
    • Change sequence number request and response controls, which can be used with a write operation to obtain the replication CSN that the server assigned to that operation.
    • Affinity request control, which can be included in related requests sent through an LDAP proxy server to consistently route them to the same LDAP server instance.
  • We added connection pool health checks for use in conjunction with the Ping Identity Directory Server, including:

    • One that will attempt to determine whether there are any active alerts in the server that cause it to consider itself to be either degraded or unavailable.
    • One that will assess the replication backlog and can consider a server unavailable if it has too many outstanding changes, or if the oldest outstanding change was originally processed too long ago.
    • One that will attempt to determine whether the server is in lockdown mode.
  • We updated the CryptoHelper class to add convenience methods for generating SHA-256, SHA-384, and SHA-512 digests from byte arrays, strings, and files. There are also generic versions of these methods that can be used with user-specified digest algorithms.
  • We added methods for normalizing JSON values and JSON object filters. Among other things, this makes it possible to compare two JSON object filters to determine whether they are equivalent.
  • We updated the BouncyCastleFIPSHelper class to add a constant with the name of a system property that can be used to enable support for the MD5 digest algorithm, which may be needed if you’re using the 1.0.2.4 or later version of the bc-fips jar file and need to use the MD5 message digest for some reason.
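
As a concrete illustration of the connection pool improvement mentioned above, here’s a minimal sketch of configuring a pool with a health check that is invoked on checkout. The host, port, credentials, and entry DN are placeholders, and the seven-argument GetEntryLDAPConnectionPoolHealthCheck constructor is the one I recall from recent SDK versions; check the Javadoc for the full set of overloads.

    import com.unboundid.ldap.sdk.GetEntryLDAPConnectionPoolHealthCheck;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPConnectionPool;
    import com.unboundid.ldap.sdk.LDAPException;

    public final class PoolWithCheckoutHealthCheck
    {
      public static LDAPConnectionPool createPool()
             throws LDAPException
      {
        // Placeholder host, port, and credentials.
        LDAPConnection connection = new LDAPConnection("ds.example.com", 389,
             "uid=app,ou=Apps,dc=example,dc=com", "app-password");

        // A pool of up to ten connections seeded from the initial connection.
        LDAPConnectionPool pool = new LDAPConnectionPool(connection, 10);

        // Verify that a placeholder entry can be retrieved within 500 ms
        // whenever a connection is checked out (and also when connections are
        // created or an exception is caught).  With the 7.0.0 improvement, if
        // a checkout health check fails, the pool returns a freshly created
        // connection immediately instead of probing the remaining pooled
        // connections.
        pool.setHealthCheck(new GetEntryLDAPConnectionPoolHealthCheck(
             "dc=example,dc=com", // entry to retrieve
             500L,                // maximum response time in milliseconds
             true,                // invoke on create
             true,                // invoke on checkout
             false,               // invoke on release
             false,               // invoke for background checks
             true));              // invoke on exception

        return pool;
      }
    }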

Ping Identity Directory Server 10.0.0.0

We have just released version 10.0.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Important Notices

  • As of the 10.0 release, the Directory Server only supports Java versions 11 and 17. Support for Java 8 has been removed, as a critical component (the embedded web container we use to support HTTP requests, including the Directory REST API, SCIM, and the Administration Console) no longer supports Java 8.
  • As of the 10.0 release, we are no longer offering the Metrics Engine product as part of the Directory Server suite (the Directory Proxy Server and Synchronization Server are still included, as is the Server SDK for developing custom extensions). You should instead rely on the server’s ability to integrate with other monitoring software, through mechanisms like our support for OpenMetrics (used by Prometheus and other software), StatsD, and the Java Management Extensions (JMX).

Summary of New Features and Enhancements

  • Added support for inverted static groups [more information]
  • Added support for post-LDIF-export task processors, which can be used to perform custom processing after successfully exporting an LDIF file, including the option to upload the resulting file to an Amazon S3 bucket [more information]
  • Added a new log file rotation listener that can be used to upload newly rotated log files to a specified Amazon S3 bucket [more information]
  • Added a new amazon-s3-client command-line tool [more information]
  • Added authentication support to the Directory REST API [more information]
  • Added support for a generate access token request control [more information]
  • Added support for configuring the server with a single database cache that may be shared across all local DB backends [more information]
  • Added an option to automatically re-encode passwords after changing the configuration of the associated password storage scheme [more information]
  • Exposed a request-handler-per-connection configuration property in the LDAP connection handler configuration [more information]
  • Updated the encrypt-file tool to add a --re-encrypt argument [more information]
  • Updated the encrypt-file tool to add a --find-encrypted-files argument [more information]
  • Updated the replication server and replication domain configuration to add a new missing-changes-policy property that can be used to customize the way the server behaves when it detects that changes are missing, and the server will now remain available by default under a wider range of circumstances that may not represent actual problems
  • Significantly improved performance for creating a backup, restoring a backup, or performing online replica initialization
  • Significantly improved static group update performance
  • Improved performance for the process of validating the server state immediately after completing an update
  • Added a split-ldif tool that can be used to split a single LDIF file into multiple sets for use in setting up an entry-balanced deployment with the Directory Proxy Server
  • Updated the bcrypt password storage scheme to include support for the 2b variant (in addition to the existing 2y, 2a, and 2x variants)
  • Updated the HTTP connection handler to add an option for performing SNI hostname validation during TLS negotiation
  • Updated the backup tool to display a warning when creating a compressed backup of an encrypted backend, since encrypted backends cannot be effectively compressed, but attempting to do so will make the backup process take longer
  • Updated the dsreplication command so that it uses a separate log file per subcommand, and so that log files representing failed runs of the tool are archived rather than overwritten by subsequent runs
  • Removed the dsreplication remove-defunct-server subcommand, which is better provided through the dedicated remove-defunct-server tool
  • Removed the dsreplication cleanup-local-server subcommand, which is better provided through the remove-defunct-server --performLocalCleanup command
  • Updated dsreplication initialize-with-static-topology to add an --allowServerInstanceDelete argument that can be used to remove servers from the topology if they are not included in the provided JSON file
  • Updated dsreplication initialize-with-static-topology to add an --allowDomainIDReuse argument that can be used to allow domain IDs to be used with different base DNs
  • Updated the check-replication-domains tool so that it no longer requires the --serverRoot argument
  • Updated the replication server configuration to add an option that can be used to include information about all remote servers in monitor messages, which can be useful in large topologies where that can constitute a large amount of data
  • Added support for an access log field request control that can be used to include arbitrary fields in the log message for the associated operation
  • Updated the configuration API to treat patch operations with empty arrays as a means of resetting the associated configuration property
  • Added the ability to configure connect and response timeouts when connecting to certain external services over HTTP, including CyberArk Conjur instances, HashiCorp Vault instances, the Pwned Passwords service, and YubiKey OTP validation servers
  • Updated the Synchronization Server to improve performance when setting the startpoint to the end of the changelog for an Active Directory server
  • Reduced the default amount of memory allocated for the export-ldif and backup tools

Summary of Bug Fixes

  • Fixed an issue in which the Directory REST API could fail to strip out certain kinds of encoded passwords in responses to clients (although only to clients that were authorized to access those attributes)
  • Improved the way that the replication generation ID is computed, which can help ensure the same ID is generated across replicas when they are populated by LDIF import instead of online replica initialization
  • Fixed an issue that could cause an error while trying to initialize aggregate pass-through authentication handlers
  • Fixed an issue that could cause “invalid block type” errors when interacting with compressed files
  • Fixed an issue that could prevent the server from properly including an encrypted representation of the new password in the changelog entry for a password modify extended operation when the server was configured with the changelog password encryption plugin
  • Fixed an issue in which the server could fail to update a user’s password history on a password change that included a password update behavior request control indicating that the server should ignore password history violations
  • Fixed an issue that could cause the server to add two copies of the current password in the password history when changing a password with the password modify extended operation
  • Fixed an issue in which the server could incorrectly allow a user to set an empty password. Even though that password could not be used to authenticate, the server should not have allowed it to be set
  • Fixed an issue that could cause the dictionary password validator to incorrectly accept certain passwords that contained a dictionary word as a substring making up more than the maximum allowed percentage of the password
  • Fixed an issue in which the server could be unable to properly interpret the value of the allow-pre-encoded-passwords configuration property in password policies defined in user data that were created prior to the 9.3 release of the server
  • Fixed an issue in which the server may not have properly applied replace modifications for attributes with options
  • Fixed an issue in which the first unsuccessful bind attempt after a temporary failure lockout had expired may not be counted as a failed attempt toward a new failure lockout
  • Fixed an issue in which running manage-profile generate-profile against an updated server instance could result in a profile that may not be usable for setting up new instances
  • Fixed an issue in which dsreplication initialize could suggest using the --force argument in cases where that wouldn’t help, like when attempting to authenticate with invalid credentials
  • Fixed an issue with dsreplication enable-with-static-topology in which the server could report an error when trying to connect to a remote instance
  • Fixed an issue with dsreplication enable-with-static-topology in which case sensitivity in base DNs was not handled properly
  • Fixed an issue in which the remove-defunct-server command could fail in servers configured with the AES256 password storage scheme
  • Fixed an issue that could cause a replication error if missing changes were found for an obsolete replica that is not configured in all servers
  • Fixed an issue in which the server did not check the search time limit often enough during very expensive index processing, which could allow the server to process a search request for substantially longer than the maximum time limit for that operation
  • Fixed an issue that caused the server to incorrectly include client certificate messages in the expensive operations access log
  • Fixed an internal error that could be encountered if an administrative alert or alarm is raised at a specific point in the shutdown process
  • Fixed an issue with synchronizing Boolean attributes (e.g., “enabled”) to PingOne
  • Fixed an issue in which the Synchronization Server could fail to properly synchronize changes involving the unicodePwd attribute to Active Directory if the sync class was not configured with a DN map
  • Fixed an issue that could cause the create-sync-pipe-config command to improperly generate correlated attribute definitions for generic JDBC sync destinations
  • Fixed an error that could prevent manage-topology add-server from adding a Synchronization Server instance to a topology that already had at least two Synchronization Server instances
  • Fixed an issue in which the server did not properly log an alternative authorization DN for multi-update operations that used proxied authorization
  • Fixed an issue in which dsjavaproperties --initialize could result in duplicate arguments in the java.properties file
  • Fixed an issue that could cause a spurious message to be logged to the server’s error log when accessing the status page in the Administration Console

Inverted Static Groups

In the 10.0 release, we’re introducing support for inverted static groups, which try to combine the primary benefits of traditional static groups and dynamic groups without their most significant disadvantages.

Traditional static groups contain an attribute (either member or uniqueMember, depending on the group’s object class) explicitly listing the DNs of the members of that group. They are pretty straightforward to use and are widely supported by LDAP-enabled applications, but as the number of members in the group increases, so does the size of the entry and the cost of reading and writing that entry and updating group membership.

Traditional static groups also support nesting, but it’s not necessarily easy to distinguish between members that are users and those that are nested groups. The server has to maintain an internal cache so that it can handle nested memberships efficiently, and this requires extra memory and adds processing overhead whenever the group is updated.

Dynamic groups, on the other hand, don’t have an explicit list of members, but instead are defined with one or more LDAP URLs whose criteria will be used for membership determinations. Because there is no member list to maintain, dynamic groups don’t have the same scalability issues as traditional static groups, and the number of members in the group isn’t a factor when attempting to determine whether a specific user is a member. However, dynamic groups aren’t as widely supported as traditional static groups among LDAP-enabled applications, there’s no way to directly add or remove members in a dynamic group (at least, not without altering the entries in a way that causes them to match or stop matching the membership criteria, which varies on a group-by-group basis), and they don’t support nesting.

Inverted static groups provide a way to explicitly manage group membership like with traditional static groups, but with the scalability of dynamic groups. Rather than storing the membership as a list of DNs in the group entry itself, each user entry has a list of the DNs of the inverted static groups in which they’re a member (in the ds-member-of-inverted-static-group-dn operational attribute). This means that the number of members doesn’t affect the performance of many group-related operations, like adding a new member to the group, removing an existing member from the group, or determining whether a user is a member of the group.
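
For example, with the UnboundID LDAP SDK, adding a user to an inverted static group could look something like the following sketch. The DNs are placeholders, and this assumes the client is authorized to update the membership attribute directly.

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPException;
    import com.unboundid.ldap.sdk.Modification;
    import com.unboundid.ldap.sdk.ModificationType;

    public final class InvertedGroupExample
    {
      // Adds the specified user to an inverted static group by updating the
      // user's own entry rather than the group entry, so the cost of the
      // update does not grow with the number of members in the group.
      public static void addToInvertedStaticGroup(LDAPConnection connection,
                              String userDN, String groupDN)
             throws LDAPException
      {
        connection.modify(userDN,
             new Modification(ModificationType.ADD,
                  "ds-member-of-inverted-static-group-dn", groupDN));
      }
    }

For instance, you might call addToInvertedStaticGroup(connection, "uid=jdoe,ou=People,dc=example,dc=com", "cn=Engineering,ou=Groups,dc=example,dc=com") with placeholder DNs like these.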

The only way in which the size of the group does impact performance is if you want to retrieve the entire list of members for the group (which you can do by performing a subtree search to find entries whose isMemberOf attribute has a value that matches the DN of the target group). While this is slower than simply retrieving a traditional static group entry and retrieving the list of member DNs, this is actually not an analogous comparison for a couple of key reasons:

  • Retrieving the list of member DNs from a traditional static group only gives you the DNs of the member entries. That isn’t enough if you need the values of any other attributes from the member entries.
  • Retrieving the list of member DNs from a traditional static group doesn’t work well if that group includes one or more nested groups. There’s no good way to tell which of the member DNs reference users and which represent nested groups, and the member DN list won’t include members of the nested groups.

As such, the best way to retrieve a list of all members of a traditional static group is also to perform a subtree search that targets the isMemberOf attribute, and it should be at least as fast to do that for an inverted static group as it is for a traditional static group.
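
For instance, a membership listing for either kind of group could be retrieved with a subtree search on the isMemberOf attribute, roughly like this sketch (the search base and requested attributes are placeholders):

    import com.unboundid.ldap.sdk.Filter;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPSearchException;
    import com.unboundid.ldap.sdk.SearchRequest;
    import com.unboundid.ldap.sdk.SearchResult;
    import com.unboundid.ldap.sdk.SearchScope;

    public final class GroupMemberSearchExample
    {
      // Returns entries for all users whose isMemberOf attribute includes the
      // specified group DN, which covers both direct and nested membership.
      public static SearchResult findMembers(LDAPConnection connection,
                                             String groupDN)
             throws LDAPSearchException
      {
        SearchRequest request = new SearchRequest(
             "ou=People,dc=example,dc=com",   // placeholder search base
             SearchScope.SUB,
             Filter.createEqualityFilter("isMemberOf", groupDN),
             "uid", "cn", "mail");            // whatever attributes you need
        return connection.search(request);
      }
    }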

The other key difference between inverted static groups and traditional static groups lies in the way that we handle nested membership. As previously noted, traditional static groups can include both user DNs and group DNs in their membership attribute, and there’s not a good way to distinguish between them. Inverted static groups distinguish between user members and nested groups. Rather than adding a nested group to the inverted static group as a regular member, you need to add the DN of the nested group to the ds-nested-group-dn attribute of the inverted static group entry. This does make it possible to distinguish between user entries and nested groups, and it allows the server to handle nesting for these types of groups without a separate cache or expensive processing.

The main disadvantage that inverted static groups have in comparison to traditional static groups is that because they are a new feature, existing applications don’t directly support them. If an application only cares about making group membership determinations, makes those determinations using the isMemberOf attribute, and doesn’t need to alter group membership, then it should work just as well with inverted static groups as it does with traditional static groups. However, if it does need to alter group membership, or if it doesn’t support using the isMemberOf attribute, then that’s a bigger hurdle to overcome. To help with that, we’re including a “Traditional Static Group Support for Inverted Static Groups” plugin that can be used to allow clients to interact with inverted static groups in some of the same ways they might try to interact with traditional static groups. This includes:

  • The plugin will intercept attempts to modify the group entry to add or remove member or uniqueMember values, and instead make the corresponding updates to the ds-member-of-inverted-static-group-dn attribute in the target user entries.
  • The plugin can generate a virtual member or uniqueMember attribute for the group entry. It can do this in a few different ways, which may have different performance characteristics:

    • It can do it in a way that works for compare operations or equality search operations that target the membership attribute but don’t actually attempt to retrieve the membership list. This is the most efficient way to determine if a traditional static group has a specific DN in its list of members, and it should be about as fast to make this determination for an inverted static group as it is for a traditional static group.
    • It can do it in a way that attempts to populate the attribute with a list of all of the direct members of the group (excluding nested members). The performance of this does depend on the number of direct members in the group.
    • It can do it in a way that attempts to populate the attribute with a list of all of the direct and nested members of the group. The performance of this depends both on the number of direct members in the group as well as the types and sizes of the nested groups.

Support for the Amazon S3 Service

The data stored in the Directory Server is often absolutely critical to the organizations that use it, so it’s vital to have a good backup strategy, and that must include some kind of off-site mechanism. Amazon’s S3 (Simple Storage Service) is a popular cloud-based mechanism that is often used for this kind of purpose, and in the 10.0 release, we’re introducing a couple of ways to have the Directory Server take advantage of it. In particular, you can now easily use it as off-site storage for LDIF exports and log files. We also include a new amazon-s3-client tool that can be used to interact with the S3 service from the command line.

Post-LDIF-Export Task Processors

We’ve introduced a new API in the server that can be used to have the server perform additional processing after successfully exporting data to an LDIF file as part of an administrative task (including those created by a recurring task). You can use the Server SDK to create custom post-LDIF-export task processors that do whatever you want, but we’re including an “Upload to S3” implementation that can copy the resulting export file (which will ideally have already been compressed and encrypted during the export process) to a specified S3 bucket. This processor includes retention support, so you can have it automatically remove previous export files created more than a specified length of time ago, or you can have it keep a specified number of the newest files in the bucket.

The export-ldif command-line tool now has a new --postExportProcessor argument that you can use to indicate which processors should be invoked after the export file has been successfully written. You can also specify which processors to use when creating the tasks programmatically (for example, using the task support in the UnboundID LDAP SDK for Java) or by simply adding an appropriately formatted entry to the server’s tasks backend. We’ve also updated the configuration for the LDIF export recurring task to include a new post-ldif-export-task-processor property to specify which processor(s) should be invoked for LDIF exports created by that recurring task.

Note that the post-LDIF-export task processor functionality is only available for LDIF exports invoked as administrative tasks (including recurring tasks), and not for those created by running the export-ldif tool in its offline, standalone mode. This is because post-LDIF-export task processors may need access to a variety of server components, and it’s difficult to ensure that all necessary components would be available outside of the running server process.

The Upload to S3 Log File Rotation Listener

The Directory Server already had an API for performing custom processing whenever a log file is rotated out of service, including copying the file to an alternative location on the server filesystem or invoking the summarize-access-log tool on it. In the 10.0 release, we’re including a new “Upload to S3” log file rotation listener that can be used to upload the rotated log file to a specified S3 bucket. This is available for all of the following types of log files:

  • Access
  • Error
  • Audit
  • Data recovery
  • HTTP operations
  • Sync
  • Debug
  • Periodic stats

Although there are obvious benefits to copying all of these types of log files to an external service, I want to specifically call out the importance of having off-site backups for the data recovery log. The data recovery log is a specialized type of audit log that keeps track of all write operations processed by the server in a form that allows them to be easily replayed or reverted should the need arise. The data recovery log can be used as a kind of incremental backup mechanism for keeping track of changes made since the most recent backup or LDIF export, and in a worst-case scenario in which all server instances are lost and you need to start over from scratch, you can restore the most recent backup or import the most recent LDIF export, and then replay any additional changes from the data recovery log that were made after the backup or export was created.

As with the Upload to S3 post-LDIF-export task processor, the new log file rotation listener also includes retention support so that you can choose to keep either a specified number of previous log files, or those uploaded less than a specified length of time in the past.

The amazon-s3-client Command-Line Tool

The new amazon-s3-client tool allows you to interact with the S3 service from the command line, including in shell scripts or batch files. It supports the following types of operations:

  • List the existing buckets in the S3 environment
  • Create a new bucket
  • Remove an existing bucket (optionally removing the files that it contains)
  • List the files in a specified bucket
  • Upload a file to a specified bucket
  • Download a specified file from a bucket
  • Download one or more of the newest files from a specified bucket, based on the number of files to download, the age of files to download, or files created after a specified time
  • Remove a file from a bucket

This allows you to perform a number of functions, including manually uploading additional files that the server doesn’t support uploading automatically, or downloading files for use in bootstrapping new instances. It can generate output as either human-readable text or machine-parsable JSON.

Authentication Support in the Directory REST API

The Directory REST API allows you to submit requests and retrieve data from the server using a JSON-formatted HTTP-based API. It’s always had support for all of the typical operations needed for interacting with the data, like:

  • Creating new entries
  • Updating existing entries
  • Removing existing entries
  • Retrieving individual entries
  • Searching for all entries matching a given set of criteria

Within the last few releases, we’ve also introduced support for a wide variety of request controls, and also certain extended operations. But one of the big gaps between what the server offered over the Directory REST API and what you could get via LDAP was in its support for authentication. The Directory REST API has always supported authorizing individual requests using either HTTP basic authorization or OAuth 2 bearer tokens, but it didn’t really provide any good way to authenticate clients and verify their credentials. And if you wanted to authorize requests with stronger authentication than just a DN and password, you had to have an external service configured for issuing OAuth tokens.

This is being addressed in the 10.0 release with a new /authenticate endpoint, which currently supports the following authentication methods:

  • password — Username or bind DN and a static password
  • passwordPlusTOTP — Username or bind DN, a static password, and a time-based one-time password (TOTP)
  • passwordPlusDeliveredOTP — Username or bind DN, a static password, and a one-time password delivered through some out-of-band mechanism like email or SMS
  • passwordPlusYubiKeyOTP — Username or bind DN, a static password, and a one-time password generated by a YubiKey device

We’re also adding other new endpoints in support of these mechanisms, including:

  • One for generating a TOTP secret, storing it in the user’s entry, and returning it to the client so that it can be imported into an app (like Authy or Google Authenticator) for generating time-based one-time passwords for use with the passwordPlusTOTP authentication method.
  • One for revoking a TOTP secret so that it can no longer be used to generate time-based one-time passwords that will be accepted by the passwordPlusTOTP authentication method.
  • One for generating a one-time password and delivering it to the user through some out-of-band mechanism so that it can be used to authenticate with the passwordPlusDeliveredOTP authentication method.
  • One for registering a YubiKey device with a user’s entry so that it can be used to authenticate with the passwordPlusYubiKeyOTP method.
  • One for deregistering a YubiKey device with a user’s entry so that it can no longer be used to authenticate with the passwordPlusYubiKeyOTP method.

If the authentication is successful, the response may include the following content (see the sketch after this list for an illustration of calling the endpoint):

  • An access token that can be used to authorize subsequent requests as the user via the Bearer authorization method.
  • An optional set of attributes from the authenticated user’s entry.
  • If applicable, the length of time until the user’s password expires.
  • A flag that indicates whether the user is required to choose a new password before they will be allowed to do anything else.
  • An optional array of JSON-formatted response controls.
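
To give a rough idea of how this might look from a client, here’s a sketch using Java’s built-in HTTP client. The endpoint path and the JSON field names shown here are illustrative placeholders rather than the actual Directory REST API schema; consult the REST API documentation for the real request and response formats.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public final class RestAuthenticateExample
    {
      public static void main(String[] args) throws Exception
      {
        // Hypothetical request body for the "password" authentication method;
        // field names are placeholders, not the documented schema.
        String body = "{ \"type\":\"password\", " +
             "\"username\":\"jdoe\", \"password\":\"secret123\" }";

        HttpRequest request = HttpRequest.newBuilder()
             .uri(URI.create(
                  "https://ds.example.com:1443/directory/v1/authenticate"))
             .header("Content-Type", "application/json")
             .POST(HttpRequest.BodyPublishers.ofString(body))
             .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
             .send(request, HttpResponse.BodyHandlers.ofString());

        // On success, the JSON response is expected to carry an access token
        // that can be sent on later requests as "Authorization: Bearer <token>".
        System.out.println(response.statusCode());
        System.out.println(response.body());
      }
    }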

Generated Access Tokens in the Directory Server

We’ve added support for a new “generate access token” request control that can be included in a bind request to indicate that if the bind succeeds, the server should return a corresponding response control with an access token that can be used to authenticate subsequent connections via the OAUTHBEARER SASL mechanism. While this control is used behind the scenes in the course of implementing the new authentication support in the Directory REST API, it can also be very useful in certain LDAP-only contexts.

For example, this ability may be especially useful in cases where you want to authenticate a client with a mechanism that relies on single-use credentials, like the UNBOUNDID-TOTP, UNBOUNDID-DELIVERED-OTP, or UNBOUNDID-YUBIKEY-OTP SASL mechanisms. In such cases, the credentials can only be used once, which means you can’t use them to authenticate multiple connections (for example, as part of a connection pool), or to re-establish a connection if the initial one becomes invalid.
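
With the LDAP SDK, using the control might look roughly like the following sketch. The control classes were added in the 6.0.10 release; the simple bind shown here is only for brevity, and the response-control accessor name is my assumption, so verify it against the SDK Javadoc before relying on it.

    import com.unboundid.ldap.sdk.BindResult;
    import com.unboundid.ldap.sdk.Control;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPException;
    import com.unboundid.ldap.sdk.SimpleBindRequest;
    import com.unboundid.ldap.sdk.unboundidds.OAUTHBEARERBindRequest;
    import com.unboundid.ldap.sdk.unboundidds.controls.GenerateAccessTokenRequestControl;
    import com.unboundid.ldap.sdk.unboundidds.controls.GenerateAccessTokenResponseControl;

    public final class GenerateAccessTokenExample
    {
      public static void main(String[] args) throws LDAPException
      {
        try (LDAPConnection connection =
                  new LDAPConnection("ds.example.com", 389))
        {
          // Bind once (a simple bind here for brevity; in practice this is
          // most valuable with single-use credentials like UNBOUNDID-TOTP)
          // and ask the server to generate an access token for the user.
          BindResult bindResult = connection.bind(new SimpleBindRequest(
               "uid=jdoe,ou=People,dc=example,dc=com", "secret123",
               new GenerateAccessTokenRequestControl()));

          String accessToken = null;
          for (Control c : bindResult.getResponseControls())
          {
            if (c instanceof GenerateAccessTokenResponseControl)
            {
              // Accessor name assumed; check the Javadoc for the exact method.
              accessToken =
                   ((GenerateAccessTokenResponseControl) c).getAccessToken();
            }
          }

          // The token can then authenticate additional connections via the
          // OAUTHBEARER SASL mechanism.
          try (LDAPConnection second = new LDAPConnection("ds.example.com", 389))
          {
            second.bind(new OAUTHBEARERBindRequest(accessToken));
          }
        }
      }
    }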

Shared Database Cache for Local DB Backends

You can configure the Directory Server with multiple local DB backends. You should do this if you want to have multiple naming contexts for user data, and you can also do it for different portions of the same hierarchy if you want to maintain them separately for some reason (e.g., to have them replicated differently, as in an entry-balanced configuration where some of the DIT needs to be replicated everywhere while the entry-balanced portion needs to be replicated only to a subset of servers).

Previously, each local DB backend had its own separate database cache, which had to be sized independently. This gives you the greatest degree of control over caching for each of the backends, which may be particularly important if you don’t have enough memory to hold everything based on the current caching configuration, but it can also be a hassle in some deployments. And if you don’t keep track of how much you’ve allocated to each backend, you could potentially oversubscribe the available memory.

In the 10.0 release, we’re adding the ability to share the same cache across all local DB backends. To do this, set the use-shared-database-cache-across-all-local-db-backends global configuration property to true, and set the shared-local-db-backend-database-cache-percent property to the percentage of JVM memory to allocate to the cache. Note that this doesn’t apply to either the LDAP changelog or the replication database, both of which intentionally use very small caches because their sequential access patterns don’t really require caching for good performance.

Re-Encoding Passwords on Scheme Configuration Changes

The Directory Server has always had support for declaring certain password storage schemes to be deprecated as a way of transparently migrating passwords from one scheme to another. If a user successfully authenticates in a manner that provides the server with access to the clear-text password (which includes a number of SASL mechanisms in addition to regular LDAP simple binds), and their password is currently encoded with a scheme that is configured as deprecated, then the server will automatically re-encode the password with the currently configured default scheme.

Deprecated password storage schemes can only be used to migrate users from one scheme to another, but there may be legitimate reasons to want to re-encode a user’s password without changing the scheme. For example, several schemes use multiple rounds of encoding to make it more expensive for attackers to attempt to crack passwords, and you may want to have passwords re-encoded if you change the number of rounds that the server uses.

In the 10.0 release, we’re adding a new re-encode-passwords-on-scheme-config-change property to the password policy configuration. If this property is set to true, if a client authenticates in a manner that provides the server with access to their clear-text password, and if their password is currently encoded with settings that differ from the current configuration for the associated scheme, then the server will automatically re-encode that password using the scheme’s current settings. This functionality is available for the following schemes:

  • AES256 — If there is a change in the definition used to encrypt passwords.
  • ARGON2, ARGON2D, ARGON2I, ARGON2ID — If there is a change in the iteration count, parallelism factor, memory usage, salt length, or derived key length.
  • BCRYPT — If there is a change in the cost factor.
  • PBKDF2 — If there is a change in the digest algorithm, iteration count, salt length, or derived key length.
  • SCRYPT — If there is a change in the CPU/memory cost factor exponent, the block size, or the parallelization parameter.
  • SSHA, SSHA256, SSHA384, SSHA512 — If there is a change in the salt length.

It is also possible to enable this functionality for custom password storage schemes created using the Server SDK by overriding some new methods added to the API.

Separate Request Handlers for Each LDAP Client Connection

When the Directory Server accepts a new connection from an LDAP client, it hands that connection off to a request handler, which will be responsible for reading requests from that client and adding them to the work queue so that they can be picked up for processing by worker threads. By default, the server automatically determines the number of request handlers to use (although you can explicitly configure the number if you want), and a single request handler may be responsible for reading requests from several connections.

The vast majority of the time, having request handlers responsible for multiple connections isn’t an issue. Just about the only thing that a request handler has to do is wait for requests to arrive, read them, decode them, and add them to the work queue so that they will be processed. While it’s doing this for a request from one client, any other clients that are sending requests at the same time will need to wait, but because the entire process of reading, decoding, and enqueueing a request is very fast, it’s rare that processing for one client will have any noticeable impact on the request handler’s ability to process other clients. However, there are a couple of instances in which this might not be the case:

  • If a client is submitting a very large number of asynchronous requests at the same time.
  • If the server needs to perform TLS negotiation on the connection to set up a secure communication channel, and some part of that negotiation is taking a long time to complete.

In practice, neither of these is typically an issue. Even if there are a ton of asynchronous requests submitted all at once, it’s still pretty unlikely that it will cause any noticeable starvation in the server’s ability to read requests from other clients. And the individual steps of performing TLS negotiation also tend to be processed very quickly. However, there have been exceptional cases in which these kinds of processing may have had a noticeable impact. In such instances, the best way to deal with that possibility is to have the server use a separate request handler for each connection that is established so that the process of reading, decoding, and enqueueing requests from one client cannot impact the server’s ability to do the same for other clients that may be sending requests at exactly the same time. In the unlikely event that the need arises in your environment, you can now use the request-handler-per-connection property in the connection handler configuration to cause the server to allocate a new request handler for every client connection that is established.

Updates to the encrypt-file Tool

As its name implies, the encrypt-file tool can be used to encrypt (or decrypt) the contents of files, either using a definition from the server’s encryption settings database or with an explicitly-provided passphrase. In many cases, if a file used by the server (or a command-line tool) is encrypted with an encryption settings definition, the server can detect that and automatically decrypt it as it’s reading the contents of that file.

If an administrator wishes to retire an encryption settings definition for some reason, and especially if they want to remove it from the encryption settings database, they need to ensure that it is no longer needed to decrypt any existing encrypted data. In the past, some customers have overlooked encrypted files when ensuring that a definition is no longer needed. To help avoid that, we’ve added two new arguments to the encrypt-file tool:

  • --find-encrypted-files {path} — This argument can be used to search for encrypted files below the specified path on the server filesystem. By default, it will find files that have been encrypted with any encryption settings definition or with a passphrase, but you can also provide the --encryption-settings-id argument to indicate that you only want it to find files encrypted with the specified definition.
  • --re-encrypt {path} — This argument can be used to re-encrypt an existing encrypted file, using either a different encryption settings definition or a new passphrase.

If you do plan to retire an existing encryption settings definition, then you should use the encrypt-file --find-encrypted-files command to identify any files that have been encrypted with that definition, and then use encrypt-file --re-encrypt to re-encrypt them with a different definition so that the server can still access them even if you remove the retired definition from the encryption settings database.

UnboundID LDAP SDK for Java 6.0.11

We have just released version 6.0.11 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository.

Note that this is the last release of the LDAP SDK that will offer support for Java 7. As of the next release (which is expected to be version 7.0.0), the LDAP SDK will only support Java 8 and later.

You can find the release notes for the 6.0.11 release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We updated the ldapsearch and ldapmodify command-line tools to provide better validation for the --proxyAs argument, which includes the proxied authorization v2 request control in the requests that they issue. Previously, they would accept any string as the authorization ID value, but they will now verify that it is a valid authorization ID of the form “dn:” followed by a valid DN or “u:” followed by a username.
  • We updated the Filter class so that the methods used to create substring filters are more user-friendly when the filter doesn’t contain all types of components. Previously, it expected a substring component to be null if that component wasn’t to be included in the request, and it would create an invalid filter if the component was provided as an empty string. It will now treat components provided as empty strings as if they had been null (see the sketch after this list).
  • We updated the logic that the LDAP SDK uses to pare entries down to a specified set of attributes (including in the in-memory directory server and the ldifsearch command-line tool) to improve its behavior if it encounters an entry with a malformed attribute description (for example, one that contains characters that aren’t allowed). Previously, this would result in an internal error, but it will now make a best-effort attempt to handle the invalid name.
  • We updated the TimestampArgument class to allow it to accept timestamps in the ISO 8601 format described in RFC 3339 (e.g., 2023-11-30T01:02:03.456Z). Previously, it only accepted timestamps in the generalized time format (or a generalized time representation that didn’t include any time zone information, which was treated as the system’s local time zone).
  • We updated the JSONBuffer class to add an appendField method that can be used to append a generic field without knowing the value type. Previously, it only allowed you to append fields if you knew the type of the value.
  • We added new BinarySizeUnit and DecimalSizeUnit enums that can be used when dealing with a quantity of data, like the size of a file or the amount of information transferred over a network. Each of the enums supports a variety of units (bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, zettabytes, and yottabytes), but the BinarySizeUnit variant assumes that each subsequent unit is 1024 times greater than the previous (e.g., one kilobyte is treated as 1024 bytes), while DecimalSizeUnit assumes that each subsequent unit is 1000 times greater than the previous (e.g., one kilobyte is treated as 1000 bytes).
  • We updated the client-side support for invoking the LDIF export administrative task in the Ping Identity Directory Server to include support for activating one or more post-LDIF-export task processors, which can be used to perform additional processing after the data is successfully exported.
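
To illustrate the substring filter change mentioned above, here’s a small sketch. With 6.0.11, the second call behaves the same as the first rather than producing an invalid filter:

    import com.unboundid.ldap.sdk.Filter;

    public final class SubstringFilterExample
    {
      public static void main(String[] args)
      {
        // Both of these now yield the filter (cn=adm*).  Previously, the
        // empty subFinal component in the second call produced an invalid
        // filter.
        Filter withNullComponents =
             Filter.createSubstringFilter("cn", "adm", null, null);
        Filter withEmptyComponent =
             Filter.createSubstringFilter("cn", "adm", null, "");

        System.out.println(withNullComponents.toString());
        System.out.println(withEmptyComponent.toString());
      }
    }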

UnboundID LDAP SDK for Java 6.0.10

We have just released version 6.0.10 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository. You can find the release notes for the 6.0.10 release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We added a new ReusableReferralConnector interface that makes it possible to create referral connectors that can be reused for following multiple referrals. We’ve added a new PooledReferralConnector implementation that uses connection pools for improved performance when following multiple referrals.
  • We fixed an issue in which the parallel-update tool could write malformed data to the reject log file when multiple write operations were rejected concurrently.
  • We added a PLAINBindRequest.encodeCredentials method that can be used to retrieve the encoded credentials for a SASL PLAIN bind request.
  • We added JSONNumber.getValueAsInteger and getValueAsLong methods that will return the value of a JSON number as an Integer or Long, but only if the conversion can be made losslessly. The methods will return null if the value is a floating-point number, or if the value is outside the supported range for the data type (see the sketch after this list).
  • We added a StaticUtils.getBacktrace method that can be used to retrieve a compact, single-line string representation of a stack trace representing the code location from which the method was called.
  • We added support for a new Ping-proprietary “access log field” request control, which can be used to indicate that the server should include a specified set of name-value pairs in the access log message for the associated operation. We also updated the ldapsearch and ldapmodify tools to add a new --accessLogField argument to include this control in requests.
  • We added support for a new Ping-proprietary “generate access token” request control that can be included in a bind request to indicate that the server should include an access token in a corresponding response control included in the response to a successful bind operation. That access token can be used to authenticate to the Ping Identity Directory Server with the OAUTHBEARER SASL mechanism. This may be especially useful when initially authenticating to the Directory Server with a mechanism that relies on single-use credentials (e.g., UNBOUNDID-TOTP, UNBOUNDID-DELIVERED-OTP, or UNBOUNDID-YUBIKEY-OTP) because it allows you to establish multiple connections (e.g., using a connection pool or to replace connections that are no longer valid). We also updated the ldapsearch and ldapmodify tools to add a new --generateAccessToken argument to request that the server return an access token in the bind response.
  • We updated support for the ds-pwp-state-json virtual attribute to include the has-password-encoded-with-non-current-settings field, which may indicate whether the user has a password that is encoded with settings that are different from the current configuration for the associated password storage scheme, and the non-current-password-storage-scheme-settings-explanations field, which may explain the ways in which the password encoding differs from the current configuration.
  • We updated the documentation to include the latest versions of draft-ietf-kitten-scram-2fa, draft-melnikov-scram-bis, and draft-melnikov-scram-sha3-512 in the set of LDAP-related specifications.
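
A quick sketch of the lossless-conversion behavior described above (the values shown are arbitrary):

    import com.unboundid.util.json.JSONNumber;

    public final class JSONNumberExample
    {
      public static void main(String[] args)
      {
        // A whole number within range converts losslessly.
        Long wholeValue = new JSONNumber(12345L).getValueAsLong();      // 12345

        // A floating-point value cannot be represented losslessly, so the
        // method returns null rather than rounding or truncating.
        Long fractionalValue = new JSONNumber(12.5d).getValueAsLong();  // null

        System.out.println(wholeValue);
        System.out.println(fractionalValue);
      }
    }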

Ping Identity Directory Server 9.3.0.1 (and others) addressing a security issue

We have just released several new versions of the Ping Identity Directory Server to address a security issue that we discovered. The issue is in a component of the server that is only enabled when setting up the Delegated Admin product, and customers who are using that product are strongly advised to upgrade. Customers who are not using the Delegated Admin product should not be affected by the issue.

The following new versions are now available and contain the fix for this issue:

  • 9.3.0.1
  • 9.2.0.2
  • 9.1.0.3
  • 8.3.0.9

The security issue was discovered internally, and we have no reason to believe that it has been independently discovered or exploited. Ping is not prepared to provide additional information about the vulnerability at this time, but expects to release a security advisory with additional details in the future.

All About Data Encryption in the Ping Identity Directory Server

Directory servers are often used to store and interact with sensitive and/or personally identifiable information, so data security and privacy are critical. Encryption is important both for data as it goes over the wire and for data at rest. TLS (also known by the more outdated term SSL) is the best way to secure data in transit, but it’s also important to have the data encrypted on disk: not only in the database, but also in backups, LDIF exports, sensitive log files, files containing secrets, and a variety of other areas.

We’re very committed to security in the Ping Identity Directory Server, and data encryption is tightly ingrained into the product. You can (and should) enable data encryption when setting up the server for the best level of protection. But there are a lot of components involved in our support for data encryption, and it can be a lot to take in for someone who is new to the product. So I thought I’d write up an overview of all of the components and how they work together.

The information provided here reflects the 9.3.0.0 release. Although much of the foundational information is the same for older versions as well, some of the features I cover were only just introduced in the 9.3 release and aren’t available in older versions.

The encryption-settings Tool

Aside from the setup process (which can create or import encryption settings definitions), the encryption-settings tool is the primary means through which you’ll manage the set of encryption settings definitions and the encryption settings database. It offers the following subcommands:

  • list — Displays a list of the encryption settings definitions that reside in the encryption settings database.
  • create — Creates a new encryption settings definition.
  • delete — Removes an encryption settings definition.
  • export — Exports one or more encryption settings definitions to a passphrase-protected file.
  • import — Imports the encryption settings definitions contained in an export file.
  • set-preferred — Specifies which encryption settings definition should be preferred for subsequent encryption operations.
  • get-data-encryption-restrictions — Displays information about the set of data encryption restrictions that are available for use and which are currently in effect.
  • set-data-encryption-restrictions — Updates the set of data encryption restrictions that are currently in effect.
  • is-frozen — Indicates whether the encryption settings database is currently frozen.
  • freeze — Freezes the encryption settings database with a passphrase.
  • unfreeze — Unfreezes the encryption settings database with the freeze passphrase.
  • supply-passphrase — Supplies a passphrase needed to unlock the encryption settings database in conjunction with the wait-for-passphrase cipher stream provider.

These subcommands will be discussed in more detail below in conjunction with the functions that they provide.

Encryption Settings Definitions

One of the most fundamental components of the Directory Server’s data encryption framework is the encryption settings definition. Encryption settings definitions encapsulate two primary pieces of information:

  • A symmetric encryption key, which is used to actually encrypt and decrypt the data.
  • The cipher transformation, which specifies the algorithm used to perform the encryption and decryption.

Rather than storing the encryption key itself, an encryption settings definition stores the information needed to derive it. Each definition is backed by a password or passphrase, and the key is derived from it using the PBKDF2WithHmacSHA256 algorithm along with a salt and an iteration count. When creating a new definition (whether during setup or after the fact using the encryption-settings create command), you have the option of specifying the passphrase directly or allowing the server to generate one at random. When creating a definition after the fact, you can also specify the PBKDF2 iteration count, the cipher transformation, and the key length.

Each encryption settings definition has an identifier that is used as a unique name for that definition. This identifier is deterministically generated so that if you create a definition with the same underlying passphrase and other settings (like the cipher transformation, iteration count, and key length) on two different servers, it should result in two definitions with the same identifier and the same underlying encryption key.

Whenever the server encrypts some data, it includes the ID of the definition used to perform that encryption (or a compact token that is tied to the ID) as part of the encrypted output so that we know which key was used to encrypt it, and therefore which key needs to be used to decrypt it.
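
To make the key-derivation idea concrete, here’s an illustrative sketch using the JDK’s implementation of the same PBKDF2WithHmacSHA256 algorithm. The salt, iteration count, and key length are up to the caller here and are not the server’s actual defaults; this is not the server’s code, just a demonstration of how a passphrase plus a salt and iteration count deterministically yields a symmetric key.

    import java.security.spec.KeySpec;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public final class DeriveKeyExample
    {
      public static byte[] deriveKey(char[] passphrase, byte[] salt,
                                     int iterations, int keyLengthBits)
             throws Exception
      {
        // The same passphrase, salt, iteration count, and key length always
        // produce the same key, which is why definitions created with
        // matching settings on different servers end up with matching
        // encryption keys.
        SecretKeyFactory factory =
             SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        KeySpec spec =
             new PBEKeySpec(passphrase, salt, iterations, keyLengthBits);
        return factory.generateSecret(spec).getEncoded();
      }
    }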

The Encryption Settings Database

The encryption settings database holds the set of encryption settings definitions that are available for use in the Directory Server. One of those definitions is marked as the preferred definition, which is the one used for encrypting new data by default. Whenever the server needs to encrypt some data, it can request a specific encryption settings definition by its identifier, but in most cases, it will just use the preferred definition.

Whenever the server encounters some encrypted data that it needs to decrypt, it will extract the identifier to determine which encryption settings definition was used to encrypt it, retrieve that definition from the encryption settings database, and use it to decrypt the data.

As of the 9.3 release of the Directory Server, the encryption settings database also stores the set of data encryption restrictions that are in effect for the server, and it may optionally be frozen. I’ll cover data encryption restrictions and freezing the encryption settings database a little bit later.

Cipher Stream Providers

The encryption settings database is just a file that contains the encryption settings definitions and some other encryption-related information. Obviously, we don’t want to just leave it sitting around in the clear, because anyone who can access that file can use the information it contains to access any of the encrypted data that the server stores or generates. So we need a way to protect the encryption settings database.

To do that, we use a component called a cipher stream provider. In retrospect, we probably should have chosen a better name for this component, but we had originally thought we might use it for a variety of purposes. But really, we just use it for protecting the contents of the encryption settings database.

The Directory Server includes support for several different types of cipher stream providers, including:

  • A file-based cipher stream provider, which encrypts the encryption settings database with a key derived from a password stored in a file. This isn’t the most secure option, because anyone who can see that file and figure out how to use the password to derive the key used to encrypt the database can get access to the definitions, but it is the one we use by default because it doesn’t require any additional configuration or access to an external service.
  • An Amazon Key Management Service (KMS) cipher stream provider, which generates a strong key that is used to encrypt the encryption settings database, and then encrypts that key with a key stored in KMS. Whenever the server needs to open the encryption settings database, it sends the encrypted key to KMS to be decrypted, and then uses the decrypted key to decrypt the encryption settings database.
  • An Amazon Secrets Manager cipher stream provider, which generates a strong key that is used to encrypt the encryption settings database. It then retrieves a secret password from the Amazon Secrets Manager service and uses that to encrypt the generated key. Whenever the server needs to open the encryption settings database, it retrieves the same secret from Secrets Manager and uses it to decrypt the key so that it can decrypt the encryption settings database.
  • An Azure Key Vault cipher stream provider, which operates in basically the same way as the Amazon Secrets Manager provider, except that it retrieves the secret password from Azure Key Vault rather than Amazon Secrets Manager.
  • A CyberArk Conjur cipher stream provider, which operates in basically the same way as the Amazon Secrets Manager provider, except that it retrieves the secret password from a CyberArk Conjur instance rather than Amazon Secrets Manager.
  • A HashiCorp Vault cipher stream provider, which operates in basically the same way as the Amazon Secrets Manager provider, except that it retrieves the secret password from a HashiCorp Vault instance rather than Amazon Secrets Manager.
  • A PKCS #11 cipher stream provider, which generates a strong key that is used to encrypt the encryption settings database, and then uses a certificate contained in a PKCS #11 token (for example, a hardware security module, or HSM for short) to encrypt that key. Whenever the server needs to open the encryption settings database, it uses the same certificate to decrypt the key, and then uses the decrypted key to decrypt the encryption settings database.
  • A wait-for-passphrase cipher stream provider, which derives an encryption key from a password/passphrase supplied by an administrator, and uses that key to encrypt the contents of the encryption settings database. When the server needs to open the encryption settings database, it will wait for the administrator to interactively supply that passphrase so that it can use it to re-derive the encryption key, which it will then use to decrypt the encryption settings database.

If you’d rather protect the contents of the encryption settings database in some other way, you also have the option of using the Server SDK to create your own custom cipher stream provider.

By default, when you set up the server with data encryption enabled, it will use the file-based cipher stream provider because it’s the only one that can be used without requiring any additional setup. If you want to use an alternative cipher stream provider to protect the encryption settings database, then you have two options:

  • Make an appropriate set of configuration changes with the server online to create the desired cipher stream provider, and then activate it by changing the encryption-settings-cipher-stream-provider global configuration property to use it (a minimal sketch follows this list). The server will use the former cipher stream provider to read the encryption settings database, and then it will rewrite it (and therefore re-encrypt it) using the new cipher stream provider. Attempting to change the cipher stream provider while the server is offline won't work, because the new cipher stream provider won't be able to read the existing encryption settings database.
  • Set up the server with a pre-existing encryption settings database that is already protected with the appropriate cipher stream provider.

Each of these options will be discussed in later sections.
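
As a minimal sketch of the first option, assuming the new cipher stream provider has already been created in the configuration (the provider name shown is just a placeholder, and the dsconfig connection arguments are omitted):

$ bin/dsconfig set-global-configuration-prop \
     --set "encryption-settings-cipher-stream-provider:PKCS11 Cipher Stream Provider"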

Data Encryption Restrictions

Data encryption restrictions can be used to prevent administrators (or attackers who may gain access to the server system) from performing actions that could potentially grant them access to encrypted data. Restrictions that can be imposed include:

  • prevent-disabling-data-encryption — Prevents updating the configuration to disable data encryption. If you were to disable data encryption, then subsequent writes made to the server would not be encrypted. With this restriction in effect, if you try to make a configuration change to disable data encryption while the server is running, then the server will reject the attempt. If you disable data encryption with the server offline, then it will refuse to start.
  • prevent-changing-cipher-stream-provider — Prevents changing which cipher stream provider is used to protect the encryption settings database. If you can control which cipher stream provider is in effect, then you could potentially decrypt the encryption settings database and access the definitions that it contains.
  • prevent-encryption-settings-export — Prevents using the encryption-settings export command to export the encryption settings definitions to a passphrase-protected file. If you can export the encryption settings definitions, then you could create a new encryption settings database with the same definitions, but without any restrictions in place.
  • prevent-unencrypted-ldif-export — Prevents exporting the data in any backend to an unencrypted LDIF file, as that would grant unprotected access to that data.
  • prevent-passphrase-encrypted-ldif-export — Prevents exporting the data in any backend to an LDIF file that is encrypted with a specified passphrase rather than an encryption settings definition. If you can export the data to a file that is encrypted with a passphrase that you know, then that would allow you to decrypt its contents.
  • prevent-unencrypted-backup — Prevents creating an unencrypted backup. Note that even if the backup itself is unencrypted, if the backend contains encrypted data, then it will remain encrypted in the backup.
  • prevent-passphrase-encrypted-backup — Prevents creating a backup that is encrypted with a specified passphrase rather than an encryption settings definition.
  • prevent-decrypt-file — Prevents using the encrypt-file --decrypt command to decrypt a file that has been encrypted, regardless of whether it was encrypted with an encryption settings definition or a passphrase.

For maximum security, we recommend enabling all data encryption restrictions in the server, as that can significantly hamper an attacker’s ability to gain access to encrypted data. But before you do, we strongly recommend creating an export of the encryption settings definitions (as will be described in a later section) so that you have a passphrase-protected backup of the definitions for disaster recovery if you have a really bad day, since you won’t be able to create that export once the prevent-encryption-settings-export restriction is in effect.

You can use the encryption-settings set-data-encryption-restrictions command to add and remove data encryption restrictions from the server, whether individually or all at once. For example, if you want to enable all data encryption restrictions, then you can use the command:

$ bin/encryption-settings set-data-encryption-restrictions \
     --add-all-restrictions

You can use the encryption-settings get-data-encryption-restrictions command to display a list of all available restrictions and which are currently in effect.

If you have activated any data encryption restrictions in the server, then we strongly recommend freezing the encryption settings database to prevent those restrictions from being removed. This is covered in the next section.

Freezing the Encryption Settings Database

Freezing the encryption settings database places it in read-only mode. The server and its associated tools will still have access to the definitions that it contains, but you won’t be able to make any changes to the encryption settings database. This includes:

  • Creating new definitions
  • Deleting definitions
  • Importing definitions from an export file
  • Changing which definition is preferred for encrypting new data
  • Making changes to the active set of data encryption restrictions

When you freeze the encryption settings database, you need to provide a freeze passphrase, and that same passphrase will be required to unfreeze the database. If the database is frozen and an attacker gains access to the system, then they won’t be able to make any changes to the set of encryption settings definitions or get around any data encryption restrictions, as long as they don’t know and can’t guess the freeze passphrase.

You can freeze the encryption settings database with the following command:

  $ bin/encryption-settings freeze

You will be interactively prompted for the freeze passphrase (and a second time to confirm it), and the encryption settings database will be placed in read-only mode. It will remain that way until it is unfrozen, which you can do with the command:

  $ bin/encryption-settings unfreeze

You will need to supply the same passphrase that you used to freeze the database. In either case, you can have the tool obtain the freeze passphrase from a file (via the --passphrase-file argument) rather than having the tool interactively prompt for it.
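
For example, a non-interactive unfreeze might look like the following (the passphrase file path is just a placeholder):

  $ bin/encryption-settings unfreeze \
       --passphrase-file /path/to/freeze-passphrase.txt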

Enabling Data Encryption During Setup

The best time to enable data encryption is when setting up the server. Although it’s possible to enable data encryption for an existing instance, this doesn’t necessarily offer the same level of protection. In particular:

  • New writes will be encrypted, but any entries that are already in the server will remain unencrypted until they are updated or until the backend is exported to LDIF and re-imported.
  • Indexes will remain unencrypted until the backend is exported to LDIF and re-imported.
  • New records added to the replication database and LDAP changelog will be encrypted, but any existing records in the replication database and LDAP changelog will remain unencrypted.
  • Merely enabling data encryption won’t automatically enable encryption for backups and LDIF exports, although you can turn that on at the same time that you enable data encryption.
  • Any password files written to the filesystem during setup (for example, the PIN files needed to access certificate key and trust stores) won’t have been encrypted.

As such, if you’re going to enable data encryption, we strongly recommend doing so during setup. And if you want to enable data encryption for an existing instance, you may wish to consider setting up a new instance and migrating the data over to ensure that the maximum amount of protection is in place.

To enable data encryption during setup, provide one of the following arguments:

  • --encryptDataWithRandomPassphrase — Indicates that the server should enable data encryption using an encryption settings definition created using a very strong randomly generated passphrase that won’t be divulged by the server. This is a good option for setting up the first instance in a topology (or a standalone instance to use for testing), but it’s not recommended for setting up multiple instances because they’ll end up with different encryption settings definitions, and you want all servers in the topology to have the same definitions. You can use this argument with either setup or manage-profile setup, and in the latter case, there are no special requirements for the server profile (aside from including this argument in the setup-arguments.txt file, which is true of any of these arguments).

  • --encryptDataWithPassphraseFromFile — Indicates that the server should enable data encryption using an encryption settings definition created using a passphrase that you specify. If you set up all instances with the same passphrase, then they will all end up with the same encryption settings definitions, so this is a suitable option for setting up a multi-instance topology. Also, if you know the passphrase used to create an encryption settings definition, you can use that passphrase to decrypt files that were encrypted with that definition. You can use this argument with either setup or manage-profile setup, and in the case of manage-profile setup, you need to make sure that the server profile contains the passphrase file (for example, in the misc-files directory).

  • --encryptDataWithSettingsImportedFromFile — Indicates that the server should enable data encryption using one or more encryption settings definitions contained in a file created using the encryption-settings export command. This is a good option to use if you set up the first instance with a randomly generated passphrase and want to ensure that subsequent instances have the same definition. It’s also a good option if you want to include multiple encryption settings definitions, or if you want to use definitions created with settings that differ from the default settings that setup would have used (e.g., a different cipher transformation or PBKDF2 iteration count). When using this argument, you also need to provide the --encryptionSettingsExportPassphraseFile argument to specify the passphrase used to protect the export. This argument can be used with either setup or manage-profile setup, and if you use manage-profile setup, then the server profile will need to contain both the export and passphrase files (probably in the misc-files directory).

  • --encryptDataWithPreExistingEncryptionSettingsDatabase — Indicates that the server should enable data encryption using an encryption settings database that you’ve already set up as desired on another instance. This provides the greatest degree of flexibility, as the provided database can be protected with a cipher stream provider other than the default file-based provider, and it can also be locked down with data encryption restrictions and frozen with a passphrase (although you can change the cipher stream provider, enable restrictions, and freeze the database after setup). This option should only be used with manage-profile setup, and the server profile will need to include the following:

    • The encryption settings database itself should be included in the profile in the server-root/pre-setup/config/encryption-settings/encryption-settings-db file.
    • You will need to include any configuration changes needed to configure and activate the cipher stream provider in one or more dsconfig batch files placed in the pre-setup-dsconfig directory.
    • Any additional metadata files that the cipher stream provider needs to access the encryption settings database must be included in the appropriate locations beneath the server-root/pre-setup directory.
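
For instance, here’s a rough sketch of enabling data encryption with a known passphrase during a non-interactive setup (all of the other required setup arguments are elided, and the passphrase file path is just a placeholder):

$ ./setup \
     ...other required setup arguments... \
     --encryptDataWithPassphraseFromFile /path/to/encryption-passphrase.txt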

Managing Encryption Settings Definitions

The server setup process can generate encryption settings definitions for you when using either the --encryptDataWithRandomPassphrase or --encryptDataWithPassphraseFromFile arguments. However, if you want to create additional definitions (presumably using different settings than the one created by default), then the encryption-settings tool can be used to accomplish that. That tool can also be used in several other ways to manage encryption settings definitions and the encryption settings database. I’ve already covered using it to set data encryption restrictions and to freeze the encryption settings database, but this section will cover using it to manage encryption settings definitions.

Exporting Encryption Settings Definitions

One of the most important ways that you can use the encryption-settings tool is to create a passphrase-protected export of the definitions in the encryption settings database. This is vital, because an export is the best way to back up the encryption settings definitions, and if you lose your encryption settings definitions (or lose access to them), then you lose access to any data encrypted with them.

An encryption settings export is better than backing up the encryption settings database for a couple of reasons:

  • A backup of the encryption settings database is tied to the cipher stream provider used to protect it. If the cipher stream provider relies on an external service, and if that service (or the information that the cipher stream provider relies on inside that service) becomes unavailable, then the encryption settings database becomes unusable, and the definitions inside it are inaccessible. An encryption settings export is not tied to the cipher stream provider implementation, so it is more portable and not potentially reliant on an external service.

  • While you can use the backup tool to create a backup of the encryption settings database, that backup will only contain the encryption settings database itself and won’t include any metadata files that the associated cipher stream provider needs to interact with the database (although it will tell you which additional files need to be backed up separately). An encryption settings export doesn’t rely on any other files, but only on the passphrase you chose to protect it.

To create an encryption settings export, use a command like:

$ bin/encryption-settings export \
     --output-file /path/to/output/file

This will interactively prompt you for the passphrase to use to protect the export, and then will write all definitions in the encryption settings database into the specified file. You can also use the --passphrase-file argument to have it obtain the export passphrase from a file rather than interactively, or the --id argument if you only want to back up specific definitions.
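
For example, a non-interactive export of a single definition might look something like this (the passphrase file path and definition ID are placeholders):

$ bin/encryption-settings export \
     --output-file /path/to/output/file \
     --passphrase-file /path/to/export-passphrase.txt \
     --id <definition-id>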

This export file (and the passphrase needed to access its contents) should be carefully protected and reliably archived, as it may be the last line of defense that can allow you to access your encrypted data in a worst-case scenario. You shouldn’t need to create another export unless you make changes to the set of encryption settings definitions, but it’s probably a good idea to periodically verify that the archive is still valid and hasn’t succumbed to bit rot.

Importing Encryption Settings Definitions

When setting up a new instance with data encryption, you can use the --encryptDataWithSettingsImportedFromFile argument to use the encryption settings definitions contained in an import file. But if you have created additional definitions after setup and want to make them available in all instances, you can export those definitions from the instance on which they were created and import them into the remaining instances. This can be done with encryption-settings import, using a command like the following:

$ bin/encryption-settings import \
     --import-file /path/to/export/file

You will be prompted for the passphrase used to protect the export, or you can provide it non-interactively with the --passphrase-file argument.

In addition, the --set-preferred argument can be used to make the definition that is marked as preferred in the export the new preferred definition in the encryption settings database. If the --set-preferred argument is not provided, and if the encryption settings database into which the new definitions are being imported already has one or more existing definitions, then the existing preferred definition will remain preferred.

Note that unlike the import-ldif command, which replaces all data in a backend with data loaded from a specified LDIF file (or set of LDIF files), the encryption-settings import command merges new definitions into the encryption settings database, and any existing definitions that aren’t in the file being imported will be retained.

Creating a New Encryption Settings Definition

If you want to create a new encryption settings definition, use the encryption-settings create command. The most important arguments offered by this command include:

  • --cipher-algorithm — A required argument that specifies the name of the base encryption algorithm that should be used when encrypting data with this definition. Although you can technically use any cipher algorithm that the JVM supports, the only one that we currently recommend for use is “AES”.
  • --cipher-transformation — An optional argument that specifies the full cipher transformation (including the mode and padding algorithm) that should be used when encrypting data with this definition. When using the AES cipher algorithm, we recommend either “AES/CBC/PKCS5Padding” or “AES/GCM/NoPadding” (the latter of which offers somewhat better security because it provides integrity protection). If you don’t specify a cipher transformation when using the AES algorithm, a default of “AES/CBC/PKCS5Padding” will be used.
  • --key-length-bits — A required argument that specifies the length of the encryption key to be generated. When using AES, allowed key lengths are 128, 192, and 256 bits.
  • --key-factory-iteration-count — An optional argument that specifies the number of PBKDF2 iterations that should be used when deriving the encryption key from the passphrase that backs the definition. If this is not specified, then the tool will use a default value that depends on whether the definition is backed by a known passphrase or a randomly generated one. If it’s backed by a randomly generated passphrase, then we use the OWASP-recommended 600,000 iterations. If it’s backed by a known passphrase, then we use a smaller iteration count of 16,384 to preserve backward compatibility with older versions, so that supplying the same passphrase and all other settings will consistently reproduce the same definition. As such, if you’re using a known passphrase and don’t need to worry about definitions created in older versions, we recommend that you explicitly specify the iteration count for better protection of the derived key.
  • --prompt-for-passphrase — An optional argument that indicates that the tool should interactively prompt for the passphrase to use to create the definition. At most one of the --prompt-for-passphrase and --passphrase-file arguments may be provided, and if neither is provided, then the definition will be backed by a randomly generated passphrase.
  • --passphrase-file — An optional argument that indicates that definition should be backed by a passphrase read from the specified file. At most one of the --prompt-for-passphrase and --passphrase-file arguments may be provided, and if neither is provided, then the definition will be backed by a randomly generated passphrase.
  • --set-preferred — An optional argument that indicates that the newly created definition should be set as the preferred definition for new encryption operations. If this is not specified, then the new definition will only be set as preferred if it’s the first one in the database; if there’s already an existing preferred definition, then it will remain the preferred definition.
  • --description — An optional argument that specifies a human-readable description to use for the definition. If this is not specified, then the definition will not have a description.

For example:

$ bin/encryption-settings create \
     --cipher-algorithm AES \
     --cipher-transformation AES/GCM/NoPadding \
     --key-length-bits 256 \
     --key-factory-iteration-count 600000 \
     --prompt-for-passphrase \
     --set-preferred

After creating a new definition, we strongly recommend using encryption-settings export to back up the resulting definitions to a passphrase-protected file so that you have a backup of that and any other definitions, and then import those definitions into the other servers in the topology. Alternatively, if you created the new definition with a known passphrase, then you should be able to issue the same command with the same passphrase on the other instances to generate the same definition in those servers.

Deleting an Encryption Settings Definition

My first recommendation for deleting an encryption settings definition is: don’t. There’s no harm in keeping a definition around that isn’t being used. On the other hand, if you delete a definition, then anything encrypted with it (whether in the database, in the LDAP changelog or replication changelog, or in encrypted files) will become inaccessible. If the server tries to interact with data that was encrypted with an encryption settings definition that is no longer available, then it will at best encounter an error, and in some cases the server may not be able to start.

As long as you create a new preferred encryption settings definition, the server should start using it to encrypt new data. If you export and re-import any backends containing encrypted data, then that data will be automatically re-encrypted with the new definition. This includes the LDAP changelog as well, although any existing records encrypted with the old definition will eventually be purged as appropriate based on the server configuration. Old records in the replication database will also be purged over time, but if you want to start fresh with the new definition, then you can disable and re-enable replication. If there are any files that are encrypted with the old definition, then you can decrypt and re-encrypt them with the new definition.
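
For that last case, here’s a rough sketch of re-encrypting a file with the new definition using the encrypt-file tool (described in more detail later in this post; the file paths and definition ID are placeholders):

$ bin/encrypt-file \
     --decrypt \
     --input-file /path/to/encrypted.old \
     --output-file /path/to/clear.tmp

$ bin/encrypt-file \
     --encryption-settings-id <new-definition-id> \
     --input-file /path/to/clear.tmp \
     --output-file /path/to/encrypted.new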

If you really want an old encryption settings definition gone, then the best way to do that safely would probably be to set up a new instance with the desired new definition and migrate the data over. In particular:

  1. Add the new definition to the existing instance and make it preferred.
  2. Export the data to an LDIF file that is encrypted with the new definition.
  3. Set up a new server instance with data encryption enabled and only using the new definition (or all definitions you want to preserve but excluding those you want to get rid of).
  4. Import the data from LDIF.

But if you are absolutely confident that an encryption settings definition isn’t in use anymore and want to remove it from an encryption settings database, then we first strongly recommend ensuring that you have a backup of that definition created with the encryption-settings export command so that you can restore it if necessary. Then, you can get rid of it with the encryption-settings delete command, using the --id argument to specify the ID of the encryption settings definition that you want to remove. I strongly recommend doing this on a test instance first and verifying that everything still works after a restart before trying it on anything in production.
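
For example (with a placeholder for the definition ID):

$ bin/encryption-settings delete \
     --id <definition-id>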

Changing the Preferred Encryption Settings Definition

If you already have multiple encryption settings definitions in the database and want to change which one is preferred for new encryption operations, then you can use the encryption-settings set-preferred command with the --id argument to specify the ID of the existing definition that you want to make preferred.

Note that while you can create a new definition and make it preferred in a single command, it may be better to create a new definition that is initially non-preferred, and make sure it is defined across all of the instances before making it the new preferred definition. Although most forms of data encryption only protect the data locally and not when it’s replicated to other instances (we use TLS to protect the data in transit, and then the recipient server’s data encryption configuration to protect it when it’s stored there), there are some cases in which we store encrypted data within the entry itself. For example, if you use the AES256 password storage scheme, the encoded representation of the password will be encrypted with an encryption settings definition, and it won’t be possible to authenticate in other instances until they have been updated with the new definition. By ensuring that the definition is available in all instances before setting it preferred, and then setting it as preferred in all instances, you won’t have to worry about the possibility of instances encountering data encrypted with definitions they don’t have.
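
As a rough sketch of that rollout process (the file paths and definition ID are placeholders):

# On one instance, create the new definition without marking it preferred,
# and then export the definitions to a passphrase-protected file.
$ bin/encryption-settings create \
     --cipher-algorithm AES \
     --key-length-bits 256 \
     --passphrase-file /path/to/definition-passphrase.txt

$ bin/encryption-settings export \
     --output-file /path/to/export/file \
     --passphrase-file /path/to/export-passphrase.txt

# On each of the other instances, import the definitions.
$ bin/encryption-settings import \
     --import-file /path/to/export/file \
     --passphrase-file /path/to/export-passphrase.txt

# Once the definition is available everywhere, make it preferred on each instance.
$ bin/encryption-settings set-preferred \
     --id <definition-id>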

Managing Data Encryption in the Server Configuration

As previously stated, the best time to set up data encryption is when you set up the server. However, it is possible to enable encryption after the fact, and there are also other options that you can configure. Some of the encryption-related configuration properties include:

  • The encrypt-data property in the global configuration controls whether data encryption is enabled in the server. Note that you won’t be able to enable data encryption if you haven’t created any encryption settings definitions, and you won’t be able to disable data encryption if the prevent-disabling-data-encryption restriction is in effect.
  • The encryption-settings-cipher-stream-provider property in the global configuration controls which cipher stream provider is used to protect the encryption settings database. If you’re going to change the cipher stream provider with data encryption enabled, then you need to do so with the server online so that it can automatically re-encrypt the database with the new provider. You won’t be able to change the active cipher stream provider if the prevent-changing-cipher-stream-provider restriction is in effect.
  • The encrypt-backups-by-default property in the global configuration controls whether the server will automatically encrypt backups, even if you don’t use the --encrypt argument. This will be set to true by default if you enable data encryption during setup, and you’ll have to use the --doNotEncrypt argument to create an unencrypted backup (which won’t be allowed if the prevent-unencrypted-backup restriction is in effect).
  • The backup-encryption-settings-definition-id property in the global configuration allows you to explicitly specify which definition should be used to encrypt backups by default. If this is not specified, the server’s preferred definition will be used.
  • The encrypt-ldif-exports-by-default property in the global configuration allows you to indicate whether LDIF exports will be encrypted by default (in which case you need to use the --doNotEncrypt argument to create an unencrypted export, which won’t be allowed if the prevent-unencrypted-ldif-export restriction is in effect). This will be set to true by default if data encryption is enabled during setup.
  • The ldif-export-encryption-settings-definition-id property in the global configuration allows you to specify which definition should be used to encrypt LDIF exports by default. If this is not specified, the server’s preferred definition will be used.
  • The automatically-compress-encrypted-ldif-exports property in the global configuration can be used to control whether LDIF exports should also be gzip-compressed if they are encrypted. This is set to true by default.
  • The AES256 password storage scheme is a reversible scheme that encrypts user passwords with the passphrase that backs an encryption settings definition (even if that definition doesn’t normally use 256-bit AES). We strongly recommend using non-reversible schemes to encode user passwords for better security, but if you have a legitimate need to store passwords in a reversible form, then the AES256 scheme is currently the best option. By default, it will use the preferred encryption settings definition, but you can specify an alternative definition with the encryption-settings-definition-id configuration property.
  • The backup recurring task provides a number of options that allow you to control whether recurring backups are encrypted, and if so whether they are encrypted with an encryption settings definition or a passphrase.
  • The signing-encryption-settings-id property in the crypto manager configuration can be used to indicate which encryption settings definition should be used to generate digital signatures if signing is enabled (e.g., for signed log files). By default, digital signatures will be generated using the preferred encryption settings definition.
  • The encrypt attribute values plugin provides a way of encrypting the values for a specified set of attributes, and those values will appear in encoded form when retrieved by clients. Note that this is only useful for a limited subset of attributes that may be used to hold secret information that the server needs to have in the clear, but that shouldn’t be exposed to clients (e.g., one-time passwords).
  • The LDIF export recurring task provides essentially the same encryption-related options as the backup recurring task.
  • Loggers that write to files provide an option for encrypting the log file (and if so, with either a specified encryption settings definition or the server’s preferred definition). They also support signed logging, and signatures are also generated using an encryption settings definition.
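
For example, here’s a minimal dsconfig sketch that turns on a few of these global configuration properties (this assumes that at least one encryption settings definition already exists, and the dsconfig connection arguments are omitted):

$ bin/dsconfig set-global-configuration-prop \
     --set encrypt-data:true \
     --set encrypt-backups-by-default:true \
     --set encrypt-ldif-exports-by-default:true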

The encrypt-file Tool

In many cases, the Directory Server and its associated tools support reading data from encrypted files. We don’t support encrypting the configuration itself, since we need to be able to read it to get the information needed to instantiate the cipher stream provider, but most other files can be encrypted. This includes things like:

  • Files containing the passphrase needed to access certificate key and trust stores
  • Files containing the passphrase to use in LDAP bind requests
  • Properties files used to provide default values for command-line tool arguments (e.g., tools.properties)
  • LDIF files for use with tools like import-ldif, ldapmodify, ldifsearch, ldifmodify, and ldif-diff.

In addition, the server may write encrypted files for a number of purposes, including encrypted backups, LDIF exports, and log files.

Some of these files are automatically encrypted when you set up the server with data encryption enabled. This includes the config/ads-truststore.pin, config/keystore.pin, and config/truststore.pin files that contain the passphrases needed to access certificate key and trust stores. And although it doesn’t automatically encrypt the config/tools.properties file, it will encrypt the config/tools.pin file if you use the --populateToolPropertiesFile argument with a value of bind-password.

If you would like to encrypt other files for use by the server, then you can use the encrypt-file tool. Files can be encrypted with either an encryption settings definition or a passphrase, and you can also use the tool to decrypt files (although that won’t be allowed if the prevent-decrypt-file restriction is in effect).

Some of the arguments supported by the encrypt-file tool include:

  • --decrypt — Indicates that the input data should be decrypted rather than encrypted.
  • --input-file — Specifies the path to the plaintext file whose contents are to be encrypted (or the path to the encrypted file to be decrypted). If this isn’t specified, then the input data will be read from standard input.
  • --output-file — Specifies the path to which the encrypted (or decrypted) output should be written. If this isn’t provided, then the output data will be written to standard output.
  • --encryption-settings-id — The ID of the encryption settings definition that should be used to encrypt the file. At most one of the --encryption-settings-id, --prompt-for-passphrase, and --passphrase-file arguments may be provided, and if none of them are given, then the file will be encrypted with the preferred encryption settings definition. Note that this argument should not be provided in conjunction with the --decrypt argument, because the encryption header of an encrypted file will indicate which definition was used to encrypt it.
  • --prompt-for-passphrase — Indicates that the encryption key should be generated from a provided passphrase rather than an encryption settings definition, and that the tool should interactively prompt for that passphrase.
  • --passphrase-file — Indicates that the encryption key should be generated from a provided passphrase rather than an encryption settings definition, and that passphrase should be read from a specified file.
  • --decompress-input — Indicates that the input file contains gzip-compressed data. When encrypting data, decompression will be performed before encryption. When decrypting data, decompression will be performed after decryption.
  • --compress-output — Indicates that the output file should be gzip-compressed. When encrypting data, compression will be performed before encryption. When decrypting data, compression will be performed after decryption.

For example, to encrypt a file named “clear.input” to “encrypted.output” using the server’s preferred encryption settings definition, you can use a command like the following:

$ bin/encrypt-file \
     --input-file /path/to/clear.input \
     --output-file /path/to/encrypted.output

And then to decrypt it:

$ bin/encrypt-file \
     --decrypt \
     --input-file /path/to/encrypted.output \
     --output-file /path/to/clear.input
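
And if you’d rather encrypt a file with a passphrase instead of an encryption settings definition, a similar sketch might look like this (the passphrase file path is a placeholder):

$ bin/encrypt-file \
     --passphrase-file /path/to/passphrase.txt \
     --input-file /path/to/clear.input \
     --output-file /path/to/encrypted.output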

Monitoring Cipher Stream Provider Availability

Protecting the encryption settings database with a cipher stream provider that relies on an external service can add a layer of security to the Directory Server in that it makes it more difficult for an attacker who gains access to the system to get at the underlying encryption keys. However, it also introduces a risk because if that external service, or necessary information within that service, becomes unavailable, then it could adversely affect the server’s ability to function properly.

For example, if you’re using the Amazon Key Management Service (KMS) cipher stream provider, then you need to be aware of at least the following potential risks:

  • An outage in the KMS service itself, or in your ability to reach the service
  • Your AWS account becomes unavailable
  • The KMS key that the cipher stream provider relies on is removed or revoked

Any of those issues will prevent the server from being able to open the encryption settings database. This will prevent the server from starting, and it will also prevent you from being able to run tools that require access to the encryption settings database (e.g., to interact with encrypted data in the database, or to read or write an encrypted file). This won’t directly interfere with a server that’s already running because it caches the information it needs to interact with the encryption settings database on startup, although it can inhibit its ability to invoke certain administrative tasks that involve spawning a separate process, like invoking an LDIF export task.

You probably want to be made aware of any outages that might affect the ability to access the encryption settings database as quickly as possible. To help with that, we offer a monitor provider that will periodically verify that the server can open the encryption settings database without relying on any cached data. This is the Encryption Settings Database Accessibility monitor provider, and it offers the following configuration properties:

  • check-frequency — This indicates how frequently the server should check the encryption settings database accessibility. By default, it will check every five minutes.
  • prolonged-outage-duration — This specifies the length of time required for an outage to be considered prolonged. By default, an outage will be considered prolonged once it has lasted for at least twelve hours.
  • prolonged-outage-behavior — This specifies the behavior that the server should take once it decides that the outage is prolonged. You may wish to have the server take an additional action in the event of a prolonged outage, as will be discussed below. Supported values include:

    • none — Don’t take any additional action when the outage becomes prolonged
    • issue-alert — Generate one additional encryption-settings-database-prolonged-outage administrative alert when the outage becomes prolonged
    • enter-lockdown-mode — Place the server in lockdown mode once the outage becomes prolonged
    • shut-down-server — Shut down the server once the outage becomes prolonged

When the monitor provider is active and an outage is detected, the server will generate an encryption-settings-database-inaccessible administrative alert and raise an alarm. Once the outage has been resolved and the database is accessible again, then the server will clear the alarm and issue an encryption-settings-database-access-restored alert.

The primary purpose behind the monitor’s support for taking action after a prolonged outage is to support a case in which the encryption settings database is protected by a service that is managed by a different set of people than those that manage the Directory Server itself, and there may be a legitimate reason for revoking access to the encrypted data. This use case is covered in the next section.

Maintaining a Separation of Duties Between Data Encryption Management and Server Management

As hinted at in the end of the previous section, there may be cases in which the people who manage the Directory Server are different from the people who are responsible for the data contained in it. For example, this may be the case if one organization is hosting the server on behalf of another. Alternatively, it may be the case that the data contained in the server is considered sensitive and access to it should be limited. In such cases, there may be a good reason to limit the amount of access that those responsible for administering the server have to the data contained in that server.

A substantial portion of this can be achieved through a combination of four features that were introduced in the 9.3 release:

  • The ability to impose data encryption restrictions
  • The ability to freeze the encryption settings database
  • The ability to set up the server with a pre-existing encryption settings database
  • The ability to monitor encryption settings database accessibility and take action if that access is revoked

In particular, the organization responsible for the data could do the following:

  1. Set up a temporary Directory Server instance and use it to create an encryption settings database that has an appropriate set of definitions and that is protected with the desired cipher stream provider.
  2. Create a passphrase-protected export of those definitions so that they are backed up for disaster recovery purposes.
  3. Impose a complete set of data encryption restrictions on the encryption settings database.
  4. Freeze the encryption settings database with a passphrase.

At that point, they could provide the following files to the server administrators:

  • The locked-down encryption settings database (the config/encryption-settings/encryption-settings-db file)
  • A dsconfig batch file to use to set up and activate the cipher stream provider used to protect the encryption settings database
  • Any additional metadata files that the cipher stream provider might need to access the encryption settings database (e.g., for the KMS cipher stream provider, this would be the config/encryption-settings-passphrase.kms-encrypted file).

That temporary Directory Server instance can then be destroyed, as it is no longer needed. However, the encryption settings definition export must be reliably and securely backed up, along with the passphrase used to protect the export and the passphrase used to freeze the encryption settings database.

The Directory Server administrators can then create a server profile that will set up the server with that encryption settings database. Among all of the other things that would normally go in the server profile (e.g., to apply the desired configuration, include files in the server root, define JVM arguments, etc.), that profile would need to include the following:

  • In addition to all other appropriate arguments to use when setting up the server, the setup-arguments.txt file needs to include the --encryptDataWithPreExistingEncryptionSettingsDatabase argument.
  • The dsconfig batch file(s) needed to set up the cipher stream provider should go in the pre-setup-dsconfig directory. At present, the only configuration changes that should go in this directory are those needed to set up the cipher stream provider. Any other configuration changes that may need to be applied should go in the dsconfig directory so that they are applied after setup has completed.
  • The encryption settings database should be included as the server-root/pre-setup/config/encryption-settings/encryption-settings-db file.
  • Any metadata files needed by the cipher stream provider should also go in the appropriate locations below the server-root/pre-setup directory structure. For example, for the KMS cipher stream provider, that would be server-root/pre-setup/config/encryption-settings-passphrase.kms-encrypted.
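
To tie that together, here’s a rough sketch of what the relevant portion of such a server profile might look like (the profile and batch file names are just placeholders, and the metadata file shown is the one the KMS cipher stream provider would use):

server-profile/
  setup-arguments.txt  (includes --encryptDataWithPreExistingEncryptionSettingsDatabase)
  pre-setup-dsconfig/
    cipher-stream-provider.dsconfig
  dsconfig/
    post-setup-config.dsconfig
  server-root/
    pre-setup/
      config/
        encryption-settings/
          encryption-settings-db
        encryption-settings-passphrase.kms-encrypted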

If it is desirable to monitor the accessibility of the encryption settings database and potentially take action if it becomes unavailable, then the dsconfig directory should include a batch file with the necessary configuration to set up that monitor. For example:

dsconfig create-monitor-provider \
     --provider-name "Encryption Settings Database Accessibility" \
     --type encryption-settings-database-accessibility \
     --set enabled:true \
     --set "prolonged-outage-duration:8 h" \
     --set prolonged-outage-behavior:shut-down-server

After using manage-profile setup to set up the server with this profile, the server will have data encrypted with the definitions created by the first organization, but in a way that prevents server administrators from exporting those definitions, disabling data encryption, changing the cipher stream provider, exporting the data in the clear or with a known passphrase, or decrypting an encrypted LDIF export.

Note that these restrictions don’t have any effect on an administrator’s ability to access the data over LDAP. However, that could potentially be restricted through a number of other mechanisms (e.g., access controls, client connection policy restrictions, sensitive attribute configuration, etc.), and at the very least, such access could be audited through access logs.

Ping Identity Directory Server 9.3.0.0

We have just released version 9.3.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Summary of New Features and Enhancements

  • Added support for data encryption restrictions [more information]
  • Added the ability to freeze the encryption settings database [more information]
  • Added the ability to set up the server with a pre-existing encryption settings database [more information]
  • Added support for monitoring the availability of the encryption settings database [more information]
  • Added other data encryption improvements [more information]
  • Added an aggregate pass-through authentication handler [more information]
  • Added a PingOne pass-through authentication handler [more information]
  • Improved dsreplication performance in topologies with a large number of servers and/or high network latency between some of the servers
  • Added more options for allowing pre-encoded passwords [more information]
  • Added the ability to use the proxied authorization v1 or v2 request control in password modify extended requests
  • Updated the Directory REST API to provide support for the password modify, get password quality requirements, and suggest password extended operation types [more information]
  • Added a disallowed characters password validator [more information]
  • Added a UTF-8 password validator [more information]
  • Added the ability to include ds-pwp-modifiable-state-json in add operations [more information]
  • Added the ability to automatically apply changes to TLS protocol and cipher suite configuration [more information]
  • Added new account-authenticated and account-deleted account status notification types [more information]
  • Added configuration properties for managing the configuration archive [more information]
  • Added a new replication-missing-changes-risk alert type [more information]
  • Added a new replication-not-purging-obsolete-replicas alert type [more information]
  • Added a new check-replication-domains tool that can list known replication domains and identify any that may be obsolete
  • Added a --showPartialBacklog argument to dsreplication status
  • Added the ability to synchronize Boolean-valued attributes to the PingOne sync destination
  • Updated replace-certificate to support obtaining new certificate information from PEM files
  • Added support for encrypted PKCS #8 private keys [more information]
  • Added caching support to the PKCS #11 key manager provider [more information]
  • Added the ability to specify the start and end times for the range of log messages to include in collect-support-data archives when invoking the tool as an administrative task

Summary of Bug Fixes

  • Fixed an issue when modifying ds-pwp-modifiable-state-json with other attributes [more information]
  • Fixed an issue that could prevent the server from properly building indexes with very long names
  • Fixed an issue that could cause the server to omit matching entries when configured with compact-common-parent-dn values [more information]
  • Fixed an issue in which failover may not work properly after updating a Synchronization Server instance with manage-profile replace-profile
  • Fixed an issue with replace modifications for attributes containing variants with options [more information]
  • Improved support for passwords containing characters with multiple encodings [more information]
  • Fixed an issue that could prevent obsolete replicas from being automatically purged in certain circumstances
  • Fixed an issue that could prevent the servers in a replication topology from being able to select the authoritative server for maintaining information in the topology registry
  • Increased timeouts used by the dsreplication tool to reduce the chance that they would be incorrectly encountered when interacting with a large replication topology
  • Fixed an issue that caused the Directory REST API to always include the permissive modify request control when updating entries
  • Improved access control behavior for the password policy state extended operation [more information]
  • Fixed an issue in which subtree searches based at the server’s root DSE could omit entries from backends with base DNs subordinate to those of other backends
  • Fixed an issue that could prevent a user from using grace logins to change their own password in a modify request that contained the proxied authorization request control
  • Fixed an issue with substring filters containing logically empty substrings [more information]
  • Improved error handling when using automatic authentication with client certificates [more information]
  • Improved Directory Proxy Server error handling when using the rebind authorization method [more information]
  • Fixed an issue that prevented including the permit-export-reversible-passwords privilege in the default set of root privileges
  • Fixed an issue that could cause manage-profile setup to complain about being unable to find certain utilities used by the collect-support-data tool
  • Fixed an error that could occur if an archived configuration file was removed in the middle of an attempt to back up the config backend
  • Fixed an issue that prevented the Directory Proxy Server from logging search result entry messages for entries passed through from a backend server
  • Fixed an issue when synchronizing account state from Active Directory when using modifies-as-creates
  • Suppressed servlet information in HTTP error messages by default
  • Restricted the RSA key size for inter-server certificates to a maximum of 3072 bits
  • Fixed an issue with base DN case sensitivity when enabling replication with a static topology
  • Changed the result code used when rejecting an attempt to change a password that is within the minimum age from 49 (invalidCredentials) to 53 (unwillingToPerform)
  • Fixed an issue that could cause the server to return multiple password validation details response controls in the response to a password modify extended request
  • Fixed an issue that could prevent the server from returning a generated password for a password modify extended operation processed with the no-operation request control

Encryption Settings Database Improvements

We have made a set of changes to the way that the server manages and interacts with the encryption settings database. When used in combination, this can allow for a separation of duties between those responsible for managing the Directory Server itself and those responsible for managing data encryption, which could be used to limit the access that server administrators have to encrypted data. However, even in environments where this strict separation of duties is not required, these changes can still substantially improve the overall security of the directory environment.

These changes come in the form of four key enhancements:

  • We have introduced the ability to impose restrictions on the ways that administrators can interact with encrypted data. These restrictions are defined in the encryption settings database itself and can prevent administrators from doing any or all of the following:

    • Disabling data encryption
    • Changing the cipher stream provider used to protect the encryption settings database
    • Exporting the encryption settings definitions
    • Exporting or backing up backend data in unencrypted form
    • Exporting or backing up backend data in a form that is encrypted with a supplied passphrase rather than an encryption settings definition
    • Using the encrypt-file tool to decrypt files

  • We have added the ability to freeze the encryption settings database with a passphrase. If the encryption settings database has been frozen, then no changes may be made to it, including creating, importing, or removing definitions, changing which is the preferred definition, and altering the set of data encryption restrictions. If any changes are needed, then the encryption settings database may be unfrozen using the same passphrase that was initially used to freeze it.

  • We have added the ability to set up the server with a pre-existing encryption settings database. This database should already have the desired set of definitions, and it may be configured with data encryption restrictions and frozen so that no changes will be allowed. This database will also be tied to a specific cipher stream provider used to protect its contents. To set up the server with a pre-existing encryption settings database, you should use manage-profile setup with a server profile that has the following characteristics:

    • The setup-arguments.txt file must contain the new --encryptDataWithPreExistingEncryptionSettingsDatabase argument.
    • The configuration changes needed to set up the associated cipher stream provider must be included in dsconfig batch files contained in the pre-setup-dsconfig directory.
    • The encryption settings database itself, along with any metadata files that the cipher stream provider might need, should be included in the appropriate locations below the server-root/pre-setup directory.
  • We have added support for a new monitor provider that can periodically ensure that the server can read the contents of the encryption settings database without relying on any caching that the cipher stream provider may normally use to improve performance and reliability. Not only does this offer better overall monitoring for the health of the server, but it can also be used to take action if the information that it needs to interact with the encryption settings database has been disabled. For example, if the cipher stream provider relies on an external key or secret (e.g., from Amazon Key Management Service, Amazon Secrets Manager, Azure Key Vault, CyberArk Conjur, or HashiCorp Vault) to be able to unlock the encryption settings database, and that key or secret has been intentionally removed or revoked, then it may be desirable to take action to limit the server’s ability to access encrypted data, like entering lockdown mode or shutting down entirely.

When all of these are combined, and the encryption settings database is protected by a cipher stream provider that relies on an external service, the server administrators can maintain the server with data encryption enabled, but with substantially restricted access to the data that it contains. Although it’s not yet available as an option, this could be useful in environments like PingOne Advanced Services, where Ping personnel can manage a Directory Server deployment on behalf of an organization with limited access to that organization’s data.

Other Data Encryption Improvements

We have made a number of additional improvements in the server’s support for data encryption. These include:

  • We added a new --key-factory-iteration-count argument to the encryption-settings create command to make it possible to specify the PBKDF2 iteration count to use for the new definition. When creating a new encryption settings definition that is backed by a randomly generated secret, the server will now default to using an iteration count of 600,000 in accordance with the latest OWASP guidelines.
  • We updated most cipher stream providers to make it possible to explicitly specify the PBKDF2 iteration count that they use when deriving the key used to protect the encryption settings database. They also now use a higher default iteration count of 600,000 when creating a new database.
  • We updated the file-based cipher stream provider to support using a separate metadata file with additional details about the encryption that it uses to protect the encryption settings database. When setting up a new instance of the server in a manner that uses this cipher stream provider, a metadata file will be automatically generated to allow it to use stronger encryption to protect the database than has been used in the past.
  • We have improved the strength of the encryption used to protect encryption settings exports, backups, LDIF exports, encrypted log files, and other types of file encryption. We now prefer 256-bit AES over 128-bit AES when it’s available, and we use a higher PBKDF2 iteration count to protect the key.
  • We have improved the performance of file encryption and decryption operations performed by the server in the common case in which the encryption uses an encryption settings definition rather than a separate passphrase. Although the server (or standalone tools that need to access encryption settings definitions) may take a little longer to start up with the stronger encryption settings that are in place, the use of caching should dramatically reduce the cost of subsequent encryption and decryption operations.
  • We updated the encryption settings backend to provide additional information about the definitions contained in the encryption settings database. The base entry for the backend will also indicate whether the encryption settings database is frozen and/or configured with any data encryption restrictions.
  • We updated setup so that if it is configured to generate a tools.pin file with the default password that command-line tools may use to authenticate to the server, that password will now be automatically encrypted if data encryption is enabled.
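
As an illustrative sketch of how the new argument might be used (the --key-factory-iteration-count argument comes from the list above, while the --cipher-algorithm and --key-length-bits arguments are assumptions about the tool’s existing options), creating a definition with an explicit PBKDF2 iteration count could look something like:

encryption-settings create \
     --cipher-algorithm AES \
     --key-length-bits 256 \
     --key-factory-iteration-count 600000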

New Pass-Through Authentication Handlers

The Directory Server has long supported pass-through authentication, in which a bind attempt against a local entry may ultimately be forwarded to an external service to verify the credentials, optionally setting those credentials in the local entry if they are confirmed to be correct.

Initially, pass-through authentication was only supported for other LDAP servers. Later, we introduced support for passing through authentication attempts to PingOne. In the 9.0 release, we introduced a new pluggable pass-through authentication plugin that made it possible to use the Server SDK to develop custom authentication handlers that could be used to target other types of services.

Previously, only a single instance of the pass-through authentication plugin (of any type) could be active in the server at once. In the 9.3 release, we are introducing a new aggregate pass-through authentication handler for use with the pluggable pass-through authentication plugin. This aggregate handler allows you to configure pass-through authentication to support multiple external services, of the same or different types. You can configure each of the pass-through authentication handlers to indicate which types of bind requests it supports, and you can optionally attempt to use multiple handlers for the same bind operation under certain circumstances.

We have also added a new PingOne pass-through authentication handler that works in conjunction with the pluggable pass-through authentication plugin to allow passing through bind requests to PingOne. This handler offers essentially the same functionality as the PingOne pass-through authentication plugin, but it can be used in conjunction with the aggregate pass-through authentication handler so that pass-through authentication attempts can use PingOne in addition to other services.

More Options for Allowing Pre-Encoded Passwords

By default, the server does not allow clients to set pre-encoded passwords. This restriction can be lifted via the allow-pre-encoded-passwords configuration property in the password policy, but we strongly discourage that because the server can’t perform any validation for pre-encoded passwords. A client could use this to set a password that doesn’t meet the server’s password strength requirements. And because most password storage schemes have many different ways of encoding the same password (which is important to protect against attacks using precomputed password dictionaries), this could also be exploited to allow a user to continue using the same password indefinitely, in spite of password expiration or other related settings.

The biggest risk in allowing pre-encoded passwords lies in self password changes rather than administrative password resets. Previously, administrators could override the server’s prohibition against pre-encoded passwords in one of two ways:

  • If they are authorized to use the password update behavior request control, then they can use it to allow or reject pre-encoded passwords on a per-operation basis.
  • If they have the bypass-pw-policy privilege, then they will be allowed to set pre-encoded passwords for other users (and do other things that the password policy configuration may prevent by default).

Previously, the allow-pre-encoded-passwords configuration property only offered two values: false (the default setting, in which the server would not allow clients to set pre-encoded passwords) or true (in which the server would allow any client to set pre-encoded passwords). In the 9.3 release, we are making it possible to use three additional values for this property:

  • add-only — Indicates that the server will allow administrators to include pre-encoded passwords in add requests, but will continue to reject pre-encoded passwords for both self password changes and administrative password resets.
  • admin-reset-only — Indicates that the server will allow administrators to perform an administrative password reset with a pre-encoded password, but will continue to reject pre-encoded passwords in add requests and for self password changes.
  • add-and-admin-reset-only — Indicates that the server will allow administrators to include pre-encoded passwords in add requests and when performing administrative password resets, but will continue to reject pre-encoded passwords in self password changes.

In cases where an application has a legitimate need to set pre-encoded passwords for users, but it’s not feasible to update the application to use the password update behavior request control or to give the account it uses the bypass-pw-policy privilege, these new options may make it possible for that application to set pre-encoded passwords for users while still preventing them in self password changes.
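
For example, to allow pre-encoded passwords only in add requests, a change along the following lines could be used (a sketch that assumes the target policy is named “Default Password Policy”):

dsconfig set-password-policy-prop \
     --policy-name "Default Password Policy" \
     --set allow-pre-encoded-passwords:add-only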

Directory REST API Support for More Password Operations

We have updated the Directory REST API to add support for equivalents to the following LDAP extended operations:

  • The standard password modify extended operation, which makes it possible to perform a self password change or an administrative password reset. Although you could previously change a user’s password by updating the password attribute, this operation offers a number of advantages, including:

    • You don’t need to know which attribute is used to store passwords in the target user’s entry.
    • There’s a dedicated field for supplying the user’s current password, which the password policy may require for self password changes.
    • You can optionally omit a new password and have the server automatically generate a new one and return it in the response.
    • It can be used in conjunction with a password reset token to allow a user to perform a self password change in cases where their account may be otherwise unusable.
  • The proprietary get password quality requirements extended operation, which can be used to obtain a list of the requirements that new passwords will be required to satisfy, in both machine-parsable and human-readable forms.
  • The proprietary generate password extended operation (which is called “suggest passwords” in the Directory REST API), which can be used to cause the server to generate suggested new passwords for a user.

New Password Validators

We have added a new disallowed characters password validator that makes it possible to reject passwords that contain any of a specified set of characters. You can define characters that are not allowed to appear anywhere in a password, as well as characters that are not allowed to appear at the beginning and/or the end of a password.

We have also added a new UTF-8 password validator that can be used to ensure that only passwords provided as valid UTF-8 strings will be allowed. You can optionally choose to limit passwords to only ASCII characters or to allow non-ASCII characters, and you can also specify which classes of characters (e.g., letters, numbers, punctuation, symbols, spaces, etc.) should be allowed.

Support for ds-pwp-modifiable-state-json in Add Operations

If you enable the Modifiable Password Policy State Plugin in the server, then you can use the ds-pwp-modifiable-state-json operational attribute to set certain aspects of a user’s password policy state, including:

  • Whether the account is administratively disabled
  • Whether the account is locked as a result of too many failed authentication attempts
  • Whether the account is in a “must change password” state
  • The password changed time
  • The account activation time
  • The account expiration time
  • The password expiration warned time

Previously, the ds-pwp-modifiable-state-json attribute could only be used in a modify operation to alter the password policy state for an existing user. As of the 9.3 release, we now also allow it to be used in an add operation to specify state information for the account being created.
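
As a rough illustration, an add request expressed in LDIF might include the attribute alongside the rest of the entry. Note that the JSON field names shown here are hypothetical placeholders rather than the exact field names that the server expects:

dn: uid=test.user,ou=People,dc=example,dc=com
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
uid: test.user
givenName: Test
sn: User
cn: Test User
userPassword: correct-horse-battery-staple
ds-pwp-modifiable-state-json: {"must-change-password":true, "account-is-disabled":false}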

Fix Database Ordering With compact-common-parent-dn Values

The compact-common-parent-dn configuration property can be used to reduce the amount of disk space consumed by the database by tokenizing portions of entry DNs that many entries have in common. For example, in a database in which most of the entries are below “ou=People,dc=example,dc=com”, defining that as a compact-common-parent-dn value could reduce the space needed to store the DN of each entry below it by up to 26 bytes. This compaction happens not only in the encoded representation of the entry, but also in an internal index that we use to map the DNs of entries to the identifier of the record that contains the data for that entry, so it can double the space savings.

Unfortunately, we discovered a bug in the way that the server maintained that DN-to-ID database when custom compact-common-parent-dn values were specified. This bug could only appear under very specific circumstances, including:

  • When one or more compact-common-parent-dn values were specified that are two or more levels below the base DN for the backend.
  • When processing an unindexed search in which the base DN for the search is below the base DN for the backend, but above one or more of the compact-common-parent-dn values.

In such cases, the search could have incorrectly omitted entries from the search results that were below those compact-common-parent-dn values. This was due to an issue with the way that we ordered records in that DN-to-ID database.

We have fixed the problem in the 9.3 release. However, because the issue relates to the order in which records were stored in the database, if you are affected by this problem, then you will need to export the contents of the database to LDIF and re-import it. This isn’t something that you need to do unless you have configured compact-common-parent-dn values that are at least two levels below the base DN for the backend.

A Fix for Modifying ds-pwp-modifiable-state-json With Other Attributes

When we initially introduced support for the ds-pwp-modifiable-state-json operational attribute, we did not allow altering it in conjunction with any other attribute in the user’s entry. We also included an optimization in the plugin that handles that attribute so that if the requested change did not actually alter the user’s password policy state (e.g., because the new value only attempted to set state properties to values that they already had), the plugin would tell the server to skip much of the normal processing for that modify operation.

We have since updated the server to allow you to alter other attributes (except the password attribute) in the same request as one that modifies ds-pwp-modifiable-state-json. However, in doing so, we neglected to update the plugin so that it no longer skipped the remainder of the core modify operation processing if the ds-pwp-modifiable-state-json update did not result in any password policy state changes but there were still other, unrelated changes that should be applied. In such cases, the server could have failed to apply changes to those other attributes. This has been fixed, and other modifications in the request will still be processed even if a change to ds-pwp-modifiable-state-json does not alter the user’s password policy state.

Automatically Apply Configuration Changes to TLS Protocols and Cipher Suites

By default, the server automatically selects an appropriate set of TLS protocols and cipher suites that it will use for secure communication. This default configuration should provide a good level of security, avoiding options with known weaknesses or without support for forward secrecy, while still remaining compatible with virtually any client from the last fifteen years.

However, the server does allow you to explicitly configure the set of TLS protocols and/or cipher suites if you have a need to do so. Previously, making any such changes required you to either restart the affected connection handler or the entire server for them to actually take effect. As of the 9.3 release, these changes will now automatically take effect for the LDAP connection handler so that any new secure sessions that are negotiated after the change is made will use the updated settings.

Note that this is currently only supported for the LDAP connection handler. It is still necessary to perform a restart to make the change take effect for other types of connection handlers (like the HTTP or JMX handlers), or to make the change take effect for replication.
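
As a rough sketch (the property names and handler name shown are assumptions about the connection handler configuration, and the values are illustrative), an explicit protocol configuration might be applied with something like:

dsconfig set-connection-handler-prop \
     --handler-name "LDAP Connection Handler" \
     --set ssl-protocol:TLSv1.3 \
     --set ssl-protocol:TLSv1.2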

New Account Status Notification Types

We have added a new account-authenticated account status notification type that can be used to generate an account status notification any time a user authenticates with a bind operation that matches the criteria contained in an account status notification handler’s account-authentication-notification-result-criteria configuration property.

We have added a new account-deleted account status notification type that can be used to generate an account status notification any time a user’s account is removed with a delete operation that matches the criteria contained in an account status notification handler’s account-deletion-notification-request-criteria configuration property.

Fix a Replace Issue for Modifies of Attributes With Options

We have fixed an issue that could cause the server to behave incorrectly when processing a replace modification for an attribute that has some variants with attribute options in the target entry. If the replace modification does not include any values for the target attribute, the server would have previously removed all variants of the attribute from the entry, including those with and without attribute options. It will now correctly only remove the variant without any attribute options.

Improved Support for Passwords Containing Characters With Multiple Encodings

Unicode is an international standard that defines the set of characters that computers are intended to support. This includes a wide range of characters encompassing most written languages on earth, including not only the core set of ASCII characters used in English, but also characters with diacritical marks used in many Latin-based languages, non-Latin characters like Chinese hanzi and Japanese kanji, and even emojis.

In some cases, Unicode supports multiple ways of encoding the same character. For example, the “ñ” (Latin small letter N with tilde) character can be represented in two different ways:

  • As the single Unicode character U+00F1
  • As a lowercase ASCII letter n followed by Unicode character U+0303, which is a special combining mark that basically means “put a tilde over the previous character”
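
A small, self-contained Java sketch (using only the standard java.text.Normalizer class rather than any server or LDAP SDK API) illustrates that the two representations above use different code points but normalize to the same string:

import java.text.Normalizer;

public class TildeDemo
{
  public static void main(final String[] args)
  {
    final String precomposed = "\u00F1";   // single character: n with tilde
    final String decomposed  = "n\u0303";  // 'n' followed by a combining tilde

    // The raw strings are not equal because they use different code points.
    System.out.println(precomposed.equals(decomposed)); // false

    // After NFC normalization, both collapse to the same representation.
    final String normalized = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
    System.out.println(precomposed.equals(normalized)); // true
  }
}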

Previously, when encoding passwords, the Directory Server would always encode them using the exact set of UTF-8 bytes provided by the client in the request used to set that password, and the server would always use the exact set of UTF-8 bytes included in a bind request as a way of attempting to verify whether the provided password was correct. This approach works just fine in the vast majority of cases, but it has the potential to fail in circumstances where the password contains characters that have multiple Unicode representations, and the encoding used in the bind request is different from the encoding used when the password was originally set.

As of the 9.3 release, we have improved our support for authenticating with passwords that contain characters with multiple Unicode representations. If a password storage scheme indicates that a given plaintext password could not have been used to create the encoded stored password, but if the provided plaintext password contains one or more non-ASCII characters, then we will check to see if that password may have alternative encodings that could have been used to generate the stored password.

Improved Access Control Behavior for the Password Policy State Extended Operation

The password policy state extended operation provides the ability to retrieve and update a wide range of properties that are part of a user’s password policy state. In the 9.3 release, we have improved the way that it operates to avoid a common pitfall that has caused issues in the past for clients that are subject to access control restrictions.

Previously, whenever the extended operation handler retrieved the entry for the target user, it did so only under the authority of the authenticated user. If that user was subject to access control restrictions and didn’t have the permission to retrieve some or all of the operational attributes that are used to maintain password policy state information in a user’s entry, then the version of the entry that was retrieved would not contain those attributes, even if they were present. When using that entry to construct a view of the user’s password policy state, this might result in a state that is different from what the server would actually use when interacting with that account.

For example, if the target account has been locked as a result of too many failed authentication attempts, but the requester doesn’t have permission to see the attribute used to maintain information about those failed attempts, then the password policy state extended operation could report that the account was not locked even though it was. Note that this did not affect the server’s behavior when actually enforcing the account lockout, but it could still provide misleading or unexpected behavior for the client that issued the password policy state extended request.

As of the 9.3 release, we have changed the extended operation handler’s behavior to avoid this kind of problem. The server will still verify that the requester has permission to retrieve the target user’s account, but it will now re-retrieve that account using an internal root user that is not subject to access control restrictions. This ensures that the extended operation handler will have access to all of the operational attributes that are needed to construct an accurate representation of the user’s password policy state.

Note that this new behavior really only affects attempts to retrieve information about a user’s password policy state. It does not have any effect on attempts to update that state. If the client attempts to use the password policy state extended operation to make a change to a user’s password policy state, but the requester does not have permission to write to the necessary operational attribute(s) in the target user’s entry, then the update attempt will continue to fail.

Configuration Properties for Managing the Configuration Archive

We have updated the config backend to add support for a few new configuration properties that can help better manage the configuration archive and reduce unnecessary bloat that it may cause. The new properties include:

  • maintain-config-archive — Indicates whether the server should maintain a configuration archive at all. The configuration archive is maintained by default, and disabling it will not remove existing archived configurations.
  • max-config-archive-size — Specifies the maximum number of archived configurations that the server should maintain. By default, this is unlimited. If a limit is set and the number of archived configurations exceeds it, then the oldest configurations will be removed to make room for newer ones.
  • insignificant-config-archive-base-dn — Specifies base DNs for configuration changes that should not be preserved in the configuration archive. If a configuration change only affects entries below one of these base DNs, then it will not be maintained separately in the configuration archive. By default, we have included a value of “cn=topology,cn=config” so that changes to entries in the topology registry are excluded from the configuration archive. Certain updates to the topology registry, like adding a new replica into the topology, may result in a large number of changes that previously added a lot of bloat to the configuration archive.
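
As an illustrative sketch (assuming the config backend is named “config” in the configuration), the first two properties might be adjusted with something like:

dsconfig set-backend-prop \
     --backend-name config \
     --set maintain-config-archive:true \
     --set max-config-archive-size:100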

Improved Substring Filter Handling for Logically Empty Substrings

Substring search filters are not allowed to contain empty (zero-length) substrings. If a client attempted to process a search with a substring filter that contains an empty substring, the server would have properly rejected it. However, there were cases in which the server did not properly handle substring search filters that contained non-empty substrings that normalized to empty strings (for example, a substring filter that targeted the telephoneNumber attribute in which one of the substrings contained only characters that are considered insignificant when matching telephone number values). The server would have incorrectly considered that normalized-to-empty substring as matching anything rather than nothing, and in some cases, that could cause the search to return unexpected matches. This has been corrected, and substring filters with substrings that normalize to empty values will now properly never match anything.

Improved Error Handling With Automatic Certificate-Based Authentication

By default, whenever a client presents their own certificate chain to the server during TLS negotiation and wants to use that certificate chain to authenticate, it needs to send a SASL EXTERNAL bind request to the server to cause it to perform the appropriate authentication processing. However, LDAP connection handlers offer an auto-authenticate-using-client-certificate configuration property that can cause them to attempt to automatically authenticate a client that presented its own certificate chain as soon as the TLS negotiation completes. Because this automatic authentication attempt happens without any explicit request from the client, there’s no way for the server to indicate whether it completed successfully.

Previously, if an automatic authentication attempt failed, the server would keep the connection alive, but in an unauthenticated state. This could yield unexpected behavior for applications that issued requests with the expectation that they had authenticated, only to find those requests rejected due to access control restrictions. As of the 9.3 release, the server will now immediately terminate any client connection that presented its own certificate chain if auto-authenticate-using-client-certificate is set to true but the server was unable to successfully authenticate that client for some reason.
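
For reference, a sketch of enabling automatic certificate-based authentication on a secure connection handler (the handler name is an assumption about your configuration) might look like:

dsconfig set-connection-handler-prop \
     --handler-name "LDAPS Connection Handler" \
     --set auto-authenticate-using-client-certificate:true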

Improved Rebind Error Handling in the Directory Proxy Server

The Directory Proxy Server allows you to set an authorization-method value of rebind, which may be useful in cases where the backend server doesn’t support the intermediate client or proxied authorization request controls (e.g., Active Directory). In such cases, the Directory Proxy Server will remember the credentials that the client used to authenticate, and it will re-bind with those credentials before sending a request that should be authorized as that user.

Previously, if the rebind attempt failed for any reason, the Directory Proxy Server would have immediately reported a failure for the attempt to process a request that should have been authorized as that user. We have updated this behavior so that if the failure suggests that the underlying connection used for that attempt may no longer be valid, the server may retry the attempt against a different server or on a newly created connection to the same server.

New Replication-Related Alert Types

We have defined a couple of new administrative alert types that the server can use to notify administrators of replication-related concerns:

  • replication-missing-changes-risk — This alert type will be used if the server has developed a replication backlog that is large enough that the server is at risk of missing changes if it can’t catch up.
  • replication-not-purging-obsolete-replicas — This alert type will be used when bringing replication online (most likely during server startup) if the replication-purge-obsolete-replicas configuration property is not set to true. This property is set to true by default as of the 9.2 release, so this primarily applies to older servers that have been updated. We strongly recommend enabling automatic purging of obsolete replicas to reduce unnecessary overhead in replication storage and network traffic, and this alert can be used to ensure that administrators are aware of this recommendation.

Encrypted PKCS #8 Private Keys

We have introduced support for encrypted PKCS #8 private key PEM files. This allows you to make use of private key files that don’t expose the key in the clear. Encrypted private keys require a password to access their contents.

It should be possible to use encrypted private key files anywhere that you can use an unencrypted private key file, including:

  • When importing a certificate chain with manage-certificates import
  • When exporting a private key with manage-certificates export-private-key
  • When setting up the server with a certificate chain and private key obtained from PEM files
  • When using replace-certificate to replace a listener or inter-server certificate with a certificate chain and private key obtained from PEM files

PKCS #11 Caching

The PKCS #11 key manager provider can be used to allow the server to obtain a listener certificate chain from a PKCS #11 token, like a hardware security module (HSM). Whenever the server needs to negotiate a new TLS session with a client, it can access the PKCS #11 token to identify the listener certificate chain that should be used for that negotiation. This processing may require multiple accesses to the PKCS #11 token. In cases where the PKCS #11 token is accessed remotely over a network, and especially when there is significant network latency involved in that access, this can have a notable impact on the performance of the TLS negotiation process.

In the 9.3 release, we have introduced a new pkcs11-max-cache-duration property in the PKCS #11 key manager provider configuration. By setting this to a nonzero value, the server can use a degree of caching to eliminate the need for some of the interaction with the HSM, which can dramatically reduce the number of requests that need to be made to the PKCS #11 token.
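
For example, a sketch of enabling a five-minute cache might look like the following (the provider name and duration format are assumptions about your configuration):

dsconfig set-key-manager-provider-prop \
     --provider-name PKCS11 \
     --set "pkcs11-max-cache-duration:5 m"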

Note that the use of caching does carry a risk of incorrect or unexpected behavior if the contents of the PKCS #11 token are altered so that the cached results are no longer accurate. As such, if you decide to enable caching, we recommend temporarily disabling it when making changes to the contents of the PKCS #11 token, and then re-enabling it once the changes are complete.

UnboundID LDAP SDK for Java 6.0.9

We have just released version 6.0.9 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository.

As announced in the previous release, the LDAP SDK source code is now maintained only at GitHub. The SourceForge repository is still available for its discussion forum, mailing lists, and release downloads, but the source code is no longer available there.

You can find the release notes for the 6.0.9 release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We made it possible to customize the set of result codes that the LDAP SDK uses to determine whether a connection may no longer be usable. Previously, we used a hard-coded set of result codes, and that is still the default, but you can now override that using the ResultCode.setConnectionNotUsableResultCodes method.
  • We added a new HTTPProxySocketFactory class that can be used to establish LDAP and LDAPS connections through an HTTP proxy server.
  • We added a new SOCKSProxySocketFactory class that can be used to establish LDAP and LDAPS connections through a SOCKSv4 or SOCKSv5 proxy server.
  • We updated the ldap-diff tool to add a --byteForByte argument that can be used to indicate that it should use a byte-for-byte comparison when determining whether two attribute values are equivalent rather than using a schema-aware comparison (which may ignore insignificant differences in some cases, like differences in capitalization or extra spaces). Previously, the tool always used byte-for-byte matching, but we decided to make it a configurable option, and we determined that it is better to use schema-aware comparison by default.
  • We fixed an issue in which a non-default channel binding type was not preserved when duplicating a GSSAPI bind request. We also added a GSSAPIBindRequest.getChannelBindingType method to retrieve the selected channel binding type for a GSSAPI bind request.
  • We added a ResultCode.getStandardName method that can be used to retrieve the name for the result code in a form that is used to reference it in standards documents. Note that this may not be available for result codes that are not defined in known specifications.
  • We added a mechanism for caching the derived secret keys used for passphrase-encrypted input and output streams so that it is no longer necessary to re-derive the same key each time it is used. This can dramatically improve performance when the same key is used multiple times.
  • We updated the StaticUtils.isLikelyDisplayableCharacter method to consider additional character types to be displayable, including modifier symbols, non-spacing marks, enclosing marks, and combining spacing marks.
  • We added a new StaticUtils.getCodePoints method that can be used to retrieve an array of the code points that comprise a given string.
  • We added a new StaticUtils.unicodeStringsAreEquivalent method that can be used to determine whether two strings represent an equivalent string of Unicode characters, even if they use different forms of Unicode normalization.
  • We added a new StaticUtils.utf8StringsAreEquivalent method that can be used to determine whether two byte arrays represent an equivalent UTF-8 string of Unicode characters, even if they use different forms of Unicode normalization.
  • We added a new StaticUtils.isValidUTF8WithNonASCIICharacters method that can be used to determine whether a given byte array represents a valid UTF-8 string that contains at least one non-ASCII character.
  • We updated the client-side support for the collect-support-data administrative task to make it possible to specify the start and end times for the set of log messages to include in the support data archive.
  • We updated the documentation so that the latest versions of draft-melnikov-sasl2 and draft-melnikov-scram-sha-512 are included in the set of LDAP-related specifications.

UnboundID LDAP SDK for Java 6.0.8

We have just released version 6.0.8 of the UnboundID LDAP SDK for Java. It is available for download from GitHub and SourceForge, and it is available in the Maven Central Repository.

Note that this is the last release for which the LDAP SDK source code will be maintained in both the GitHub and SourceForge repositories. The LDAP SDK was originally hosted in a subversion repository at SourceForge, but we switched to GitHub as the primary repository a few years ago. We have been relying on GitHub’s support for accessing git repositories via subversion to synchronize changes to the legacy SourceForge repository, but that support is being discontinued. The SourceForge project will continue to remain available for the discussion forum, mailing lists, and release downloads, but up-to-date source code will only be available on GitHub.

You can find the release notes for the 6.0.8 release (and all previous versions) at https://docs.ldap.com/ldap-sdk/docs/release-notes.html, but here’s a summary of the changes:

  • We added a DN.getDNRelativeToBaseDN method that can be used to retrieve the portion of a DN that is relative to a given base DN (that is, the portion of the DN with the base DN stripped off). For example, if you provide it with a DN of “uid=test.user,ou=People,dc=example,dc=com” and a base DN of “dc=example,dc=com”, then the method will return “uid=test.user,ou=People”.
  • We added LDAPConnectionPool.getServerSet and LDAPThreadLocalConnectionPool.getServerSet methods that can be used to retrieve the server set that the connection pool uses to establish new connections for the pool.
  • We updated the Filter class to add alternative methods with shorter names for constructing search filters from their individual components (see the sketch after this list). For example, as an alternative to calling the Filter.createANDFilter method for constructing an AND search filter, you can now use Filter.and, and as an alternative to calling Filter.createEqualityFilter, you can now use Filter.equals. The older versions with longer method names will remain available for backward compatibility.
  • We added support for encrypted PKCS #8 private keys, which require a password to access the private key. The PKCS8PrivateKey class now provides methods for creating the encrypted PEM representation of the key, and the PKCS8PEMFileReader class now has the ability to read encrypted PEM files. We also updated the manage-certificates tool so that the export-private-key and import-certificate subcommands now support encrypted private keys.
  • We updated PassphraseEncryptedOutputStream to use a higher key factory iteration count by default. When using the strongest available 256-bit AES encryption, it now follows the latest OWASP recommendation of 600,000 PBKDF2 iterations. You can still programmatically explicitly specify the iteration count when creating a new output stream if desired, and we have also added system properties that can override the default iteration count without any code change.
  • We added a PassphraseEncryptedOutputStream constructor that allows you to provide a PassphraseEncryptedStreamHeader when creating a new instance of the output stream. This will reuse the secret key that was already derived for the provided stream header (although with a newly generated initialization vector), which can be significantly faster than deriving a new secret key from the same passphrase.
  • We added a new ObjectTrio utility class that can be useful in cases where you need to reference three typed objects as a single object (for example, if you want a method to be able to return three objects without needing to define a new class that encapsulates those objects). This complements the existing ObjectPair class that supports two typed objects.
  • We updated the documentation to include RFC 9371 in the set of LDAP-related specifications. This RFC formalizes the process for requesting a private enterprise number (PEN) to use as the base object identifier (OID) for your own definitions (e.g., for use in defining custom attribute types or object classes). The OID-related documentation has also been updated to provide a link to the IANA site that you can use to request an official base OID for yourself or your organization.
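
As a minimal sketch of the shorter Filter factory methods mentioned above (assuming the two-argument equality form shown here matches the actual method signature), the following builds a simple AND filter:

import com.unboundid.ldap.sdk.Filter;

public class FilterExample
{
  public static void main(final String[] args)
  {
    // Equivalent to Filter.createANDFilter(Filter.createEqualityFilter(...), ...).
    final Filter filter = Filter.and(
         Filter.equals("objectClass", "person"),
         Filter.equals("givenName", "John"));

    // Should print: (&(objectClass=person)(givenName=John))
    System.out.println(filter.toString());
  }
}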

Ping Identity Directory Server 9.2.0.0

We have just released version 9.2.0.0 of the Ping Identity Directory Server. See the release notes for a complete overview of changes, but here’s my summary:

Potential Backward Compatibility Issues

Summary of New Features and Enhancements for All Products

  • Added support for Java 17 [more information]
  • Added support for accessing external services through an HTTP proxy server [more information]
  • Added a Prometheus monitoring servlet extension [more information]
  • Added support for authenticating to Amazon AWS using an IRSA role [more information]
  • Added support for generating digital signatures with encryption settings definitions [more information]
  • Updated replace-certificate when running in interactive mode so that it can re-prompt for a certificate file if the initial file existed but did not contain valid certificate data

Summary of New Features and Enhancements for the Directory Server

  • Improved support for data security auditors [more information]
  • Added new secure, connectioncriteria, and requestcriteria access control keywords [more information]
  • Added support for defining resource limits for unauthenticated clients [more information]
  • Added Argon2i, Argon2d, and Argon2id password storage schemes to supplement the existing Argon2 scheme [more information]
  • Changed the default value of the replication-purge-obsolete-replicas global configuration property from false to true
  • Updated migrate-ldap-schema to support migrating attribute type definitions from Active Directory in spite of their non-standards-compliant format
  • Improved the usage text for the dsreplication enable command

Summary of New Features and Enhancements for the Directory Proxy Server

  • Exposed the maximum-attributes-per-add-request and maximum-modifications-per-modify-request properties in the global configuration

Summary of New Features and Enhancements for the Synchronization Server

  • Added support for synchronizing to SCIMv2 destinations [more information]
  • Added a sync-pipe-view tool that can display information about the set of sync pipes configured in the server
  • Added sync pipe monitor attributes related to account password policy state when synchronizing to a Ping Identity Directory Server

Summary of Bug Fixes

  • Fixed an issue that could cause replication protocol messages to be dropped, potentially resulting in paused replication
  • Fixed an issue in which a timeout could prevent adding servers to a large topology
  • Fixed an issue in which an unexpected error could cause a replication server to stop accepting new connections
  • Fixed an issue that prevented resource limits from being set properly for the topology administrator
  • Fixed an issue in which the dsreplication tool incorrectly handled DNs in a case-sensitive manner
  • Fixed an issue that could cause dsreplication enable to fail if there were any topology administrators without passwords
  • Fixed an issue that could cause a configured idle timeout to interfere with replica initialization
  • Fixed an issue that could prevent the server from generating an administrative alert when clearing an alarm that triggered an alert when it was originally raised
  • Fixed an issue that could cause degraded performance to a PingOne sync destination
  • Fixed an issue that could prevent users from changing their own passwords with the password modify extended operation if their account was in a “must change password” state and the request passed through the Directory Proxy Server
  • Fixed an issue in which dsconfig would always attempt to use simple authentication when applying changes to servers in a group, regardless of the type of authentication used when launching dsconfig
  • Fixed an issue that could cause certain kinds of Directory REST API requests to fail if they included the uniqueness request control
  • Fixed an issue in which an unclean shutdown could cause the server to create exploded index databases
  • Disabled the index cursor entry limit by default, which could cause certain types of indexed searches to be considered unindexed
  • Fixed an issue that could adversely affect performance in servers with a large number of virtual static groups

Removed Support for Incremental Backups

We have removed support for incremental backups. This feature was deprecated in the 8.3.0.0 release after repeated issues that could interfere with the ability to properly restore those backups. These issues do not affect full backups, which continue to be supported.

As an alternative to full or incremental backups, we recommend using LDIF exports, which are more useful and more portable than backups. They are also typically very compressible and can be taken more frequently than backups without consuming as much disk space. Further, the extract-data-recovery-log-changes tool can be used in conjunction with either LDIF exports or backups to replay changes recorded in the data recovery log since the time the LDIF export or backup was created.

Updated the Groovy Language Version

In order to facilitate support for Java 17, we have updated the library providing support for the Groovy scripting language from version 2.x to 3.x. While this should largely preserve backward compatibility, there may be some issues that could prevent existing Groovy scripted extensions from continuing to work without any problems.

The only compatibility issue that we have noticed is that the 3.x version of the Groovy support library cannot parse Java import statements that are broken up across multiple lines, like:

import java.util.concurrent.atomic.
            AtomicLong;

This was properly handled in Groovy 2.x, but the Groovy 3.x library does not appear to support this. To address the problem, you will need to update the script to put the entire import statement on a single line, like:

import java.util.concurrent.atomic.AtomicLong;

If you have any Groovy scripted extensions, we strongly recommend verifying them in a test environment before attempting to update production servers.

Java 17 Support

We have updated the server to support running on JVMs using Java version 17, which is the latest LTS release of the Java language. Java versions 8 and 11 also continue to be supported.

Note that Java 17 support is limited to the Directory Server, Directory Proxy Server, and Synchronization Server. Java 17 is not supported for the Metrics Engine, although it continues to be supported on Java 8 and 11.

The best way to enable Java 17 support is to have the JAVA_HOME environment variable set to the path of the Java 17 installation when installing the server using either the setup or manage-profile setup commands. It’s more complicated to switch to Java 17 for an existing instance that was originally set up on Java 8 or 11 because there are changes in the set of JVM arguments that should be used with Java 17. As such, if you want to switch to Java 17, then we recommend installing new instances and migrating the data to them.

By default, installations using Java 17 will use the garbage first garbage collection algorithm (G1GC), which is the same default as Java 11. We also support using the Z garbage collector (ZGC) on Java 17, although we have observed that it tends to consume a significantly greater amount of memory than the garbage first algorithm. While ZGC can exhibit better garbage collection performance than G1GC, if you wish to use it, we recommend configuring a smaller JVM heap size and thoroughly testing the server under load and at scale before enabling it in production environments.

HTTP Forward Proxy Support

We have updated several server components to provide support for issuing outbound HTTP and HTTPS requests through a proxy server. Updated components include:

  • The Amazon Key Management Service cipher stream provider
  • The Amazon Secrets Manager cipher stream provider, passphrase provider, and password storage scheme
  • The Azure Key Vault cipher stream provider, passphrase provider, and password storage scheme
  • The PingOne pass-through authentication plugin
  • The PingOne sync source and destination
  • The Pwned Passwords password validator
  • The SCIMv1 sync destination
  • The SCIMv2 sync destination
  • The Twilio alert handler and OTP delivery mechanism
  • The UNBOUNDID-YUBIKEY-OTP SASL mechanism handler

To enable HTTP forward proxy support for any of these components, first, create an HTTP proxy external server configuration object with a command like:

dsconfig create-external-server \
     --server-name "Example HTTP Proxy Server" \
     --type http-proxy \
     --set server-host-name:proxy.example.com \
     --set server-port:3128

You can also optionally use the basic-authentication-username and basic-authentication-passphrase-provider properties if the HTTP proxy server requires authentication.

Once the HTTP proxy external server has been created, update the target component to reference that server. For example:

dsconfig set-password-validator-prop \
     --validator-name "Pwned Passwords" \
     --set "http-proxy-external-server:Example HTTP Proxy Server"

Prometheus Monitoring Servlet Extension

We have added support for a new HTTP servlet extension that can be used to expose certain server metrics in a format that can be consumed by Prometheus or other monitoring systems that support the OpenMetrics data format. To enable it, add the servlet extension to the desired HTTP connection handlers and either restart the server or disable and re-enable those connection handlers. For example:

dsconfig set-connection-handler-prop \
     --handler-name "HTTPS Connection Handler" \
     --add "http-servlet-extension:Prometheus Monitoring" \
     --set enabled:false

dsconfig set-connection-handler-prop \
     --handler-name "HTTPS Connection Handler" \
     --set enabled:true

By default, the server is preconfigured to expose a variety of metrics. You can customize this to remove metrics that you don’t care about, or to add additional metrics that we didn’t include by default. Any single-valued numeric monitor attribute can be exposed as a metric. You can also customize the set of labels included in metric definitions, on both a server-wide and per-metric basis.

Improved AWS Authentication Support

The server offers a number of components that can interact with Amazon Web Services, including:

  • A cipher stream provider that can use the Key Management Service
  • A cipher stream provider, passphrase provider, and password storage scheme that can use the Secrets Manager

In the past, you could authenticate to AWS using either a secret access key or using an IAM role that is associated with the EC2 instance or EKS container in which the server is running. In the 9.2.0.0 release, we’re introducing support for authenticating with an IRSA (IAM role for service accounts) role. We are also adding support for a default credentials provider chain that can attempt to automatically identify an appropriate authentication method for cases in which the server is running in an AWS environment, or in cases where information about a secret access key is available through either environment variables or Java system properties.

To use the new authentication methods, first create an AWS external server that specifies the desired value for the authentication-method property. Then, reference that external server when creating the desired component. For example:

dsconfig create-external-server \
     --server-name AWS \
     --type amazon-aws \
     --set authentication-method:irsa-role \
     --set aws-region-name:us-east-2

dsconfig create-cipher-stream-provider \
     --provider-name KMS \
     --type amazon-key-management-service \
     --set enabled:true \
     --set aws-external-server:AWS \
     --set kms-encryption-key-arn:this-is-the-key-arn

Data Security Auditor Improvements

The server offers a data security auditor framework that can be used to iterate across entries in a number of backends and examine them for potential security-related issues or items of note. In the past, we’ve offered auditors that can do the following:

  • Identify entries that define access control rules
  • Identify accounts that have been administratively disabled
  • Identify accounts that have passwords that are expired, are about to expire, or that have not been changed in longer than a given length of time
  • Identify accounts that are locked as a result of too many authentication failures, because it’s been too long since the user last authenticated, or because they did not choose a new password in a timely manner after an administrative reset
  • Identify accounts with multiple passwords
  • Identify accounts with privileges assigned by real or virtual attributes
  • Identify accounts with passwords encoded with a variety of weak password storage schemes, including 3DES, AES, BASE64, BLOWFISH, CLEAR, MD5, RC4, and the default variant of the CRYPT scheme

In the 9.2 release, we’ve introduced support for several new types of data security auditors, including those that can do the following:

  • Identify accounts with account usability errors, warnings, and/or notices
  • Identify accounts that have an activation time in the future, an expiration time in the past, or an expiration time in the near future
  • Identify accounts that have passwords encoded with a deprecated password storage scheme
  • Identify accounts that have not authenticated in longer than a specified period of time, or that have not ever authenticated
  • Identify accounts that reference a nonexistent password policy
  • Identify entries that match a given search filter

We have also updated the Server SDK so that you can create your own data security auditors to use whatever logic you want.

In addition, we have updated the locked account data security auditor so that it can identify accounts that are locked as a result of attempting to authenticate with a password that fails password validator criteria, and we have updated the weakly encoded password data security auditor so that the following schemes are also considered weak: SMD5, SHA, SSHA, and the MD5 variant of the CRYPT scheme.

Finally, we’ve introduced support for a new audit data security recurring task that you can use to have the server automatically perform an audit on a regular basis.

New Access Control Keywords

We have introduced three new access control keywords.

The secure bind rule can be used to make access control decisions based on whether the client is using a secure connection (e.g., LDAPS or LDAP with StartTLS) to communicate with the server. Using a bind rule of secure="true" indicates that the ACI only applies to clients communicating with the server over a secure connection, while secure="false" indicates that the ACI only applies to clients communicating with the server over an insecure connection.
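
As a sketch (the DN, attributes, and permissions are illustrative rather than a recommended policy), an ACI that only grants access over secure connections might look something like:

aci: (targetattr="*")
     (version 3.0; acl "Require a secure connection";
     allow (read,search,compare)
     userdn="ldap:///uid=app.account,ou=Apps,dc=example,dc=com" and secure="true";)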

The connectioncriteria bind rule can be used to make access control decisions based on whether the client connection matches a specified set of connection criteria. The value of the bind rule can be either the name or the DN of the desired connection criteria.

The requestcriteria target can be used to make access control decisions based on whether the operation matches a specified set of request criteria. The value of the target can be either the name or the DN of the desired request criteria.

Note that because the Server SDK provides support for creating custom types of connection and request criteria, the introduction of these last two keywords makes it possible to define custom access control logic if the server’s existing access control framework doesn’t support what you want.

Resource Limits for Unauthenticated Clients

The server’s global configuration includes the following configuration properties that can be used to set default resource limits that will apply to all users that don’t have specific limits set for them:

  • size-limit — Specifies the maximum number of entries that can be returned for a search operation
  • time-limit — Specifies the maximum length of time the server should spend processing a search operation
  • idle-time-limit — Specifies the maximum length of time that a client connection may remain established without any operations in progress
  • lookthrough-limit — Specifies the maximum number of entries that the server can examine in the course of processing a search operation

These properties set global defaults for all clients, including those that aren’t authenticated. However, you may want to set lower limits for unauthenticated connections than for authenticated users. To make that easier to accomplish, we have added the following additional properties that specifically apply to unauthenticated clients:

  • unauthenticated-size-limit
  • unauthenticated-time-limit
  • unauthenticated-idle-time-limit
  • unauthenticated-lookthrough-limit

By default, these properties don’t have any values, which will cause the server to inherit the value from the property that doesn’t specifically apply to unauthenticated clients (for example, if unauthenticated-size-limit is not set, then the server will use the size-limit value as the default for both authenticated and unauthenticated clients).
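
For example, a sketch of applying tighter limits to unauthenticated clients might look like this (the subcommand name and the specific values are assumptions):

dsconfig set-global-configuration-prop \
     --set unauthenticated-size-limit:100 \
     --set unauthenticated-lookthrough-limit:1000 \
     --set "unauthenticated-idle-time-limit:2 m"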

Improved Signature Generation

The server supports cryptographically signing log messages, backups, and LDIF exports. Previously, those signatures were always generated with MAC keys shared among other servers in the same topology. These keys are difficult to back up and restore, and the resulting signatures cannot be verified outside of the topology.

In the 9.2.0.0 release, we have updated the server so that it now generates digital signatures with encryption settings definitions. The server’s preferred definition will be used by default, but you can specify an alternative definition with the signing-encryption-settings-id property in the crypto manager configuration.

If digital signing is enabled but no encryption settings definitions are available, then a legacy topology key will continue to be used as a fallback.
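
For example, a sketch of selecting a specific definition for signing might look like the following (the subcommand name is an assumption, and your-definition-id is a placeholder for an actual encryption settings definition ID):

dsconfig set-crypto-manager-prop \
     --set signing-encryption-settings-id:your-definition-id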

Additional Argon2 Password Storage Schemes

The Argon2 key derivation function is a popular mechanism for encoding passwords, especially after it was selected as the winner of the Password Hashing Competition in 2015. We introduced support for an ARGON2 password storage scheme in the 8.0.0.0 release.

There are actually three variants of the Argon2 algorithm:

  • Argon2i — Provides better protection against side-channel attacks. The existing ARGON2 scheme uses this variant.
  • Argon2d — Provides better protection against GPU-accelerated attacks.
  • Argon2id — Mixes the strategies used in the Argon2i and Argon2d variants to provide a degree of protection against both types of attacks.

In the 9.2.0.0 release, we are introducing three new password storage schemes, ARGON2I, ARGON2D, and ARGON2ID, which provide explicit support for each of these variants.

Note that if you want to use the Argon2 algorithm to encode passwords, and you need to run in an environment that contains pre-9.2.0.0 servers, then you should use the existing ARGON2 scheme. The newer schemes should only be used in environments containing only servers running version 9.2.0.0 or later.
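
For example, a sketch of enabling one of the new variant-specific schemes (assuming the scheme is exposed in the configuration under the name ARGON2ID) might be:

dsconfig set-password-storage-scheme-prop \
     --scheme-name ARGON2ID \
     --set enabled:true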

SCIMv2 Sync Destination

The Synchronization Server has included support for SCIMv1 servers as a sync destination since the 3.2.2.0 release. This support relies on an XML-based configuration to map LDAP source attributes to SCIM destination attributes.

In the 9.2.0.0 release, we’re introducing support for SCIMv2 servers as a sync destination. For this destination, all of the necessary configuration is held in the server’s configuration framework, so there is no need for a separate file with mapping information. This implementation introduces several new types of configurable components, including:

  • HTTP authorization methods, which provide support for a variety of mechanisms for authenticating to HTTP-based services, including basic authentication and OAuth 2 bearer tokens (and in the latter case, you may configure either a static bearer token or have the server obtain one from an OAuth authorization server using the client_credentials grant type).
  • A SCIM2 external server, which provides the SCIM service URL, authorization method, and other settings to use when interacting with the SCIMv2 service.
  • SCIM2 attribute mappings, which describe how to generate SCIM attributes from the LDAP representation of a source entry.
  • SCIM2 endpoint mappings, which associate a set of attribute mappings with an endpoint in the SCIMv2 server.
  • The SCIM2 sync destination, which associates the SCIM2 external server and the SCIM2 endpoint mappings.

The documentation describes the process for configuring the Synchronization Server to synchronize changes to a SCIMv2 server. In addition, the config/sample-dsconfig-batch-files/configure-synchronization-to-scim2.dsconfig file provides an example that illustrates a set of changes that can be used to synchronize inetOrgPerson LDAP entries to urn:ietf:params:scim:schemas:core:2.0:UserM. SCIMv2 entries.