UK federation Metadata Aggregation
One of the systems I work on is the back end of the UK federation’s metadata system. Although I’ve talked about this in several presentations, the bare structural diagram isn’t very informative on its own. Here, I present a snapshot of the architecture, and go into a lot more depth on the what, how and why than you’d get from just the slide on its own (click on the image to get a larger version).
I hope that this article can perform double duty as a case study for the Shibboleth metadata aggregator tool, which acts as the engine behind the metadata system and to which I also contribute as a developer.
Metadata Repository
The main source of metadata processed and published by the UK federation is the federation’s own metadata repository, shown at the top left as a “database” labelled uk*.xml. This is literally a collection of XML files, one for each <md:EntityDescriptor> registered by the federation’s members.[1] These files are held in a Subversion repository and exposed to the back end as a directory of “fragment” files.
This is in some ways a fairly primitive approach (many federations have chosen to build their tools around a relational database) but we have stuck with it because we have found that it has a number of advantages. One big advantage is that it separates the concerns of acquisition and maintenance of metadata on behalf of members from the back-end processes very cleanly. At the same time, using something like Subversion means that we have an automatic audit trail of every change to registered metadata, including the details of the change, when the change happened, who made it and why. We even have the option of rolling individual entities back to any earlier state if required. These all seem like good things to have at the heart of a trust brokerage system, but as far as I know the only other system which has so far taken the same approach of using a source control repository as the central data store has been the PEER software.
Metadata Exchange Inputs
Metadata that doesn’t come from UK federation members must by definition come from elsewhere, and I’ve drawn the diagram to show four such metadata exchange (MDX) relationships with four hypothetical partner federations: FedA and FedB have a “production” relationship with the UK federation, while FedC and FedD have yet to reach that state and are in “pre-production”.[2]
I’ve drawn the four partner federations as little clouds because I want to be nebulous about exactly how this system interacts with them. In practice, of course, most such interactions will at least start by fetching metadata (usually in the form of a <md:EntitiesDescriptor> document) from an agreed location.
Generating the Output Aggregates
The UK federation generates and publishes quite a few metadata aggregates (SAML <md:EntitiesDescriptor> documents residing at well-known locations). You can see them listed down the right hand side of the diagram, as outputs from the system, along with one non-metadata document which provides a statistical summary in HTML format. If you want the full details, you can find them in the Federation Technical Specifications; for these purposes, though, all you need to know is that:
- Most federation members consume the “Production” aggregate.
- The “Production”, “Fallback” and “WAYF/DS” aggregates are very similar, with only minor formatting and entity selection differences.
- The “Test” aggregate is where we try out new things; it is consumed only by knowing guinea-pigs.
- The “Export” aggregate is what metadata exchange partners are expected to consume, as a complement to our consumption of their “FedX” aggregate on the left hand side of the diagram.
The process of generating all of these documents is an off-line “daily signing run” performed by an authorised member of the federation team, who wields a cryptographic token containing the federation’s signing key. Again, there are negative and positive features of such a relatively unsophisticated approach: in this case, “there’s a human in the loop” falls strongly into both categories.
The Shibboleth Metadata Aggregator
The majority of the actual work is performed by the Shibboleth Metadata Aggregator command-line tool. This takes all of the inputs on the left and produces all of the outputs on the right, in a single invocation.[3]
The Shibboleth metadata aggregator (MDA from now on) is a Java framework which processes collections of items by applying a sequence of configured stages.
In the case of the UK federation metadata system, all of the items are of type DOMElementItem, and “wrap” a DOM document. This allows any XML-based document to be processed as an item, but it’s worth noting that the MDA framework is completely generic. This means that you could use the same framework to process any kind of information (JSON, for example) just by writing some additional classes.
Stages are implemented as Java beans, which is to say they are instances of Java classes with some properties set at configuration time. That’s not as limiting as it might sound, as Java has mechanisms to allow calling a number of other languages: it’s fairly easy to write stage implementations that allow those languages to be used. For example, the MDA distribution includes a number of stage definitions to allow the use of the XPath and XSLT languages in various ways, and these are heavily used in the UK federation metadata system.
Each stage implementation can do as much, or as little, as makes sense. Typically, though, each performs a simple task and relies on being combined with other stages to build up functionality. This kind of approach will be familiar to some readers from the Unix command line, where small utilities are often connected together in sequence to achieve more sophisticated effects. It is no coincidence that the MDA uses the same term, “pipeline”, for its major grouping construct.
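To make that concrete, here is a toy model of the idea in plain Java. The Stage and Pipeline names echo the MDA’s concepts, but these simplified types are an illustration of the pipeline-of-stages pattern, not the real MDA API.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the MDA's central idea: a pipeline is an ordered list of
// stages, each of which may add to, remove from, or rewrite a shared item
// collection. These simplified types are illustrative, not the real API.
interface Stage<T> {
    void execute(List<T> items);
}

final class Pipeline<T> {
    private final List<Stage<T>> stages;

    Pipeline(List<Stage<T>> stages) {
        this.stages = stages;
    }

    List<T> execute() {
        List<T> items = new ArrayList<>();
        for (Stage<T> stage : stages) {
            stage.execute(items);
        }
        return items;
    }
}

public final class PipelineDemo {
    public static void main(String[] args) {
        // A "source" stage adds items; a later stage transforms them in place.
        Stage<String> source = items -> items.addAll(List.of("alpha", "beta"));
        Stage<String> upcase = items -> items.replaceAll(String::toUpperCase);

        List<String> result = new Pipeline<>(List.of(source, upcase)).execute();
        System.out.println(result); // prints [ALPHA, BETA]
    }
}
```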
Plumbing: Pipelines, Branching and Merging
When you invoke the MDA from the command line, you provide the name of a Spring configuration file, and the name of a Pipeline bean to execute. In the simplest case, that pipeline will contain stages to retrieve, transform and then serialise the data you’re processing. You can also perform simple aggregation with just one pipeline: if the pipeline includes multiple DOMFilesystemSourceStage instances, for example, the items fetched by each will be added to the collection as it passes along the pipeline.
The UK federation metadata system is a bit more complex, and requires multiple pipelines to achieve the desired effects. The main generate pipeline invoked from the command line is shown in the diagram as the sequence of blocks connected by doubled arrows, running from the files representing UK-registered entities, through “collect, check and process”, two “merge” blocks and the “testPipeline”.[4]
The “production merge” and “pre-production merge” blocks are instances of PipelineMergeStage, which cause additional pipelines to be executed and their results merged into the calling pipeline’s collection according to some defined strategy. In our case, we use DeduplicatingItemIdMergeStrategy to resolve conflicts so that any entity registered with the UK federation takes precedence over an entity with the same entityID offered by FedA, which in turn would take precedence over an entity with the same entityID offered by FedB. This is not the only merge strategy one could come up with,[5] but it’s simple and gives predictable, unsurprising results.
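The effect of this strategy is easy to picture as “first collection wins, per entity name”. Here is a standalone sketch of that logic in plain Java; the real DeduplicatingItemIdMergeStrategy keys on the MDA’s ItemId metadata rather than the toy Entity record used here.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Standalone sketch of a "first collection wins" merge: collections are
// offered in precedence order (UK registry first, then FedA, then FedB),
// and an entityID claimed by an earlier collection is never replaced by a
// later one. Illustrative only; not the real MDA merge strategy class.
public final class DeduplicatingMergeDemo {
    record Entity(String entityID) {}

    static List<Entity> merge(List<List<Entity>> collectionsInPrecedenceOrder) {
        Set<String> seen = new HashSet<>();
        List<Entity> merged = new ArrayList<>();
        for (List<Entity> collection : collectionsInPrecedenceOrder) {
            for (Entity entity : collection) {
                if (seen.add(entity.entityID())) { // false if already claimed
                    merged.add(entity);
                }
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        List<Entity> uk = List.of(new Entity("https://idp.example.ac.uk/idp"));
        List<Entity> fedA = List.of(new Entity("https://idp.example.ac.uk/idp"),
                new Entity("https://sp.example.org/sp"));
        // The UK copy of the duplicated entityID wins; FedA's copy is dropped.
        System.out.println(merge(List.of(uk, fedA)));
    }
}
```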
The opposite of merging in results from other pipelines is to branch off from the generate pipeline in order to create multiple output streams. This happens twice in the generate pipeline, at the two places where normal arrows branch off from the double-lined path.[6] The PipelineDemultiplexerStage is used in both cases; this stage takes a list of predicate/pipeline pairs and can create any number of child pipelines. For each, the collection is filtered according to the provided predicate and then the nominated pipeline is invoked on the filtered collection. Often, the predicate is simply “everything” and the pipeline starts with a copy of the collection. Another simple option, used in the exportPipeline and wayfPipeline branches, is to use XPathItemSelectionStrategy to select only the items matching an arbitrary XPath expression, such as “not labelled as hidden from the main WAYF”:
/md:EntityDescriptor[not(md:Extensions/wayf:HideFromWAYF)]
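To give a feel for what evaluating such a predicate involves, here is a sketch using the JDK’s own XPath API. The namespace URIs bound to the md and wayf prefixes below are my assumptions for illustration; the federation’s schemas are the authoritative source.

```java
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import java.io.File;
import java.util.Iterator;
import java.util.Map;
import org.w3c.dom.Document;

// Sketch: evaluate the "not hidden from the WAYF" predicate against one
// entity's metadata using the JDK XPath API. The namespace URIs here are
// assumptions for illustration, not guaranteed to be the federation's.
public final class WayfPredicateDemo {
    private static final Map<String, String> NS = Map.of(
            "md", "urn:oasis:names:tc:SAML:2.0:metadata",
            "wayf", "http://sdss.ac.uk/2006/06/WAYF"); // assumed URI

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // essential for namespace-qualified XPath
        Document doc = dbf.newDocumentBuilder().parse(new File(args[0]));

        XPath xpath = XPathFactory.newInstance().newXPath();
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                return NS.getOrDefault(prefix, XMLConstants.NULL_NS_URI);
            }
            public String getPrefix(String uri) { return null; }
            public Iterator<String> getPrefixes(String uri) { return null; }
        });

        boolean visible = (Boolean) xpath.evaluate(
                "boolean(/md:EntityDescriptor[not(md:Extensions/wayf:HideFromWAYF)])",
                doc, XPathConstants.BOOLEAN);
        System.out.println(visible ? "include in WAYF" : "hide from WAYF");
    }
}
```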
Although at the moment I only use it in one rather obscure corner of the system, I can’t really close the “Plumbing” section without at least mentioning the SplitMergeStage. This splits the collection according to a predicate you supply, runs different pipelines on those two sub-collections, then merges the results using your chosen strategy. Handy.
Metadata Validation
Metadata aggregation would be easy if it was just a question of gluing smaller XML documents together to make a bigger one. However, any real-world metadata service needs to apply policy in various ways to be useful. Sometimes that’s a question of transforming or selecting metadata (for example, using DeduplicatingItemIdMergeStrategy to resolve merge conflicts) but sometimes we want to say “condition X is not permitted to occur; if it does, handle it by doing Y”. The MDA framework has a generic approach to this kind of requirement, and we use it heavily in the UK system.
As well as the “wrapped” DOM document itself, each item carries around a collection of item metadata which can be used for any purpose by the stages processing the item. For example, a stage might check for a prohibited condition and add an ErrorStatus to the item metadata if the condition was detected. It would be left to a later stage to take appropriate action (issuing a warning, deleting the item, or even halting the system). One advantage of this separation between detection and handling is that all of an item’s status messages can be displayed at the specified point in the pipeline; another is of course that the stages that handle detection of conditions don’t have to understand all the possible ways in which they might be handled.
The MDA distribution includes an XMLSchemaValidationStage to check that an item is schema-valid against any of a provided collection of schema documents. The UK federation system checks against schema documents for 22 namespaces, and has a separate check to report any elements in “rogue” namespaces we don’t have a schema for.
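That sort of check can be sketched with the JDK’s own validation API: compile a set of schema documents into a single Schema and validate each item against it. The two schema file names below are placeholders standing in for the federation’s much larger set.

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.File;

// Standalone sketch in the style of XMLSchemaValidationStage: compile a
// set of schema documents (one per namespace) into a single Schema, then
// validate each item against it. File names are placeholders.
public final class SchemaCheckDemo {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new StreamSource[] {
                new StreamSource(new File("saml-schema-metadata-2.0.xsd")),
                new StreamSource(new File("xmldsig-core-schema.xsd")),
        });

        Validator validator = schema.newValidator();
        try {
            validator.validate(new StreamSource(new File(args[0])));
            System.out.println("schema-valid");
        } catch (org.xml.sax.SAXException e) {
            // In the MDA this would become an ErrorStatus on the item.
            System.out.println("schema error: " + e.getMessage());
        }
    }
}
```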
Many of the other checks we run are instances of XSLValidationStage, which implements a validation framework I had previously developed independently. This uses XSLT transforms to do XML pattern matching. Here’s a simple example:
<xsl:template
    match="ds:KeyInfo/*[namespace-uri() != 'http://www.w3.org/2000/09/xmldsig#']">
  <xsl:call-template name="error">
    <xsl:with-param name="m">
      ds:KeyInfo child element not in ds namespace
    </xsl:with-param>
  </xsl:call-template>
</xsl:template>
Metadata accidentally violating this rule causes some Shibboleth 1.3 SPs to dump core, so obviously we’d rather fix that mistake than have production services fall over.
XSLT and XPath work well for simple XML pattern matching, but aren’t much good outside that realm. If things get more complicated, it’s fairly easy to write a Java class to detect the condition you’re looking for and add an appropriate ErrorStatus to the item:
item.getItemMetadata().put(new ErrorStatus(...));
For example, I’ve written a stage to check that valid CIDR notation is used in <mdui:IPHint> elements. This would be impractical in pure XPath/XSLT; in Java, the hard work is all done by a single call to an OpenSAML utility class.
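Just to show the shape of such a check without dragging in OpenSAML, here is a self-contained toy version that validates IPv4 CIDR notation only; the real stage delegates the hard work to the utility class and handles IPv6 as well.

```java
// Self-contained sketch of an IPv4 CIDR validity check, illustrating the
// kind of test the <mdui:IPHint> stage performs. The real stage uses an
// OpenSAML utility class and also covers IPv6; this toy version handles
// dotted-quad IPv4 notation only.
public final class CidrCheckDemo {
    static boolean isValidIPv4Cidr(String hint) {
        String[] parts = hint.split("/", -1);
        if (parts.length != 2) {
            return false; // must be exactly "address/prefixLength"
        }
        String[] octets = parts[0].split("\\.", -1);
        if (octets.length != 4) {
            return false; // must be a dotted quad
        }
        try {
            for (String octet : octets) {
                int value = Integer.parseInt(octet);
                if (value < 0 || value > 255) {
                    return false;
                }
            }
            int prefixLength = Integer.parseInt(parts[1]);
            return prefixLength >= 0 && prefixLength <= 32;
        } catch (NumberFormatException e) {
            return false; // non-numeric component
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidIPv4Cidr("192.168.0.0/16")); // true
        System.out.println(isValidIPv4Cidr("192.168.0.0/33")); // false
        System.out.println(isValidIPv4Cidr("192.168.0/16"));   // false
    }
}
```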
It’s sensible to write many small tests rather than a few large ones, so we’ve ended up with a large number of individual checking stages. The easiest way to keep this manageable (particularly if you apply the same tests in multiple places in the system, as we do) is to group the tests together at various levels by combining stages into a CompositeStage. At the highest level, including a single CHECK_std stage at any point in the system applies the full battery of checks; what is done with any detected problems depends on context.
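The grouping idea is simple to picture: a composite is a stage whose only job is to run a list of other stages in order. A self-contained toy version, with checks modelled as plain consumers rather than real MDA stages:

```java
import java.util.List;
import java.util.function.Consumer;

// Toy composite in the spirit of CompositeStage: one "stage" that simply
// runs a list of delegate checks in order, so a whole battery of tests can
// be dropped into a pipeline as a single named unit. Illustrative only.
public final class CompositeCheckDemo {
    static <T> Consumer<List<T>> composite(List<Consumer<List<T>>> delegates) {
        return items -> delegates.forEach(delegate -> delegate.accept(items));
    }

    public static void main(String[] args) {
        Consumer<List<String>> checkNonEmpty = items ->
                items.forEach(i -> { if (i.isEmpty()) System.out.println("empty item"); });
        Consumer<List<String>> checkAscii = items ->
                items.forEach(i -> { if (!i.matches("\\p{ASCII}*")) System.out.println("non-ASCII: " + i); });

        // A CHECK_std-style bundle: one composite applying every check.
        composite(List.of(checkNonEmpty, checkAscii)).accept(List.of("ok", "café"));
    }
}
```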
Collect, Check & Process
The UK federation metadata system’s generate pipeline begins by fetching the metadata for all entities registered with the federation using the DOMFilesystemSourceStage. We then transform the metadata for each item in various ways to bring it into a consistent state: remember that this metadata has been collected from members over a span of years dating back well before the UK federation’s official launch in 2006, and things have changed a lot in that time. For example, one stage synthesises <mdrpi:RegistrationInfo> elements for entities registered before that standard even existed.
Most of these transforms are stages based on XSLTransformationStage, which as you might guess allows you to apply an arbitrary XSLT transform to each item. XSLT really shines here: it’s very easy to write a transform that targets some pattern in XML and replaces instances of it with something else.
One XSLTransformationStage handles the injection of the thousands of scopes representing UK schools into the metadata for the schools sector’s shared identity provider entities. The XSLT in this stage is very complex, and takes around four seconds of CPU time to run; replacing it with a Java-based stage would reduce both complexity and runtime.
EntityDescriptorItemIdPopulationStage is used to extract each entity’s entityID and place it into the item’s metadata as an ItemId object. This is used to identify the entity when reporting errors, and it is used in other circumstances as a canonical name for the item. For example, DeduplicatingItemIdMergeStrategy uses ItemId as the name of the item to compare, so that the strategy implementation can be used on any kind of item rather than just on SAML metadata.
After performing these transformations, the resulting items are subjected to our full battery of checks, including schema checks and some checks specific to entities registered with the UK federation, such as the rule that each such entity must possess an <md:OrganizationName> element matching the canonical name of one of our members. Obviously, we don’t impose that particular check on metadata we acquire from metadata exchange partners.
Any errors at all detected at this point (and represented by ErrorStatus objects attached to the items) represent mistakes in our metadata repository. A sequence of stages culminating in an ItemMetadataTerminationStage ensures that any such errors are reported and result in the signing process being immediately abandoned so that the error can be corrected. To reduce the chance of an error being detected during the daily signing run, we operate an instance of the Jenkins continuous integration server; this runs an abbreviated pipeline whenever the repository is changed, and e-mails the team if an error is encountered.
Metadata Exchange Input Channels
The pipelines which run to fetch metadata from our metadata exchange partners follow the same general approach as we use for UK-registered metadata, but there are some significant differences.
These pipelines normally begin by using DOMResourceSourceStage to fetch a single item representing a metadata aggregate from a well-known URL, rather than fetching many individual items from the local file system. Because this step involves the Internet, we now have to account for the possibility that an attacker has substituted evil metadata for the metadata our partner intended us to have:
- XMLSignatureValidationStage checks that this metadata came from our partner. This protects against substitution attacks.
- ValidateValidUntilStage checks both that the aggregate has a validUntil attribute at all, and that the metadata provided is still valid because that instant is not yet in the past. Together, these protect against replay attacks.
If either of these checks fails, the signing process is abandoned in the same way as if a critical error had been detected in our own registered metadata. For the present, this seems like the best way to handle a situation which should only arise during an active attack.
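The validUntil rule is simple to state precisely: the attribute must be present, and must name an instant that has not yet passed at the moment of checking. A standalone sketch of just that logic using java.time (the real ValidateValidUntilStage offers more configuration than this sketch):

```java
import java.time.Instant;
import java.util.Optional;

// Standalone sketch of the validUntil replay check: reject an aggregate
// whose validUntil attribute is missing, and reject one whose validUntil
// instant has already passed. Illustrative only; not the real MDA stage.
public final class ValidUntilCheckDemo {
    static boolean isAcceptable(Optional<String> validUntilAttribute) {
        if (validUntilAttribute.isEmpty()) {
            return false; // without validUntil, stale metadata could be replayed forever
        }
        Instant validUntil = Instant.parse(validUntilAttribute.get());
        return Instant.now().isBefore(validUntil); // still valid right now?
    }

    public static void main(String[] args) {
        System.out.println(isAcceptable(Optional.empty()));                    // false
        System.out.println(isAcceptable(Optional.of("2012-08-17T00:00:00Z"))); // false once that date has passed
    }
}
```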
Assuming that the aggregate has proven valid, the single item representing the aggregate is broken down into one item for each component SAML EntityDescriptor using EntitiesDescriptorDisassemblerStage. Each resulting item is then transformed in minor ways to bring it closer to UK federation conventions:
- <md:EmailAddress> values are made standards-compliant by adding a mailto: scheme where necessary (a sketch of this fix-up follows the list).
- An appropriate <mdrpi:RegistrationInfo> element will be added for any entity which does not already possess one.
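As promised above, the mailto: fix-up itself is trivial; a standalone sketch of the rule (in the real system this runs as a transform over the whole item rather than as a per-string helper):

```java
// Standalone sketch of the <md:EmailAddress> normalisation: prepend a
// mailto: scheme where it is missing, and leave compliant values alone.
public final class MailtoFixDemo {
    static String normalise(String emailAddress) {
        return emailAddress.startsWith("mailto:")
                ? emailAddress
                : "mailto:" + emailAddress;
    }

    public static void main(String[] args) {
        System.out.println(normalise("helpdesk@example.ac.uk")); // mailto:helpdesk@example.ac.uk
        System.out.println(normalise("mailto:ops@example.org")); // unchanged
    }
}
```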
As with the UK-registered metadata, EntityDescriptorItemIdPopulationStage is used to extract each entity’s entityID and place it into the item’s metadata as an ItemId object.
Before letting any metadata pass through to our members, of course we need to check its validity. In this case, we run the usual battery of tests plus some that are specific to metadata received in this way (such as checking that <mdrpi:RegistrationInfo> elements have a registrationAuthority value that is appropriate for this particular channel).
There is, however, a significant difference in the handling of error conditions. You will recall that if an error is detected on UK-registered metadata, the whole signing run is abandoned until the problem can be repaired. Errors in imported metadata are reported, but result in the discarding of the metadata for that particular entity rather than having an effect on all metadata from that partner, or on the signing run as a whole. In other words, when the error is one which we can’t repair ourselves, the approach is to isolate it as far as possible and continue without it.
Output Aggregate Pipelines
By comparison with the rest of the system, the pipelines used to generate the output aggregates are relatively straightforward. Most use EntitiesDescriptorAssemblerStage to combine the many individual items into one aggregate.[7] In all but the export aggregate, XSLT is used to inject the federation’s “trust root” metadata.
Before using DOMElementSerializer to write the aggregate into an output file, a final set of checks is run to make sure that no critical errors have crept in somehow. If these fail, the signing run is of course abandoned until the system can be repaired.
Statistics Pipeline
The statistics pipeline doesn’t generate a SAML aggregate; instead, the output is a file containing statistics on UK-registered entities. For example, today it tells me that 51 service provider entities (7.3% of the registered SPs) still lack embedded key material. This is simply a matter of using EntitiesDescriptorAssemblerStage to join everything together and running an XSLTransformationStage on that single item. Additional beans representing information such as the federation membership list are fed in through the transformationParameters property. The result of that transform is still a DOM item to be serialised into an output file, but it’s HTML rather than SAML metadata.
This is one of those things that sounds pretty neat when it’s a 20-line quick hack and ends up looking a lot less attractive when it’s a 1752-line XSLT document full of set:distinct and dyn:closure calls. The obvious alternative would be to write a stage using Java and something like the Velocity templating engine.
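For a flavour of what that alternative might look like, here is a minimal sketch using the Velocity API, with the statistics hard-coded for illustration rather than computed from the metadata.

```java
import java.io.StringWriter;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

// Minimal sketch of the Velocity alternative: compute statistics in Java,
// then hand the numbers to a small template for HTML rendering. Figures
// are hard-coded here purely for illustration.
public final class StatsReportDemo {
    public static void main(String[] args) {
        VelocityEngine engine = new VelocityEngine();
        engine.init();

        VelocityContext context = new VelocityContext();
        context.put("spCount", 51);
        context.put("spPercent", "7.3");

        StringWriter out = new StringWriter();
        engine.evaluate(context, out, "stats",
                "<p>$spCount service providers ($spPercent%) lack embedded key material.</p>");
        System.out.println(out);
    }
}
```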
Final Thoughts
There’s a lot to the UK federation metadata system; on the other hand, it’s doing quite a lot. For example, we almost certainly do more validity checking of UK federation metadata than our peers do, if only because we have an extensible framework within which to do it.
The use of the Shibboleth MDA allowed me to put this system together bit by bit as I migrated functionality away from its predecessor, refactoring multiple times along the way. At the level of individual stages, there’s very little complexity at all.
In the unlikely event that you’d like more detail on any of the above, or in the more likely event that I’ve not made something as clear as it could be, please contact me either through the site or directly at ian@iay.org.uk.
[2017-06-08: updated links to MDA documentation.]
1. The single exception to this rule is Internet2’s “Spaces” wiki: Internet2 is not a member of the UK federation, but we have registered this entity on their behalf in order to benefit our own members. Formally, this is under an old memorandum of understanding; in practice, this is the kind of odd edge case I expect to be able to eliminate in the future with more metadata exchange relationships.
2. At the time of writing (2012-08-10), the live back end is operating with no “production” level relationships and with only one pre-production relationship (which is due to transition to production relatively soon). That wouldn’t really show the architecture very effectively (and would quickly become outdated), so I’ve generalised here. “In the lab” I have MDX “channels” coded for about 30 of the national research and education federations; plugging those channels in is obviously more than just a technical problem.
3. In practice, there are still a couple of operations which we use other tools for. A small C program I wrote long ago normalises white space in the output to minimise file size, and then actual signing of the aggregates is performed by XMLSecTool. Once the whitespace normaliser has been rewritten as an aggregator stage, both of these operations can at least in principle be included in the aggregator invocation.
4. This might seem rather an odd arrangement: after all, isn’t the main point of the process to generate the production metadata, not some testing artifact? One answer is that you can decompose your overall problem in any number of ways and get the same effect: the current system takes advantage of the progressive intermediate collections that exist between the “merge” blocks. Other ways of doing the same thing could make sense, and refactoring the structure of the system can be done very easily within the MDA framework without needing any of the logic to be rewritten.
5. One situation where such a precedence-based merge strategy would not apply would be in an aggregator which isn’t also a registrar, such as eduGAIN. While in the UK federation, metadata registered with us must be regarded as authoritative beyond any that could be offered by someone else (including cases in which “our” metadata is accidentally reflected back to us), eduGAIN has no ordering among its members. Instead, eduGAIN’s aggregator gives precedence to the first member federation to present metadata for a particular entityID; as long as that member continues to do so, that member’s metadata for the entity will be preferred even if another member presents metadata for the same entityID. This provides stability while still allowing an entity to “move” from one member federation to another in the long term. None of the merge strategies provided with the MDA framework have persistent state in this way, but as merge strategies are just beans, it’s pretty simple to write custom variants to do anything you need.
6. If anyone has a good idea as to how to show the demultiplexer stages in the diagram more explicitly without making the diagram even harder to follow, please let me know.
7. The exception is the test aggregate, which has a more complex hierarchical structure with multiple md:EntitiesDescriptor elements; this addresses some specific policy concerns and is likely to be a long-term direction for the UK federation. This construction is implemented by combining SplitMergeStage and two EntitiesDescriptorAssemblerStage instances.