Author: 
Nicholas del Pozo
Publication date: 
Thursday, 10 December, 2009
Abstract: 

This paper explores the idea that it may be possible to mitigate what are currently perceived as some of the major hurdles to any large-scale, automated preservation strategy by building some additional functionality into next generation operating systems. It explains some of the background issues, and addresses why this may, or may not, be a viable idea.

Introduction

This paper presents some of the core issues currently facing any effective, long-term, large-scale digital preservation effort. It proposes that the only sustainable way in which these issues will be overcome is by changing the way in which operating systems deal with, and provide an interface for users to deal with, digital content.

It is noted that while the article focuses on the Microsoft Windows operating system, this is merely indicative of the fact that, in practical terms, Microsoft holds the greatest market share of any operating system vendor, especially in the realm of content creators. As such, the Windows platform makes an ideal candidate for the rolling examples herein. Having said this, the ideas in this paper could just as plausibly be implemented in any other operating system, such as OS X or Linux. The primary goal of this document is not to propose any operating-system-specific solution, but to facilitate and engage discussion on the issue.

The “Digital Preservation Problem”

It is a general shortcoming of digital preservation as a field that when we talk about ‘problems’, it is usually only in a broad, over-generalised sense, and when we discuss ‘solutions’, it is usually in the context of solving the consequences of a problem rather than its root cause. So, when we talk about obsolescence, we might for example talk about how to migrate files out of formats which are ‘at risk’, rather than how we might stop files from becoming obsolete at all. In part, we do this because we have to: we are not looking at future problems, but at problems with which we are already engaged.

A particular side effect of focusing on the consequences, however, is that since our current problems are in fact the result of many competing factors, we have a tendency to artificially conflate many separate issues into a single ‘digital preservation issue’. Among other things, this makes it difficult to look for singular, effective solutions.

This paper looks at a particular facet of the ‘digital preservation problem’, namely, the capacity of a preserving institution to ingest and preserve files from an external origin over time. An example of this might be when an institution like the National Library of Australia is tasked with preserving a manuscript in digital form that has been donated by a prominent Australian author. Although there are many viable theoretical solutions for preserving digital objects over time, such as emulation or migration, most (if not all) long term preservation solutions rely on the capacity of an institution to accurately identify the file formats used by digital objects, and to record a meaningful context for those digital objects. This paper focuses on the difficulties that are inherent in doing this, and suggests that this functionality could be implemented at the level of the operating system, in a way which would be of use not only to collecting institutions, but also to end users.

Identifying Specific Issues

There are specific issues that make it difficult to consistently and accurately prepare large volumes of digital objects for preservation. The following are some of the primary issues that need to be overcome in order to effectively preserve incoming materials.

Few file formats are ‘preservation’ formats

Often file formats can become ‘stale’ very quickly, much more quickly than their authors intended. This can certainly happen when the author of a file makes a faulty assumption about the longevity of a given file format, but also simply as a natural part of the format’s intended life cycle. Some formats are created in an atmosphere of designed obsolescence, in which the end of support for one format is planned to coincide with the release of new software, which may or may not support the older file formats (for technological or business reasons). Not only does this create a serious problem if those formats are being deposited in a repository for long term access, it can also cause problems even while files in these formats are still being used on the user’s computer. If for whatever reason the user loses access to the software used for reading a given file format, access to files saved in that format may be lost.

This isn’t to say that there aren’t any formats which could be used for long term preservation. For any given genre of content (e.g., audio, video, still image), there are usually formats which have been specifically designed for long term preservation. However, there is generally a gap in functionality, or practicality, between these archival formats and their more portable, temporary equivalents. So, it is generally rare that these formats are encountered ‘in the wild’; at this time, we are more likely to encounter a Word 97 format document than one encoded as PDF/A, for example.

Institutions sometimes place more importance on the format than the user does

For preserving institutions, the format that content arrives in can sometimes become somewhat sacrosanct, even in instances where it may not be as important to the actual creator of that content. For many users, a file format is nothing more than a momentary carrier for a given piece of content. For example, a user may save a file as a JPEG image for the sake of uploading it to a site such as Flickr. In this instance, to the user, the file format itself is irrelevant – it only represents a transport mechanism for the photo, which is the intellectual level at which they interact with that particular piece of content. In many cases, the file formats that come to an institution may not have any real importance to the creator of the content that they contain. However, substantial theory is developed, and resources are routinely devoted, to ensuring that content can be preserved in the form in which it was received.

Retaining contextual information is difficult

In order to meaningfully preserve a digital object (for example, a manuscript saved in Word 97 format), it is necessary to know not only the file format itself, but also a lot of information which might not be available in the digital object’s data or metadata. Information such as authorship, for example, might be present as metadata, or actually embedded in the document itself (perhaps as a by-line, or sign off), but in many cases it is nowhere in the document. This was found to be the case for many documents during a backlog project conducted at the NLA, in which many old documents were found to have very little contextual information associated with them.

Even if a digital object’s file format can be identified accurately and consistently, and metadata can be extracted from that file, there is still information that is generally not present in file metadata. For example, a photograph might come accompanied by embedded metadata that describes the photographer, the camera used, and the colour space of the photo, but it may not indicate whether the current file that contains the photo is the original file, or a derivative of some other file. Similarly, there is no way of knowing whether there were many versions of the original created, of which this is only one. For documents, it can be difficult to know whether we are receiving a draft or a final copy. Given a directory full of similarly named files, we cannot automatically tell what the relationship between those files is. There are many more areas in which contextual information is simply not available. The utility of this data is relative, of course, but it is still true that there are many circumstances where it would help to preserve an object, and that it is generally very difficult (if not impossible) to find.

All of these issues make it very difficult to identify file formats

Thankfully, tools for identifying file formats are becoming more reliable. Tools such as DROID 4, for example, do quite a good job of identifying a broad range of formats. However, the way in which consumers use file formats, and the sheer number of file formats in current use, mean that although this software is a good mechanism for getting a general overview of what sorts of files are present, it often cannot identify files with the sort of accuracy required for carrying out large-scale, and mostly automated, preservation actions.
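
To make the identification problem concrete, the following is a minimal sketch of the kind of signature-based matching that tools like DROID perform. The handful of ‘magic number’ signatures shown here is illustrative only; a real signature registry (such as PRONOM, which DROID draws on) is far larger, distinguishes versions and container formats, and matches byte sequences at arbitrary offsets, which is exactly why building and maintaining such registries is a large, ongoing undertaking.

    import pathlib

    # A few illustrative magic-number signatures. Note that a single signature
    # such as the ZIP header can correspond to many logical formats, which is
    # one reason naive matching lacks the accuracy needed for automation.
    SIGNATURES = {
        b"%PDF-": "PDF document",
        b"\x89PNG\r\n\x1a\n": "PNG image",
        b"\xff\xd8\xff": "JPEG image",
        b"PK\x03\x04": "ZIP container (also ODF, OOXML, EPUB, ...)",
        b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1": "OLE2 compound file (e.g. Word 97-2003)",
    }

    def identify(path: pathlib.Path) -> str:
        """Return a coarse format guess based on the file's leading bytes."""
        header = path.read_bytes()[:16]
        for magic, name in SIGNATURES.items():
            if header.startswith(magic):
                return name
        return "unknown"

    if __name__ == "__main__":
        for p in pathlib.Path(".").iterdir():
            if p.is_file():
                print(p.name, "->", identify(p))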

Metadata can be stored inconsistently or incompletely

Although some programs do actually store metadata in their file formats, retrieving this data can be problematic for many reasons. Different file formats store metadata differently (and different software can sometimes store it differently, even within multiple instances of the same file format). Alternatively, for some file formats, the owner may not be willing to disclose how metadata is stored. This means that in many instances, even if the file can be identified, there may be no single methodology for extracting all available metadata, accurately or otherwise. Currently, a new mechanism needs to be implemented for each new file format that contains any form of metadata. The large number of formats, their permutations, and their rate of growth all mean that this is an ultimately unsustainable approach.
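
As a rough illustration of why a new mechanism is needed per format, the sketch below extracts descriptive metadata from two superficially similar word-processing formats using only the Python standard library. Both are ZIP-based, yet each keeps its metadata in a different internal file with a different schema, and every further format (legacy DOC, JPEG with EXIF, and so on) would need yet another extractor added to the dispatch table. The function names and the table itself are purely illustrative.

    import zipfile
    import xml.etree.ElementTree as ET

    def extract_ooxml(path):
        """Office Open XML (e.g. .docx): core metadata lives in docProps/core.xml."""
        with zipfile.ZipFile(path) as z:
            root = ET.fromstring(z.read("docProps/core.xml"))
        return {el.tag.split("}")[-1]: (el.text or "") for el in root}

    def extract_odf(path):
        """OpenDocument (e.g. .odt): metadata lives in meta.xml, with its own schema."""
        with zipfile.ZipFile(path) as z:
            root = ET.fromstring(z.read("meta.xml"))
        return {el.tag.split("}")[-1]: el.text
                for el in root.iter() if el.text and el.text.strip()}

    # Every additional format needs its own extractor, so this table (and the
    # code behind it) has to grow and be maintained as new formats appear.
    EXTRACTORS = {
        ".docx": extract_ooxml,
        ".odt": extract_odf,
        # ".doc": would need an OLE2 property-set parser
        # ".jpg": would need EXIF/IPTC/XMP parsers, and so on
    }

    def extract_metadata(filename):
        suffix = filename[filename.rfind("."):].lower()
        extractor = EXTRACTORS.get(suffix)
        return extractor(filename) if extractor else {}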

Identifying file formats takes a long time

Even if performance is otherwise streamlined, file format identification and validation is a problematic bottleneck. The NLA is (at the time of writing) running DROID 4 over a large set of sample data taken from our PANDORA web archive. Over files that were generally quite small (fragments of websites), it took close to 40 days to process approximately 17 million files, a rate of roughly five files per second. This is obviously a non-trivial amount of time.

Summarising the Above

To summarise the points above: while we do in many instances have technology that allows us to identify and extract information from some digital objects, there are also many instances in which this is not possible. Moreover, even in those cases where it is possible, there is a lot of contextual information about a digital object which it may no longer be possible to recover. Not always, but certainly in many cases, this contextual information would be more useful for deriving meaning from the object than any of the metadata it contains.

This indicates that not only is it difficult to adequately prepare a digital object for long term preservation in most cases, there are also many cases where it is simply not possible at all. For institutions that do not place limitations on incoming material, this is the current situation – quite far from ideal. At least for these types of institutions, it can also be suggested that, based on the above, the situation that would make it easiest (and therefore most likely) to preserve digital objects over time would be one in which, for each file coming into the organisation:

  • the format was always known
  • the metadata could always be extracted
  • the contextual information was always available

Although there are many technical hurdles that make it unlikely that we will ever be able to reliably and consistently obtain this information from any and all file formats after their acquisition, it is not entirely unreasonable to propose the circumstances that would make these requirements attainable, perhaps for a large proportion of our files.

As has already been mentioned, it has previously been necessary for digital preservation to focus on dealing with the consequences of ‘the problem’, rather than trying to engineer a situation in which ‘the problem’ is no longer apparent. From this perspective, the solution which addresses the above points is to invest more money and resources into the avenues we have already been exploring, such as file identification and metadata extraction. However, at least at the time of writing, it appears there are simply too many technological hurdles for this to be a reasonable or sustainable approach for all the materials that an institution like the NLA can realistically be expected to encounter.

However, even if it is not possible to generate this data at the point at which a file is received, there does exist for many files, irrespective of their type or format, a time at which this information is probably known, or could more easily be ascertained.

Consider how, even if people generally aren’t predisposed to sorting their data in arrangements that facilitate long term access, they will normally at least have their files organised enough that they can find and work with them in the very short term. For example, many people will be meticulous in ensuring they know where the document they are currently working on resides on their local disc, perhaps even manually making backup copies with different names. However, as soon as the useful lifespan of that document has passed, the same people may be content to copy it to a CD-ROM, or in some instances just delete it altogether. In short, while a digital object holds some perceived value to a user, they will endeavour to make sure it remains accessible.

What this indicates is that there is a time at which the requirements identified above are most likely to be available for any type of file, irrespective of its format. Specifically, this information is most likely to be available while the file is in active use.

Additionally, in many instances, it isn’t just the user who has a greater knowledge of a file while it is still actively being used, but the operating system itself. For most of the file types that users interact with frequently, the operating system usually has an associated application, recorded in an internal registry. This is what lets a user double click on a DOC file to edit it in Word, rather than first loading Word and then explicitly pointing to the file from within the application. Even though this kind of association is stored on a group basis, and is not very reliable at the individual file level (try changing the extension on a .DOC file to .PDF), it still represents, in theory, a potentially invaluable piece of information. However, this information resides solely within the operating system. If these files are transferred elsewhere, such as onto a CD-ROM backup disc, then outside of the context of the user’s work environment, this information will probably be lost.
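
On Windows, for instance, that association information is already sitting in the registry and can be read programmatically. The sketch below, using Python’s standard winreg module, looks up the command the shell would use to open files with a given extension. It is deliberately simplified: real association resolution also consults per-user ‘UserChoice’ keys and other fallbacks, and, as noted above, the mapping is by extension rather than by the actual content of any individual file.

    import winreg

    def associated_open_command(extension):
        """Return the command line Windows would use to open files with this
        extension, or None if no association is registered."""
        try:
            # The extension key maps to a ProgID, e.g. ".doc" -> "Word.Document.8"
            prog_id = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, extension)
            # The ProgID's shell\open\command key holds the launch command line
            return winreg.QueryValue(
                winreg.HKEY_CLASSES_ROOT, prog_id + r"\shell\open\command")
        except OSError:
            return None

    # On a machine with Word installed this might print something like
    # '"C:\\Program Files\\...\\WINWORD.EXE" /n "%1"'.
    print(associated_open_command(".doc"))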

In part, this comes down to how file formats have historically been constructed: essentially as self-contained objects, with the metadata that the designer believes to be most important embedded directly within the object. Although this is generally sufficient to allow the user to work meaningfully with a file, only a few file formats store detailed preservation-type data. For example, it is rare to encounter a file format that records historical events for that file. For collecting institutions, this can often mean that unless a file is accompanied by some kind of human-readable description, this kind of information is stored nowhere at all.

Outside of collecting institutions, however, there are plenty of practical cases where the metadata stored in a file does not sufficiently address the needs of the user. For example, a user could store backups of their documents, again onto a CD-ROM. Given that they could potentially uninstall various applications, or even move to a new computer, before having a need to look at the content on that CD-ROM, they might no longer have the capacity to view the stored documents. In these cases, a user might sometimes remember the software they had used previously, and manually restore access by reinstalling it (assuming that it was still possible to do so!). However, that information would exist only in their own heads.

Alternatively, there are instances in which internally storing metadata has proved insufficient for dealing with complex arrangements of information, even within file formats which are already quite metadata rich, such as MP3 files, which use ID3. For example, even though ID3 contains a very large range of information about the file itself, and to a limited degree can contain contextual information about the group that a given file belongs to, such as the ‘album’ field, it does not explicitly contain information about the other files to which a single file is related. So, if a user wants to create a more complex arrangement of songs, such as a series of playlists, this information needs to be generated, maintained, and interpreted outside of the MP3.

The above example also tells us that there is at least some precedent for storing additional information about a file outside of the file itself, so long as the user gains some benefit from doing so. Usually, this is done using fairly narrow-purpose applications, such as Adobe Lightroom for managing personal photo catalogues. These applications generally take advantage of already metadata-rich file formats, and include additional information outside the context of the file where it helps to keep information organised (e.g., virtual copies of photos, or additional ‘collections’ in Lightroom).

So, it is plausible to suggest that some of the information that is useful in the context of photos and music could also be useful to users in the context of their other content. In that case, it becomes a question of identifying an appropriate location in which to collect, store, and manage that data, preferably in such a way that the information which allows a user to interact meaningfully with digital objects is still harvestable at a later date by a collecting institution.

The real problem is in maintaining this information after the file is taken out of its original authoring environment. Take the example of a flash drive: file associations are lost when its files are taken to a new computer which may not have the same associations for those file types, or may have those file types associated with completely different (and incompatible) applications. As such, it may not be practical to rely on any single application, or combination of applications.

It has, however, been explored in theory, and to a limited extent in practice, that the operating system itself, via the file system, could in fact be responsible for this kind of information. Microsoft has previously explored this concept in various guises, most recently as WinFS, which went as far as a small semi-public beta release, but has not been released since, or even spoken of for some time now. Even though WinFS seemed to be primarily focused on returning richer search results, rather than providing a central source of contextual data, it still represents a move towards the kind of technology that could potentially take on the additional responsibility of maintaining a context for all the files on a user’s computer.

So rather than storing tagging information, or providing analysis of which songs are by the same artist, the file system itself could store more complex information, such as event and agent tracking, and the associations between user-defined or generated clusters of files (for example, iterative versions of the same image).
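
As a sketch only, the kind of record such a file-system service might keep could look something like the following. Every field and type name here is hypothetical; the aim is simply to make concrete what ‘event and agent tracking’ and ‘clusters of files’ could mean in practice.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class Event:
        """One thing that happened to a file: created, saved, migrated, copied..."""
        timestamp: datetime
        action: str          # e.g. "created", "saved", "migrated"
        agent: str           # the application or user responsible
        detail: str = ""     # e.g. "migrated from Word 97 to Word 2007 format"

    @dataclass
    class FileContext:
        """Contextual record the file system could maintain alongside each file."""
        path: str
        format_id: str                 # a format identifier, if one is known
        opens_with: str = ""           # the associated application at last use
        derived_from: str = ""         # the source file, if this is a derivative
        related: List[str] = field(default_factory=list)   # versions, clusters
        events: List[Event] = field(default_factory=list)  # history of the file

    # Three iterative drafts of the same manuscript would then form one cluster
    # of related FileContext records, rather than three unconnected files.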

If this kind of data recording could be coupled with an efficient way to transfer the metadata outside of the user’s operating environment, then in theory collecting institutions could potentially be in a position where the majority of the files they receive could be archived and preserved without having to run any kind of identification or metadata extraction processes.
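
One plausible transfer mechanism, anticipating the ‘sidecar’ file mentioned in the benefits list below, would be to serialise that context into a small XML file written next to the object whenever it leaves the original environment. The element names and file naming convention in this sketch are invented for illustration; a workable scheme would need an agreed, documented schema before collecting institutions could rely on it.

    import hashlib
    import pathlib
    import xml.etree.ElementTree as ET

    def write_sidecar(path: pathlib.Path, context: dict) -> pathlib.Path:
        """Write a hypothetical XML sidecar next to a file being exported."""
        root = ET.Element("fileContext")
        ET.SubElement(root, "filename").text = path.name
        # A fixity value lets the receiving institution validate the transfer.
        ET.SubElement(root, "sha256").text = hashlib.sha256(path.read_bytes()).hexdigest()
        for key, value in context.items():
            ET.SubElement(root, key).text = str(value)

        sidecar = path.parent / (path.name + ".context.xml")
        ET.ElementTree(root).write(str(sidecar), encoding="utf-8", xml_declaration=True)
        return sidecar

    # e.g. write_sidecar(pathlib.Path("manuscript.doc"),
    #                    {"format": "Word 97", "opensWith": "WINWORD.EXE",
    #                     "derivedFrom": "manuscript_draft2.doc"})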

Hindrances and Benefits

The history of WinFS lends weight to the suggestion that implementing this kind of solution would not be a trivial matter. It would not only require a real rethink of how we treat and imagine the files on our operating systems, it would also require a change in how applications interact with the operating system. In some ways, it would require a rethink of the role played by the operating system itself.

Additionally, it would be naïve to suggest that this kind of enhancement to the operating system could be implemented by a single company, and then picked up by everybody else. If such preservation functionality were to be implemented, it is unlikely we would see a consistent implementation across all operating systems.

But even if there are significant issues that would complicate this kind of development, there are certainly significant benefits to pursuing it, which would, at least from the outset, appear to outweigh the negatives. There are a number of benefits not only for preserving institutions, but also for vendors and end users:

  • For files that were being moved to a file system that did not support this additional metadata, a ‘sidecar’ file containing the metadata (perhaps as XML, as sketched above) could be generated on the fly. If this were done in a standard manner, collecting institutions could take advantage of this data to aid their ingest processes.
  • Given an easy-to-use API, rendering and authoring programs could start checking the integrity of files while they are still in the possession of the user, thus helping to mitigate some of the issues associated with accepting corrupted information (for example, by generating a checksum of files at regular intervals; a minimal sketch of this kind of periodic check follows this list).
  • When the user has a file on their computer that they can no longer access (say, a file in a document format for which they no longer have a license for the word processor), the file system could notify them and prompt them either to migrate the file to a format they can access (informing them if some file-specific metadata or formatting might be lost), or to do nothing. It may be possible to privilege some migrations over others; for example, a migration path provided by the same vendor may offer migration options with the least amount of loss, which could be very important to the user, especially for proprietary formats.
  • Checking for view paths could be networked, so that if a user has several machines running at home, all of them could share view path information. Even if the computer they are currently using didn’t have a viable view path for a file, they could be informed that there is still a usable view path on another computer on their home network.
  • Even if there’s no longer a view path on any of a user’s computers, they could still be informed of alternative ways to access the file, be it via an online service, or by purchasing new software.
  • An external user donating to an institution could make sure that the materials are already in the best format for deposit before they are released, which would in turn give collecting institutions the capacity to set stricter submission policies for digital materials. This would reduce the amount of unknown material the institution is required to deal with.
  • Vendors could potentially use this system as a means of rolling out updates to their file formats which would otherwise be difficult to communicate to their users. As new updates to the file format become available, these could automatically be offered to users via the internet (think of the Firefox update delivery mechanism). This could potentially lower the business costs associated with maintaining access to legacy formats. Users will be more likely to use operating systems that both lower the complexity of keeping old files and make older content more accessible. For users that keep a great deal of digital material, including photos, music, and documents, this kind of enhancement could make certain vendor-supplied solutions far more attractive than those which did not provide this kind of additional information storage.
  • If the collecting institution could more strictly define its accepted formats, any normalisation that would theoretically have to occur before a file could be submitted could take place with the author of the content, making it significantly less likely that important data would be lost.
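
As flagged in the integrity point above, the periodic check itself need not be complicated. The sketch below compares each file’s current checksum against a stored baseline and reports anything that has changed; the JSON baseline file and folder layout are stand-ins for whatever store an operating-system service would actually use, and a real implementation would also need to distinguish deliberate edits from silent corruption (for example, by consulting modification times or the event history discussed earlier).

    import hashlib
    import json
    import pathlib

    BASELINE = pathlib.Path("checksums.json")   # stand-in for an OS-managed store

    def sha256(path: pathlib.Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def check_integrity(folder: pathlib.Path) -> list:
        """Recompute checksums under `folder`, report files whose digest differs
        from the stored baseline, then refresh the baseline."""
        baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
        current, changed = {}, []
        for p in folder.rglob("*"):
            if p.is_file():
                digest = sha256(p)
                current[str(p)] = digest
                if str(p) in baseline and baseline[str(p)] != digest:
                    changed.append(str(p))
        BASELINE.write_text(json.dumps(current, indent=2))
        return changed

    # e.g. run on a schedule: print(check_integrity(pathlib.Path("Documents")))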

Use Cases

To further illustrate how this kind of change could be useful for end users, there follow some use-case stories showing how real world scenarios could benefit from this kind of technology. In all these cases, it is worth noting just how little the overall process actually impacts on the user’s normal workflows, and how much value it adds to their overall experience. As mentioned previously, all these use cases are Windows centric, but this is done to maintain some cohesion between the examples more than anything; this kind of functionality could equally be implemented in other operating systems.

1. Migrating to a newer file format

Bill has been working with various versions of Word for years now. He stores almost all his text-based documents in one or another version of the Word format family, usually whichever format is most current for the version he has installed at any given time. As such, he has some files that were saved in Word 97, Word 2000, and Word 2003. He even has a few documents in Word X, which were emailed to him, and a few in even earlier legacy formats, from when he was using MS-DOS. Recently, he bought and installed a copy of Word 2007. Upon finishing installation, he sees a dialogue pop up on his screen which informs him that a number of files in older versions of the DOC format have been found, and gives him the option to migrate them. Bill indicates that he wants to migrate the files now, and so a new dialogue appears, showing the files that he is about to migrate, as well as a list of options for which format to migrate to. These options might include the current version of Word, and perhaps another format from another vendor.

Given that Bill plans to be using Word 2007 for the moment, he decides to migrate to the latest version of the Word format. A new dialogue appears, displaying the migration process as each document is migrated into the new format. When the process is over, he is given a list of the files converted, and is given a chance to QA any of the files, to make sure nothing got lost in the migration. If he weren't happy with one of the migrations, he would have the option to retain the older document, or to retry with different settings.

2. Losing access to the primary view path

Melinda has been using Photoshop CS2 on her work laptop to view the RAW format digital photos she took on her last holidays. She’s been using Photoshop to tweak the white levels, and get some more detail out of the dark areas of each photo.

Over the weekend, she copied the photos from her laptop to her home PC, which is where she’s planning on storing them more permanently (along with copies on her external hard drive). Her home computer is brand new, and doesn’t have much software on it. It doesn’t have anything like Photoshop.

When she copies the files and turns off her laptop, a prompt appears which informs Melinda that she currently has no way of reading the files she has just copied onto her computer. It gives her the option to select a program that can read the files, as well as a list of vendor websites from which she could download a program to read the images.

Melinda knows that there’s not really anything on the new computer that can read RAW files, and so she decides to download a program to read the photos. Since she already knows her way around Photoshop, she clicks a link that takes her to the Adobe website, from where she can purchase a new software license.

3. Donating files to a collecting institution

Steve volunteers part time for a grass roots political movement in New Zealand, and he would like to make a voluntary deposit of the files that contain their recent press releases and flyer layouts to the National Library of New Zealand. Steve does all of his word processing using Open Office, and uses Inkscape to create the flyer graphics.

He has a Java based donation program that he downloaded from the NLNZ. He loads this, selects the .ODT and .SVG files that he wants to upload, and initiates the transfer process. Before the files are uploaded, the Java program shows him a pop-up dialogue which informs him that the SVG files he is about to upload are not in a format supported at the NLNZ. It tells him that if he doesn’t migrate them to another format, the long term preservation of those files cannot be guaranteed.

The prompt gives him the option of converting the SVG files into JPEGs, or continuing without making changes. Steve doesn’t really mind so much about keeping the flyer images in a vector format, so he migrates the files to JPEG. A new process dialogue appears, showing the migration process. When this is finished, he is given an opportunity to QA the images, to make sure they still look the way he intended. He’s satisfied that nothing has been lost that he can’t live without, and so he dismisses the dialogue, and the upload to the NLNZ continues normally.

At the NLNZ’s end, they now receive a package that contains the files sent by Steve, as well as a manifest of uploaded files, together with checksums to validate the integrity of the transfer, and detailed information about the author, the date of creation, and relevant auxiliary information about the images and documents. It also informs them that the JPEG files they have received were originally SVG files, and details their migration information. From this point, their ingest process consists of lodging the metadata, and storing the file. No additional processing is required.

4. Updating and Transferring Metadata

Melinda has been using Photoshop Elements to colour correct her photos, and has finally gotten everything to look the way she wants. While she was taking the photos, she had a small GPS unit attached to her camera, and so all her RAW files are tagged with geo‐spatial information. She has added a few text and audio annotations, as well as general text tags to help sort and contextualise the photos. She’s also ranked a few of her favourite photos.

Over the weekend, Melinda decides to send some of these photos to Jane, one of her friends who lives across the country. She knows Jane doesn’t have a copy of Photoshop, and so she batch converts her photos into JPEG format. When Melinda attaches the photos to an email, she is given a prompt that asks if she would like to attach additional metadata.

She does so, and now, along with the JPEG images she attaches, there is a new additional file that contains the metadata (along with additional data that cannot be stored in the JPEGs, such as the audio tags Melinda recorded) for all the images in the email. This file doesn’t necessarily have to be visible to Melinda – the email program could detect that it is a metadata carrier, and hide it from view.

Across the country, when Jane opens the list of photos, Windows automatically associates the metadata that was contained in the email with the new copies of the JPEGs, transparently and without Jane being aware of the process.

As Jane starts to browse the images, she sees that not only is she looking at the photos themselves, but she can now see all of Melinda’s annotations, and even hear Melinda speak some of the audio annotations. As her viewer detects that there is geo-spatial metadata associated with the files, it allows Jane to view the photos in a geographical context, using a mapping resource such as Google Earth.

Conclusion

Old material is always going to be a problem. These kinds of enhancements to modern operating systems are unlikely to do anything for materials which have already been donated, and may not be able to do anything with some older file formats currently on users’ computers. However, what this kind of solution could do is help us get out of the current untenable situation. Basically, if operating systems don’t start doing something to facilitate proper digital preservation soon, then we are always going to be dealing with the same problems that were outlined above. Fundamentally, it doesn’t matter how good our file format identifiers are; they will never be as up to date as is required to properly deal with the current stream of incoming material. As a collective, we have already devoted billions of dollars to the problem, but if we only ever try to take care of things retrospectively, all our actions will ultimately be futile.

Realistically, Microsoft operating systems run in more homes with desktop computers than any alternative. In some government sectors, it can be difficult to find a non-Windows desktop computer. Moreover, a distinctly large proportion of the text documents that are donated for long term storage are already in Microsoft formats, having been authored mostly in some version of Word or Excel. Even if some of the most rudimentary steps towards the above scenario were taken by vendors such as Microsoft, at least for newly generated material coming in, we would be in a much better position to focus on what’s really important: making sure that the data we have is accessible for the long term. We could then start directing our energies and money into actually preserving this content for the future.

The author welcomes feedback, and the opportunity to discuss the ideas in this paper further.