Disk, Reordering, and Maintenance Plan Insights

Although my last post reported that I’d got through the long slog of the conversion aspects of this preservation project, in fact there was still more slog of other sorts to go. A lot more slog, in fact: there was the transfer of the contents of 126 CD/DVD disks to the laptop; and there was the reordering of pages in 881 files to rectify the page order produced in the 1990s when, lacking a double-sided scanner, I scanned all the front sides first and then turned over the stack of pages to scan the reverse sides. In fact this exercise involved yet more conversion (from multi-page TIF files to PDF) before the reordering could be done.

This latter task really took a huge amount of time and effort and was yet another reminder of how easy it is to specify tasks in a preservation project without really appreciating how much hard graft they will entail. Having said that, it’s worth noting that my PDF application – eCopy PDF Pro – had two functions which made this task a whole lot easier. First, the ability to have eCopy convert a file to PDF is available in the menu brought up by right-clicking on any file; this automatically suggests a file title (based on the title of the original file) for the new PDF in the Save As dialogue box, and then automatically displays the newly created file – all of which is relatively quick and easy. Second, eCopy has a function whereby thumbnails of all the pages in a document can be displayed on the screen and each page can be dragged and dropped to a new position. I soon worked out that the front-sides-then-reverse-sides scan produces a standard order in which the last page in the file is actually page 2 of the document; if you drag that page to be the second page in the document, then the new last page will actually be page 4 of the document and can be dragged to just before the 4th page in the document. In effect, to reorder you simply keep dragging the last page to before page 2, then before page 4, then before page 6 and so on until the end of the file is reached. Both these functions (to be able to click on a file title to get it converted, and to drag and drop pages around a screenful of thumbnails) are well worth looking for in a PDF application.
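For anyone comfortable with a little scripting, the same reordering can also be expressed very compactly. Below is a minimal sketch (not the eCopy drag-and-drop workflow described above) using the pypdf library; the file names are hypothetical and it assumes an even number of pages.

```python
# Hypothetical sketch: reorder a front-sides-then-reverse-sides scan with pypdf.
# Scan order is 1, 3, 5, ..., 6, 4, 2; we interleave from both ends to restore 1, 2, 3, ...
from pypdf import PdfReader, PdfWriter

reader = PdfReader("scanned.pdf")               # assumed input file name
writer = PdfWriter()
pages = reader.pages
assert len(pages) % 2 == 0, "expects an even number of pages"

for k in range(len(pages) // 2):
    writer.add_page(pages[k])                   # front side: odd document page
    writer.add_page(pages[len(pages) - 1 - k])  # matching reverse side: even document page

with open("reordered.pdf", "wb") as f:
    writer.write(f)
```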

Regarding the disks, I was expecting to have trouble with some of the older ones since, during the scoping work, I had encountered a few which the laptop failed to recognise. I did try cleaning such disks with a cloth, without much success. However, what did seem to work was to select ‘Computer’ on the left side of the Windows Explorer window, which displays the laptop’s own drive on the right side of the window together with any external disks that are present. For some reason, disks which kept on whirring without seeming to be recognised just appeared on this right side of the window. I don’t profess to understand why this was happening – but I was just glad that, in the end, there was only one disk whose contents I couldn’t get the machine to display and copy.
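Once a disk is visible, copying it off and checking that the copy is faithful can also be scripted. The sketch below is hypothetical: the drive letter and destination folder are assumptions, and it simply copies every file and compares source and copy byte for byte.

```python
# Hypothetical sketch: copy a CD/DVD's contents to the laptop and verify each file.
import filecmp
import shutil
from pathlib import Path

source = Path("D:/")                              # assumed optical drive letter
dest = Path("C:/Pawdoc/DiskContents/Disk042")     # assumed destination folder
dest.mkdir(parents=True, exist_ok=True)

for src in source.rglob("*"):
    if src.is_file():
        target = dest / src.relative_to(source)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, target)                 # copy the file along with its timestamps
        if not filecmp.cmp(src, target, shallow=False):
            print(f"Copy mismatch: {src}")
```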

I’m now in the much more relaxed final stages of the project, defining backup arrangements and creating the Maintenance Plan and User Guide documents. The construction of the Maintenance Plan has thrown up a couple of interesting points. First, since it requires a summary of what preservation actions have been completed and what preservation issues are to be addressed next time, it would have made life easier to construct the preservation working documents in such a way that the information for the Preservation Maintenance Plan is effectively pre-specified – an obvious point really but easy to overlook – and I did overlook it. The second point is a more serious issue. The Maintenance Plan is designed to define a schedule of work to be undertaken every few years; it’s certainly not something I want to be doing very often – I’ve got other things I want to do with my time. However, some of the problem files I have specified in the ‘Possible future preservation issues’ section of the Maintenance Plan could really do with being addressed straight away – or at least sooner than 2021, when I have specified the next Maintenance exercise should be carried out. I guess this is a dilemma which has to be addressed on a case by case basis. In THIS case, I’ve decided to just leave the points as they are in the Maintenance Plan so that they don’t get forgotten, but to possibly take a look at a few of them in the shorter term if I feel motivated enough.

The Conversion Slog

I’m glad to say I’ve nearly finished the long slog through the file conversion aspects of this digital preservation project. After dealing with about 900 files I just have another 50 or so Powerpoints and a few Visios to get through. It’s been a salutary reminder of how easily large quantities of digital material could be lost simply because the sheer volume of files makes retrieving them a very daunting task.

Below are a few of the things I’ve learnt as I’ve been ploughing through the files.

Email .eml files: These are mail messages which opened up fine in Windows Live Mail when I did the scoping work for this project. Unfortunately, since then I’ve had a system crash and Live Mail was not loaded onto my rebuilt machine; and Microsoft removed all Live Mail support and downloads at the end of 2017. On searching for a solution on the net, I found several suggestions to change the extension to .mht to get the message to open in a browser. This works well, but unfortunately the message header (From, To, Subject, Date) is not reproduced. I ended up downloading the Mozilla Thunderbird email application, opening each email in turn in it, taking screenshots of each screenful of message, copying them into Powerpoint, saving each one as a JPG, and then inserting the JPGs for all the emails in a particular category into a PDF document. A bit tortuous, and maybe there are better ways of doing it – but at least I ended up with the PDFs I was aiming for.
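One possibly simpler alternative, sketched below on the assumption that the .eml files are standard internet mail messages, would be to let Python’s built-in email module pull out the header fields and message body directly. The folder and file names are hypothetical, and the output here is plain text rather than the PDFs I was aiming for.

```python
# Hypothetical sketch: extract headers and body text from .eml files with the
# standard library, so the From/To/Subject/Date information is not lost.
from email import policy
from email.parser import BytesParser
from pathlib import Path

for eml in Path("emails").glob("*.eml"):          # assumed folder of .eml files
    with open(eml, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    body = msg.get_body(preferencelist=("plain", "html"))
    with open(eml.with_suffix(".txt"), "w", encoding="utf-8") as out:
        for field in ("From", "To", "Subject", "Date"):
            out.write(f"{field}: {msg.get(field, '')}\n")
        out.write("\n")
        out.write(body.get_content() if body else "")
```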

Word for Mac 3.0 files: These files did open in MS Word 2007 – but only as continuous streams of text without any formatting. After some experimentation, I discovered that doing a carriage return towards the end of the file magically reinstated most of the formatting – though some spurious text was left at the end of the file. I saved these as DOCX files.

Word for Mac 4.0 & 5.0 and Word for Windows 1.0 & 2.0: These documents all opened up OK in Word 2007. However, I found that in longer documents which had been structured as reports with a contents list, the paging had got slightly out of sync, so that headings, paragraphs and bullets were left orphaned on different pages. I converted such files to DOCX format in order to have the option to reinstate the correct format in the future. Files without pagination problems, or which I had been able to fix without too much effort, were all converted to PDF.

PDF/A-1b: I have previously elected to store my PDF files in the PDF/A-1b format (designed to facilitate the long-term storage of documents). However, on using the conformance checker in my PDF application (eCopy PDF Pro) I discovered that they possessed several non-conformances; and, furthermore, the first use of eCopy PDF Pro’s ‘FIX’ facility does not resolve all of them. I decided that trying to make each new PDF I created conform to PDF/A-1b would take up too much time and would jeopardise the project as a whole. So, I included the following statement in the Preservation Maintenance Plan that will be produced at the end of the project: “PDF files created in the previous digital preservation exercise were not conformant to the PDF/A-1b standard, and the eCopy PDF Pro ‘FIX’ facility was unable to rectify all of the non-conformances. Consideration needs to be given as to whether it is necessary to undertake work to ensure that all PDF files in the collection comply fully with the PDF/A-1b standard.”

PowerPoint for Mac 4.0, Presentation 4.0, and 97-2003: All of these failed to open with Powerpoint 2007, so I used Zamzar to convert them. Interestingly, Zamzar wouldn’t convert to PPTX – only to Powerpoint 1997-2003, which I was subsequently able to open with Powerpoint 2007. So far, it has converted over 100 Powerpoints and failed with only four (two Mac 4.0 and two Presentation 4.0). The conversions have mostly been perfect, with the small exception that, in some of the files, some of the slides include a spurious ‘Click to insert title’ text box. I can’t be sure that these have been inserted during the conversion process, but I think it unlikely that I would have left so many of them in place when preparing the slides. Zamzar’s overall Powerpoint conversion capability is very good – but I have experienced a couple of irritating characteristics: first, on several occasions it has sent me an email saying the conversion has been successful but has then failed to provide the converted file, implying that it wasn’t able to convert the file; and second, the download screen enables five or more files to be specified for conversion, but if several files are included it only converts alternate files – the other files are reported to have been converted but no converted file is provided. This problem goes away if each file is specified on its own in its own download screen. Another small constraint is that the free service will only convert a maximum of 50 files in any 24-hour period – but that seems a fair limit for what is a really useful service (at the time of writing, the fee for the cheapest level of service was $9 a month).

UPDATED and ORIGINAL: I am including UPDATED in the file title of the latest version of a file, and ORIGINAL in earlier versions of the same file, because all files relating to a specific Reference No are stored in the same Windows Explorer Folder and users need to be able to pick out the correct preserved file to open. There will be only one UPDATED file – all earlier versions will have ORIGINAL in the file title. Another way of dealing with this issue of multiple file versions would be to remove all ORIGINAL versions to separate folders. However, this would make the earlier versions invisible and harder to get at, which may not be desirable. I believe this needs further thought – and the input of requirements from future users of the collection – before the best approach can be specified.
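Whichever approach is eventually chosen, the one-UPDATED-file-per-folder convention is easy to police automatically. Here is a hedged sketch, assuming a root folder containing one sub-folder per Reference No (the path is an assumption), that flags any folder not containing exactly one UPDATED file.

```python
# Hypothetical sketch: check that each Reference No folder holds exactly one UPDATED file.
from pathlib import Path

root = Path("C:/Pawdoc")                  # assumed root folder, one sub-folder per Ref No
for folder in sorted(p for p in root.iterdir() if p.is_dir()):
    updated = [f for f in folder.iterdir() if f.is_file() and "UPDATED" in f.name]
    if len(updated) != 1:
        print(f"{folder.name}: {len(updated)} UPDATED files found")
```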

DOCX, PPTX and XLSX: When converting MS Office documents, unless I was converting to PDF, I elected to convert to the DOCX, PPTX and XLSX formats for two reasons: they are Microsoft’s future-facing formats, and – for the time being – they provide another way of distinguishing between files that have been UPDATED and those that haven’t.

Many of these experiences came as a surprise despite the amount of scoping work that was undertaken; and that is probably inevitable. To be able to nail down every aspect of each activity would take an inordinate amount of time. There will always be a trade-off between time spent planning and the amount of certainty that can be built into a plan; and it will always be necessary to be pragmatic and flexible when executing a plan.

Retrospective Preservation Observations

Yesterday I reached a major milestone. I completed the conversion of the storage of my document collection from a Document Management System (DMS) to files in Windows Folders. It feels a huge release not to have the stress of maintaining two complicated systems – a DMS and the underlying SQL database – in order to access the documents.

From a preservation perspective, a stark conclusion has to be drawn from this particular experience: the collection has used a DMS for some 22 years, during which time I have undergone 5 changes of hardware, one laptop theft and a major system crash. In order to keep the DMS and SQL Db going I have had to try to configure and maintain complex systems I had no in-depth knowledge of; engage with support staff over phone, email, screen sharing and in person for many, many hours to overcome problems; and back up and nurture large amounts of data regularly and reliably. If I had done nothing to the DMS and SQL Db over those years I would long ago have ceased to be able to access the files they contained. In contrast, if they had been in Windows folders I would still be able to access them. So, from a digital preservation perspective there can be no doubt that having the files in Windows Folders will be a hugely more durable solution.

When considering moving away from a DMS I was concerned it might be difficult to search for and find particular documents. I needn’t have worried. Over the last week or so I’ve done a huge amount of checking to ensure the export from the DMS into Windows Folders had been error free. This entailed constant searching of the 16,000 Windows Folders, and I’ve found it surprisingly easy and quick to find what I need. The collection has an Index, with each index entry having a Reference Number. There is a Folder for each Ref No, within which there can be one or more separate files.

Initially, I tried using the Windows Explorer search function to look for the Ref Nos, but I soon realised it was just as easy – and probably quicker – to scroll through the Folders to spot the Ref No I was looking for. The search function on the other hand will come in useful when searching for particular text strings within non-image documents such as Word and PDF – a facility built into Windows as standard.

I performed three main types of check to ensure the integrity of the converted collection: a check of the documents that the utility said it was unable to export; a check of the DMS files that remained after the export had finished (the utility deleted the DMS version of a file after it had exported it); and, finally, a check of all the Folder Ref Nos against the Ref Nos in the Index. These checks are described in more detail below.

Unable to export: The utility was unable to export only 13 of the 27,000 documents, and most of these failures were due to missing files or missing pages of multi-page documents.

Remaining files: About 1400 files remained after the export had finished. About 1150 of these were found to be duplicates, with contents that were present in files that had been successfully exported. The duplications probably occurred in a variety of ways over the 22-year life of the DMS, including human error in backing up and in moving files from off-line media to on-line media as laptops started to acquire more storage. 70 of the files were used to recreate missing files or to augment or replace files that had been exported. Most of the rest were pages of blank or poor-quality scans which I assume I had discovered and replaced at the point of scanning but which somehow had been retained in the system. I was unable to identify only 7 of the files.
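Matching leftovers against 27,000 exported files by eye would be painful; if ‘duplicate’ means byte-identical content, the comparison can be automated by hashing, as in this hypothetical sketch (the folder names are assumptions).

```python
# Hypothetical sketch: match leftover DMS files against exported files by content hash.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Build a lookup of every successfully exported file, keyed by its content hash.
exported = {sha256(p): p for p in Path("Exported").rglob("*") if p.is_file()}

for leftover in Path("Remaining").rglob("*"):
    if leftover.is_file():
        match = exported.get(sha256(leftover))
        if match:
            print(f"{leftover.name}: duplicate of {match}")
        else:
            print(f"{leftover.name}: no matching exported file")
```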

Cross-check of Ref Nos in Index and Folders: This cross-check revealed the following problems with the exported material from the DMS:

  • 9 instances in which a DMS entry was created without an Index entry being created,
  • 9 cases in which incorrect Ref Nos had been created in the DMS,
  • 6 instances in which the final digit of a longer-than-usual Ref No had been omitted (e.g. PAW-BIT-Nov2014-33-11-1148 was exported as PAW-BIT-Nov2014-33-11-114),
  • 3 cases in which documents had been marked as removed in the Index but not removed from the DMS,
  • 2 cases in which documents were missing from the DMS export.

It also revealed a number of problems and errors within the 17,000 index entries. These included 12 instances in which incorrect Filemaker Doc Refs had been created, and 6 cases in which duplicated Filemaker entries were identified.
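The folder-versus-index part of this cross-check boils down to a set comparison, and could be scripted along the lines of the hedged sketch below, which assumes the index Ref Nos have been exported to a plain text file, one per line (the file and folder names are invented).

```python
# Hypothetical sketch: cross-check folder names against index Reference Numbers.
from pathlib import Path

with open("index_refnos.txt", encoding="utf-8") as f:      # assumed export of the Index
    index_refs = {line.strip() for line in f if line.strip()}

folder_refs = {p.name for p in Path("C:/Pawdoc").iterdir() if p.is_dir()}

print("Index entries with no folder:", sorted(index_refs - folder_refs))
print("Folders with no index entry:", sorted(folder_refs - index_refs))
```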

The overall conclusion from this review of the integrity of the systems managing the document collection over some 37 years is that a substantial amount of human error has crept in, unobtrusively, over the years. Experience tells me that this is not specific to this particular system, but a general characteristic of all systems which are manipulated in some way or other by humans. From a digital preservation standpoint this is a specific risk in its own right since, as time goes by, as memories fade, and as people come and go, the knowledge about how and why these errors were made just disappears, making it harder to identify and rectify them.

Started and Exported

A week ago the Pawdoc DP project started in earnest after 14 months of Scoping work. The Project Plan DESCRIPTION document and associated Project Plan CHART define a 5-month period of work in 10 separate sections. The Scoping work proved to be extremely valuable in ensuring, as far as possible, that the tasks in the plan are doable and of a fixed size. No doubt there will be hiccups, but they should be self-contained within a specific area and not affect the viability of the whole project.

It took rather longer than anticipated to get the m-Hance utility to a position where it could be used to export the PAWDOC files – though I guess such delays are typical in these kinds of transactions. First there was an issue around payment, caused by the m-Hance accounting system not being able to cope with a non-company which could not be credit-checked. I paid up front and the utility was released to me once the payment had gone through the bank transfer system. After that there followed a period of testing and some adjustment using the export facility WITHOUT deletion in Fish. At that point I finalised the Plan and the Schedule and started work. However, although it was believed that the utility was working as it should, there followed a frustrating week during which its operation to export WITH delete (needed so that I could check any remaining files) kept producing exception reports, and the m-Hance support staff produced modified versions of the utility. There’s an obvious reminder here that nothing can be assumed until you try it out and verify it. Anyway, all is well now and the export WITH delete completed successfully late last night. I decided against re-planning to accommodate the delays in running it, in the belief that I can make up the time in the course of the three weeks planned to check the output from the export.

Taking Stock

I took stock of our Amazon Music services today. We have two Echo devices – one in our kitchen-diner and the other in our conservatory – which both have access to the full Amazon Music Unlimited library (apparently containing 40 million songs). For this we’re paying £9.99 a month. If we took out an Amazon Prime subscription at £79 a year, this fee would be reduced to £7.99 a month.

I had originally planned to subscribe to the Amazon Music Storage service so that we could download those albums that are not in Music Unlimited and listen to them directly through the Echos; but this service was discontinued last month. So, to listen to those albums through the Echos, we need to play them on our iPhones and connect the iPhones to the Echos using Bluetooth – quite easy to do but a little less convenient.

Given all this, I think we have reached an end point for the time being with the development of our music playing capabilities. We have access to all our music – but still don’t seem to listen to it that much. I make occasional use of the ‘Sounds for Alexa’ book – and, indeed, have enjoyed listening to some of the new albums I picked out when I was reading the Guardian music reviews and which were included in the book. I have the Music Unlimited app on my laptop which provides lots of info about the latest music, but I haven’t really made any use of that yet; and I only occasionally hear some music on the radio in the car and then ask Alexa to play it on the Echo.

Perhaps the greatest use we’ve made of the Echo is when we had family round over the Christmas period, and people enjoyed the novelty of asking it to play their favourite songs. Apparently this is a fairly typical scenario, though it is not everybody’s cup of tea; at least one of our family positively dislikes Alexa because it just takes over the proceedings with an Alexa-fest of constant calling out, song playing, crazy question asking, and the placement of risque items on our Alexa shopping list.

Apart from music, the ability to play radio stations is definitely useful. However, we have had less success with asking Alexa general questions such as sports scores: quite often Alexa doesn’t understand what we’re saying, or we fail to phrase the question in a way that Alexa can home in on the answer. Another interesting phenomenon is that occasionally Alexa thinks we have mentioned her name when actually we’ve been saying something completely different; she suddenly pipes up out of the blue, and we have to issue a curt ‘Alexa Stop!’ to quiet her down.

No doubt Alexa’s voice recognition will improve over time; and maybe we’ll start to use the additional services that Alexa is providing now (such as links to the phone) and that she will, no doubt, be providing in the future. But, as far as our music playing capabilities go, we feel we’ve done as much as we need to for the time being, so this journey is at an end.

Book vs Blog

Now that the content of the book has been put to bed and the focus has turned to bookbinding activities, it seems a good moment to reflect on whether this attempt to replicate a web site in book form has worked or not. First, though, it’s important to be clear about the following differences between the pwofc.com site and most other web sites:

  • there are no adverts
  • all the material is static – the content doesn’t change or move while being viewed.

Having said that, there are several standard web site/blog features in pwofc.com which the physical book may, or may not, have been able to replicate. They include:

  1. Selectable Sections
  2. Links between sections
  3. Links to background in-site material
  4. Links to external web sites
  5. Enlargement of text and images
  6. Categorisation changes
  7. Addition at will
  8. Updating at will
  9. Correction at will
  10. Device display variability
  11. Copying capability
  12. Visibility
  13. Accessibility
  14. Storage capability

Here’s how each of these features was dealt with in the physical book:

1. Selectable Sections

Blog feature: The Blog content was divided into 22 separate topics which appeared permanently as a list down the right hand side of the screen. Whatever content was displayed in the main part of the screen, any topic could be selected and traversed to from the list on the right.

Book capability: The Book has no equivalent functionality with such a combination of immediacy and accuracy; however, it does enable the pages to be flicked through at will; and the contents list at the front allows the page number of a specific topic to be identified and turned to.

2. Links between sections

Blog feature: At any point in the Blog content a link could be inserted to any other Blog Post (though not to specific text within that Post). The links were indicated by specific text being coloured blue.

Book capability: The same text is coloured blue in the Book. In order to provide an equivalent linking capability, the date of the Post being linked to and the page number it is on are included in brackets immediately after the blue text.

3. Links to background in-site material

Blog feature: At any point in the Blog content a link could be inserted to additional material held as a background file in the web site. The file could be of any type that could be displayed – an image, a Word document, a spreadsheet, a PowerPoint presentation, etc. The links were indicated by specific text being coloured blue.

Book capability: The same text is coloured blue in the Book; and the content concerned is included as an Appendix at the back of the book. To provide an equivalent linking capability, the number of the Appendix, its name, and its page number are included in brackets immediately after the blue text.

4. Links to external web sites

Blog feature: At any point in the Blog content a link could be inserted to a page in another web site. Sometimes the full web address was included in the Post, and at other times some descriptive text was provided. In both cases, however, the text was coloured blue and the relevant HTTP link was associated with it allowing the relevant web page to be immediately visited provided it still existed on the relevant web server.

Book capability: The same text is coloured blue in the Book. Where the HTTP link is provided in the Post then no further text is included in the book. However, where descriptive text is provided in the Post, then the full HTTP link is spelled out in brackets in the form, ‘see http.xxxxx’. To visit the page concerned a reader would have to type the HTTP address into a browser.

5. Enlargement of text and images

Blog feature: Browsers provide functionality to enlarge both text and images. This is of particular use to people who have poor eyesight; and to those wishing to see greater detail in some of the images included with the text.

Book capability: Books have no such integral functionality. Readers have to employ glasses or magnifying glasses to see enlarged text or images. I don’t know for sure whether greater detail and clarity can be achieved with browser magnification or with magnifying glasses on print; however, a comparison of the screen and the printed page version of one of the images (on page 713 of the Book) indicates that much definition is lost in the printing process.

6. Categorisation changes

Blog feature: Current topics in the Blog are listed under the heading ‘Journeys in progress’; whilst completed topics are moved under the heading ‘Journeys KCompleted’ (the inclusion of a K at the beginning of ‘Completed’ is simply to ensure that Completed Journeys sits lower down the alphabet than Journeys in Progress and therefore appears underneath the list of Journeys in Progress – I wasn’t prepared to waste further time figuring out how to achieve this in WordPress/html).

Book capability: The Book reflects the status of the web site at a particular point in time and therefore doesn’t need to have this capability. However, this really glosses over a key, fundamental, difference between a Blog and a Book. The blog is a dynamic entity – it can keep changing; whereas a Book has fixed contents. Of course, a Book’s contents can be added to by handwriting in additional material; and the contents of a Book can be read in different orders if appropriate signposting is provided. For example, this particular book could be read in the order that the Contents are listed, or in the order of the entries shown in the Timeline section – though this latter approach would be rather laborious since it would involve a lot of leafing through the Book. Overall, however, a Book simply does not have the Blog’s ability to be changed.

7. Addition at will

Blog feature: New Topics, new Posts within a Topic, and new material within a Post can be added to a Blog at will. In some circumstances this may be considered advantageous. However, it also means that readers cannot be sure that what they have already read is the latest material. There is no feature to highlight what is new.

Book capability: As described in item 6 above, a Book simply does not possess the Blog’s ability to be changed. However, readers can be secure in the knowledge that once they have read the Book they know what it contains and have finished what they set out to do.

8. Updating at will

Blog feature: The contents of a Post can be updated at will, though, as described in 7 above, this may leave readers feeling uncertain about the contents. There is no feature to highlight what has changed.

Book capability: As described in item 7 above, a Book simply does not possess the Blog’s ability to be changed; however, at least readers know that once they have read the Book they know what it contains and have finished what they set out to do.

9. Correction at will

Blog feature: Corrections of typos, poor grammar, and factual errors, can be made to the contents of a Post at will. There is no feature to highlight what has changed, though this perhaps is only of concern for the correction of factual errors – readers will not be interested in corrections to typos or poor grammar.

Book capability: Although corrections can be made by hand on the Book’s pages, the handwriting is likely to detract from the book’s appearance. As described in item 7 above, a Book simply does not possess the Blog’s ability to be changed. However, at least readers know that once they have read the Book they know what it contains and have finished what they set out to do.

10. Device display variability

Blog feature: The Blog may be read on a variety of different devices including a large screen, a laptop screen, a tablet, and a mobile phone. Not only are the sizes of the screens on each of these devices different; they are also likely to be employing different browser software to display the pages. These differences mean that a Blog may appear significantly different from one device to another. For this particular Blog, the list of topics down the right hand side is transposed to the bottom of narrower screens, which makes it significantly more difficult for users to navigate the material. Furthermore, users who are not familiar with the site and its contents may simply not be aware that the list of topics exists, and so may feel they are lost, without any signposts, in a morass of text.

Book capability: There is no such variability with the Book. It is what it is. What you see is what you get. Everyone who reads it gets the same physical experience. From this perspective the Book is considerably more reliable than the Blog.

11. Copying capability

Blog feature: All parts of the Blog can be copied and then pasted into other applications such as a Word document. There are limits as to how much can be copied at once – only the material in a single screen can be copied in one go. However, multiple screens can be copied separately and then stitched together in the receiving application.

Book capability: The Book’s pages can be copied and/or scanned individually or in pairs – though the way the book is assembled will probably preclude the pages being laid flat on the copy/scan platen which could result in a slightly blurred image towards the edge of the spine.

12. Visibility

Blog feature: The Blog is invisible in the huge black hole of the internet. It only becomes visible when people put it in their browser bookmarks, receive notifications of new entries, or see references to it in other electronic or paper documents.

Book capability: The Book will be very visible on a bookshelf in the house in which it will reside – more so because of its unusually large size – but it will only be visible to a very few people.

13. Accessibility

Blog feature: The Blog is accessible from all over the world provided that its web address is known or that individuals can find the address by using a search engine such as Google. However, this may not be so easy for a small scale web site with a title containing a very commonly used phrase – Order From Chaos (though it’s easier for those inquisitive enough to try the initials OFC).

Book capability: The Book will be immediately accessible to only those in the house where it resides (though this is an extreme case because only one copy of the book will be printed; normally, books have larger print runs and therefore would be accessible to more people). If other people get to know about the Book and want to read it, they would have to request its loan from the owner and make arrangements to obtain it.

14. Storage capability

Blog feature: The Blog takes up no physical space in its own right, and, being of a relatively small digital size, takes up negligible electronic space. However, a fee has to be paid every year to the organisation that hosts it, and the owner has to have a certain amount of technical knowledge to maintain it in its storage facility (to add new material, update versions of WordPress and its Plug-ins, and to review comments). A copy of the Blog can be obtained from the hosting site in the form of a large zip file. However, I’ve no idea if it would be possible to reconstitute this into a viable web site in a different computing environment, some years downstream.

Book capability: The Book takes up an appreciable amount of bookshelf space – more than usual due to its very large size. However, other than making space for it on the bookshelf and placing it there, there is nothing further to do to store it – and it will remain there intact for many years. Moving it to another bookshelf or other storage facility will not be difficult.

 

Given all the above comparisons, it seems that there is no clear answer to the question of whether the Book has been able to successfully replicate the Blog. The two entities are clearly different animals – the Blog is a dynamic vehicle accessed on a variety of devices; whilst the Book provides a point-in-time snapshot in a standard, well understood, format. The Book probably presents the material in a broadly comparable way, even if it facilitates cross referencing in a rather slower and more cumbersome way. The Blog is hugely more widely accessible and visible, but is much more complicated to store. Regarding longevity, instinct says that the Book’s chances are much better than the Blog’s over the coming decades.

Bookfold experiences

This morning I finished printing the 52 sixteen-page sections (four A4 sheets printed landscape and double-sided and then folded in half) and what a pile they make – just over 9 cm.

Unfortunately, the 100 gsm paper I was hoping to use would have shown the text through from the reverse of the page. Instead, I ended up with 130 gsm paper which is normally sold in large sheets, but which George Davidson’s supplier kindly cut down to A4. I got 250 sheets for £15 which is a really excellent price, and which gave me a cushion of 42 spare sheets in case things went wrong in the course of printing – which, of course, they always do. In this case, I had four hiccups:

  • It seems that images in PNG format upset the printing of documents using Bookfold page setup: they cause adjacent text to be printed on the other half of the page. I wasn’t aware of this problem and was only able to confirm that this was the cause when I replaced the PNG image with the same image in JPG format. The first time it happened I had to reprint all four pages. After that I was careful to check in Print Preview mode and was able to fix two other instances without wasting any paper.
  • I used about two and a half Canon 3550 ink jet cartridges in the course of the print, and, because the printer can’t provide an accurate indication of when the ink is about to run out, I elected to just print until the quality deteriorated. This happened twice, so on those occasions I lost two or more sheets of paper.
  • One of the Appendices was a document with a contents page in which the page numbers had been automatically generated. No problem had been apparent when I edited this page; however, when the page was printed it produced extra lines stating ‘Error! Bookmark not defined’ for the last 4 items on the contents list. This had a knock-on effect on all subsequent pages and extended the printing of this section onto a seventeenth page. Fixing the problem was simply a matter of removing all the page numbers from this Contents list and reprinting – however, I lost four pages in the process.
  • The final cause of paper wastage was typical human error: I decided I would print a later section while trying to fix one of the problems already mentioned; and the distraction of trying to find a solution caused me to lose track of where I was up to and to print the same section twice – another four pages down the swanee.

Anyway, despite these problemettes, I still ended up with 12 spare pages; but it is a salutary reminder that it is essential to have a good supply of spares (paper and ink) when embarking on a substantial print run.

In the course of this exercise I’ve learnt a lot more about the Bookfold Page Setup in Microsoft Word and how to manage its printing. As already mentioned, with Bookfold selected, Word enables you to create text on pages which are half the width of a landscape A4 page. It is possible to create all the pages in a single file and to use Page Setup to specify how many pages each section/booklet should have (each section/booklet is sewn separately into the book’s text block). However, I prefer to have my sections in separate files because a) I haven’t been able to get the printer I use to do duplex printing successfully when using the Bookfold Setup – the reverse pages are printed upside down (the solution is described below); and b) I find it easier to manage the edit and print processes in small chunks, despite the need to ensure continuity of text and page numbers from one file to another.
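To make the relationship between the 16 logical pages and the 4 physical sheets concrete, here is a small sketch of the page imposition involved. It assumes the standard saddle-fold arrangement that booklet printing normally uses; I haven’t verified that Word lays out the two halves of each side in exactly this order, so treat it as illustrative only.

```python
# Illustrative sketch: which document pages share each side of each A4 sheet in a
# saddle-folded 16-page section (assumed to match Word's Bookfold behaviour).
def bookfold_imposition(pages: int = 16):
    sheets = []
    for s in range(pages // 4):                # four document pages per A4 sheet
        front = (pages - 2 * s, 2 * s + 1)     # outer half, inner half
        back = (2 * s + 2, pages - 2 * s - 1)
        sheets.append((front, back))
    return sheets

for i, (front, back) in enumerate(bookfold_imposition(16), start=1):
    print(f"Sheet {i}: front {front}, back {back}")
# Sheet 1: front (16, 1), back (2, 15) ... Sheet 4: front (10, 7), back (8, 9)
```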

To print with the Bookfold Page Setup I’ve been using the standard settings that come up (Print All, A4 etc.) with the exception of specifying the following settings in the print dialogue boxes:

  • Manual duplex
  • Preview before printing
  • Orientation – Landscape
  • Print quality – High

On selecting ‘Print’, this arrangement results in a preview window being displayed which allows you to view the front side of each of the four pages. If there appears to be a problem, this is the point to Cancel out and take whatever remedial action is required. However, if all looks good, selecting ‘OK’ will result in the front sides of the four pages being printed. This is the point at which you need to enact the manual duplex procedure: take the four pages out of the printer and place the top page on one side at the bottom of a new pile. Take the next page and place it on top of the new pile. Do the same for each of the third and fourth pages. Then place the new pile, facing in the same direction, into the page feeder tray and press OK on the ‘remove the printout’ dialogue box that Word displays.

When the reverse sides of the pages have been printed take them from the printer and place the top page on one side at the bottom of a new pile. Take the next page and place it on top of the new pile. Do the same for each of the third and fourth pages. If you take the new pile and fold it over you should find that the 16 pages are in the correct order. I’m constantly amazed that this does actually work – but it really does.

Specifying ‘Preview Before Printing’ provides a valuable opportunity to check that all is well before committing to the print run. Unfortunately, the Preview at the start of the run only displays the front sides of the pages, so a problem on the reverse of the pages could waste a lot of paper. However, this can be avoided by checking the Preview of the reverse sides before setting the print run going. If a problem is spotted, the print can be cancelled and the problem fixed. Then, with the problem-free front pages in the paper feed tray, the whole print run can be started again but, this time, the front-side print should be cancelled in the main Print screen. However, the ‘remove the printout’ dialogue box will still be present, and pressing OK will result in the Preview and Print screens for the reverse pages being displayed. Accepting these print options will result in the reverse pages being printed on the back of the problem-free front pages.

Each of the sixteen-page sections took about 10 minutes to print provided no problems were encountered. After each section was produced it was carefully folded and the crease pressed in. Now the bookbinding work starts with the pricking out of the holes for the thread which will sew the sections together. It’s going to be fascinating to see how such a large number of pages can be turned into a viable book.

Principles, Assumptions, Constraints, Risks

The export utility to move the PAWDOC files out of the Fish document management system and into files residing in Windows Explorer folders has been completed by the Fish supplier, m-Hance. Broadly speaking, it will deliver files with a title which starts with the Reference Number; then has three spaces followed by the file description that I originally input to Fish (truncated after 64 characters); and ends with the date when the file was originally placed in Fish. I have already received the utility documentation, which provides full instructions on how to install and run it, and am confident I know what to do. So all that remains is for me to receive the utility (which I expect early next week) and to give it an initial test run on the PAWDOC collection in Fish.
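For later checking or automated processing, a title in that layout is straightforward to split back into its parts. The sketch below is hypothetical: the example title and, in particular, the date format are invented, since the utility’s exact date formatting isn’t described here.

```python
# Hypothetical sketch: split an exported file title into Reference Number,
# description and date, assuming the "Ref No + three spaces + description + date" layout.
def split_export_title(title: str):
    ref, rest = title.split("   ", 1)        # three spaces separate the Ref No from the rest
    description, date = rest.rsplit(" ", 1)  # the date is assumed to be the final token
    return ref, description, date

example = "PAW-BIT-Nov2014-33-11-1148   Notes on testing the export utility 2014-11-05"
print(split_export_title(example))
```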

I’ve already created a full draft of the Project Plan Description document and the Project Plan Chart, so the test run will inform me of any final changes that I need to make to the plan. After that, all that will be left to do is to fix an overall start date and then to insert the start and end dates for each task.

One part of the Project Plan Description that was of particular interest to construct was the section on Principles, Assumptions, Constraints and Risks. Since some of them really require expert digital preservation knowledge and experience – commodities which I don’t have – I’ve sent these out to my colleagues Matt Fox-Wilson, Jan Hutar, and Ross Spencer in the hope that they will let me know of any serious errors of judgement that I may have made. The text of the section I sent them is shown below:

Principles

The Principles below have been followed in the construction of this Project Plan, and will be applied throughout the performance of the project:

  • No action will be taken which will increase the cost or effort required to maintain the collection.
  • Backup, disaster recovery and process continuity arrangements are considered to be significant factors in ensuring the longevity of a collection and will therefore be included as an integral part of this preservation project plan.
  • All Preservation actions on individual document files will be undertaken after the files have been transferred out of Fish into stand-alone files in Windows folders, so that a substantial number of transferred documents will be subjected to detailed scrutiny thereby improving the chances of identifying any generic errors that may have occurred in the transference process.

Assumptions

The Assumptions below have been followed in the course of constructing this Project Plan.

It is assumed that:

  • The analysis of the files remaining in Fish after the ‘Export and Delete’ utility has been run, will take no longer than three weeks elapsed time.
  • There is no publicly available mechanism to convert Microsoft Project (.mpp) files earlier than version 4.0.
  • There is no publicly available mechanism to convert Lotus ScreenCam (.scm) files produced earlier than mid 1998.
  • Application and configuration files that were included in the collection do not need to be able to run in the future as they do not contain content information. The mere presence of the files in the collection is sufficient.
  • The zipping of a website is currently the easiest and most effective way of storing it and providing subsequent easy access.
  • Versions of Microsoft Excel from 1997 onwards are not in immediate danger of being unreadable and therefore require no preservation work. Earlier versions are best converted to the latest version of Excel that is currently possessed – Excel 2007.
  • Versions of Microsoft Word for Windows from 6.0/1995 onwards are not in immediate danger of being unreadable and therefore require no preservation work. Earlier versions, including those for Macintosh, are best converted to the latest version of Word that is currently possessed – Word 2007.
  • Versions of Microsoft PowerPoint from 1997 onwards are not in immediate danger of being unreadable and therefore require no preservation work. Earlier versions, including those for Macintosh, are best converted to the latest version of PowerPoint that is currently possessed – PowerPoint 2007.
  • None of the versions of HTML, including those pre-dating HTML 2.0, are in immediate danger of being unreadable; and therefore no preservation work is required on any of the Collection’s HTML files.

Constraints

This project may be limited by the following constraints:

  • Some of the disks and zipped files in the collection contain huge numbers of files of various types and organised in complex arrangements. To address the preservation requirements of these particular items could delay the project indefinitely. Therefore no attempt will be made to undertake preservation work on these items; but, instead, a note will be included in section 3 of the Preservation Maintenance Plan (Possible future preservation issues).
  • Disks that can’t be opened must remain in the Collection in physical form only.
  • No automated tools are available for undertaking conversions of large numbers of files; and the use of macros has been discounted as being too error-prone and risky. Therefore, all the Preservation work defined in this Project Plan has to be undertaken manually by a single individual.

Risks

There is a risk that:

  • The Zamzar service may be unable to convert some of the files submitted to it, despite tests having been completed successfully.
    Mitigation: record the need to take further actions on specified files in the future, in section 3 of the Preservation Maintenance Plan.
  • The analysis of the files remaining undeleted after the Fish file export has taken place may throw up unexpected issues and may take much longer than anticipated.
    Mitigation: after two and a half weeks’ work on this activity, the issues will be recorded in a document, and the need to address the issues in the future will be recorded in section 3 of the Preservation Maintenance Plan.

The slog of the blog book

I’m pushing ahead with the book of the blog. Having established a cut-off date of the end of 2017, I made sure that I cleared away two of my long-standing journeys (OFC and Roundsheet) by the deadline, and ended up with about 350 pages of blog posts. That’s when the grind really started and I had to go through all of them, separating them into 16-page sections ready for bookbinding. As I went through, I was ensuring that the background documents accessed from links in the blog were reproduced in full in an Appendix. This was a major exercise which eventually produced a further 465 pages – all of which in their turn had to be separated into 16-page sections.

I now have 52 separate sixteen-page sections, and another final section which is growing as I edit each section one last time and assemble the index and the timeline (a list of post titles in date order). In this final edit I’m also ensuring that the cross-post links and the links to Appendix documents are all consistently formatted and include the correct page number of the target elsewhere in the book. I decided to do this because it is the effortless ability to jump between links, and the absence of any particular space constraints, that distinguish electronic systems from paper books – and I have taken advantage of both features extensively in the blog. So, when I decided to reproduce the blog in book form, I was determined to try to match those capabilities to the greatest extent I could. Hence, ALL the background documents have been included; and every cross reference includes a page number that goes straight to the relevant content. The only links that don’t have a page number reference are those to material elsewhere on the net which is produced by other people – I rationalised that a blog book should only include material produced by the owner of the blog.

The inclusion of linking page numbers and the creation of the index and timeline are making the final edit a slow process which may take a couple of weeks. In the meantime, I’ve been thinking about the type of paper I should use to print the book. Having assembled all the text, I can see that, if I used the same paper as I used for the ‘Sounds for Alexa’ book, the text block would be 5.5 times the thickness of the Sounds book – some 8.25 cm – a huge tome. The Sounds book was printed on 125 gsm paper, so I tried looking on the net for some thinner bookbinding paper but had no success – specialist A4 bookbinding papers sold in packs, as opposed to single sheets, seem to be few and far between and I didn’t come across any that were thinner than 125 gsm. I discussed this with George Davidson, my tutor on the Bookbinding course at the Bedford Arts and Crafts Centre, and he said he would investigate a 100 gsm paper with one of his regular suppliers and suggested that it might be feasible to buy a paper in larger sheets and cut them down to A4. In the meantime, I will continue to plough through the final editing of the 50+ sections.

A cursory tour of web archiving

Web archiving isn’t a simple proposition because not only do web sites keep changing, but they also have links to other sites. So, I guess I should have expected that my search for web archiving tools would come up with a disparate array of answers. It seems that the gold-plated solution is to pay a service such as Smarsh or PageFreezer to periodically take a snapshot of a website and to store it in their cloud. The period is user-definable and can be anything from every few hours to every month or year. Smarsh was advertising its basic service at $129 a month at the time of writing.

A more basic, do-it-yourself facility is the Unix WGET command-line utility, for which a downloadable Windows version is available. This enables all sorts of functions to be specified, including downloading part or all of a site, the scheduling of downloads, etc. However, as you might expect with a Unix tool, it requires the user to input programming-type commands and to be aware of a large number of specifiable options.
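As an illustration of the sort of command involved, here is a hedged sketch that drives wget from Python; it assumes a Windows build of wget is installed and on the PATH, and the URL is a placeholder.

```python
# Illustrative sketch: mirror a web site with wget, invoked from Python.
import subprocess

subprocess.run([
    "wget",
    "--mirror",           # recurse through the site and keep timestamps
    "--convert-links",    # rewrite links so the local copy is browsable offline
    "--page-requisites",  # also fetch images, stylesheets and other page assets
    "--no-parent",        # don't wander above the starting directory
    "https://www.example.com/",
], check=True)
```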

More limited services such as Archive.is are available to capture, save and download individual pages – and some of these are free to use.

Regarding formats in which web archives can be saved, the Library of Congress’ preferred format is the ISO WARC (Web ARChive) file format. However, I was unable to find any tools or services which purport to store files in this format: it sounds like WARC is being used in the background by large institutions who are trying to preserve large volumes of web content. Interestingly, the web hosting service I use for this blog actually offers backups in various forms of zip files; and indeed, it is zip files that I have used in the past to store web sites that are included in my document collection.
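For completeness, creating such a zip from a locally downloaded copy of a site takes only a line or two, as in this sketch (the paths and archive name are hypothetical).

```python
# Hypothetical sketch: zip up a downloaded copy of a web site for storage in the collection.
import shutil

# Produces pwofc-backup-2018.zip containing everything under the backup folder.
shutil.make_archive("pwofc-backup-2018", "zip", root_dir="C:/WebBackups/pwofc.com")
```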

Based on this very quick and certainly incomplete tour of the topic of Web Archiving, I’ve decided I won’t be trying to do anything fancy or different in the way I use technology to archive my old web sites. The zip format has worked well up to now and I see no reason to change that approach. As for a non-technological solution to web archiving, the notion of creating and binding a physical book of the first five years of this OFC web site is becoming more and more attractive. There’s something very solid and immutable about a book on a bookshelf. I’m definitely going to do that, and have set the end of 2017 as the cut-off date for its contents – I’m busy trying to make sure that the Journeys are all at appropriate stages by the 31st December.