Getting an HTTrack Copy

HTTrack is a free-to-use website copier. Its web site provides the following description: "It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the 'mirrored' website in your browser, and you can browse the site from link to link, as if you were viewing it online."

I downloaded and installed HTTrack very quickly and without any difficulty, then set about configuring the tool to mirror pwofc.com. This involved simply specifying a project name, the name of the web site to be copied, and a destination folder. The Options were more complicated and, for the most part, I just left the default settings before pressing 'Finish' on the final screen. There was an immediate glitch when I discovered that I had not provided the full web address (I'd specified pwofc.com instead of http://www.pwofc.com/ofc/); but having made that change, I pressed 'Finish' again and HTTrack got on with its mirroring. Some 2 hours 23 minutes and 48 seconds later, HTTrack completed the job, having scanned 1827 links and copied 1538 files with a total size of 212 MB.
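For anyone who prefers the command line to the Windows interface, HTTrack can be driven the same way from a terminal. A minimal sketch (the destination folder and filter pattern are illustrative, not the settings I actually used):

```shell
# Mirror the site into ./pwofc-mirror, following links only within pwofc.com
# (-O sets the output path; the +filter is a scan rule limiting the crawl)
httrack "http://www.pwofc.com/ofc/" -O ./pwofc-mirror "+*.pwofc.com/*"
```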

The mirroring had produced seven components: two folders (hts-cache and www.pwofc.com) and five files (index, external, hts-log, backblue and fade). The hts-cache folder is generated by HTTrack to enable future updates to the mirrored web site; the external file is a template page for displaying external links which have not been copied; backblue and fade are small gif images used in such templates; and the log file records what happened in the mirroring session. The remaining www.pwofc.com folder and index file contain the actual contents of the mirror.

On double clicking the Index file, the pwofc.com home page sprang to life in my browser looking exactly the same as it does when I access it over the net. As I navigated around the site the internal links all seemed to work and all the pictures were in place, though the search facility didn’t work. External links produced a standard HTTrack page headed by “Oops!… This page has not been retrieved by HTTrack Website Copier. Clic to the link below to go to the online location!” – and indeed clicking the link did take me to the correct location (I believe it is possible to specify that external links can also be copied by setting the ‘Limit’ option ‘maximum external depth’ to one, but my subsequent attempt to do so ended with errors after just two minutes; I abandoned the attempt). The only other noticeable difference was the speed with which one could navigate around the pages – it was just about instantaneous. From this cursory examination I was satisfied that the mirror had accurately captured most, if not all, of the website.

An inspection of the log file, however, identified that there had been one error – “Method Not Allowed (405) at link www.pwofc.com/ofc/xmlrpc.php (from www.pwofc.com/ofc/)”. According to the net, a PHP file ‘is a webpage that contains PHP (Hypertext Preprocessor) code. … The PHP code within the webpage is processed (parsed) by a PHP engine on the web server, which dynamically generates HTML’. Interestingly, I wasn’t aware of having any content with such characteristics, but, on closer inspection of the files in my hosting folder, I found I had lots of them – probably hundreds of them. I tried to figure out what the error file related to but had no clue other than its rather striking creation date – 23/12/2016 at 00:00:00 – the same date as several of the other PHP files. I had not created any blog entries on that day, so my investigation ground to a halt. I don’t have the knowledge to explore this, and I’m not prepared to spend the time to find out. My guess is that the PHP files do the work of translating the base content stored in the SQL database into the structured web pages that appear on the screen. I’m just glad that there was only one error – and that its occurrence isn’t obviously noticeable in the locally produced web pages.
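For what it's worth, xmlrpc.php is almost certainly the blog software's XML-RPC endpoint, an interface which accepts only POST requests; a mirroring tool fetching it with an ordinary GET would therefore be refused, which would explain the 405. A toy simulation of that behaviour (the handler below is purely illustrative, not the actual server code):

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy endpoint that, like an XML-RPC handler, refuses GET requests.
class PostOnlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_error(405, "Method Not Allowed")

    def do_POST(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PostOnlyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/xmlrpc.php"
status = None
try:
    urllib.request.urlopen(url)  # a plain GET, as a crawler would issue
except urllib.error.HTTPError as e:
    status = e.code

server.shutdown()
print(status)  # the same 405 code that appeared in the log
```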

The log file also reported 574 warnings which came in the form of 287 pairs. A typical example pair is shown below:

19:31:13   Warning:   Moved Permanently for www.pwofc.com/ofc/?p=987
19:31:13   Warning:   File has moved from www.pwofc.com/ofc/?p=987 to http://www.pwofc.com/ofc/2017/06/29/an-ofc-model/

I tried to find a Help list of all the Warning and Error messages in the HTTrack documentation, but it seems that no such list exists. Instead there is a Help forum which has several entries relating to such warning messages, but none that I could relate to the occurrences in my log. As far as I can see, all of the pages mentioned in the warnings (in the above instance the page in question is 'an-ofc-model') have been copied successfully, so I decided that it wasn't worth spending any further time on it.
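Since the log lines follow a regular pattern, it is easy enough to tally the warnings and errors oneself. A minimal sketch in Python (the line format is inferred from the excerpts quoted in this post, and may vary between HTTrack versions; the sample text is illustrative):

```python
import re

# Hypothetical helper to tally an HTTrack hts-log; the line format is
# inferred from log excerpts and may differ between HTTrack versions.
LINE_RE = re.compile(r"^(\d{2}:\d{2}:\d{2})\s+(Warning|Error):\s+(.*)$")

def summarise_log(text):
    """Count Warning/Error lines and collect their messages."""
    counts = {"Warning": 0, "Error": 0}
    messages = []
    for line in text.splitlines():
        match = LINE_RE.match(line.strip())
        if match:
            _time, level, msg = match.groups()
            counts[level] += 1
            messages.append((level, msg))
    return counts, messages

sample = (
    "19:31:13  Warning:  Moved Permanently for www.pwofc.com/ofc/?p=987\n"
    "19:31:13  Warning:  File has moved from www.pwofc.com/ofc/?p=987 to "
    "http://www.pwofc.com/ofc/2017/06/29/an-ofc-model/\n"
    "19:45:02  Error:  Method Not Allowed (405) at link "
    "www.pwofc.com/ofc/xmlrpc.php (from www.pwofc.com/ofc/)\n"
)

counts, messages = summarise_log(sample)
print(counts)  # {'Warning': 2, 'Error': 1}
```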

All in all, I judge my use of HTTrack to have been a success. It has delivered me a backup of my (relatively simple) site which I can actually see and navigate around, and which can be easily zipped up into a single file and stored.

A Backup Hosting Story

In the last few days I’ve been exploring making backup copies of this pwofc Blog using the facilities provided by the hosting company that I employ – 123-Reg. It was an instructive experience.

When I first set up the Blog in 2012 I had deliberately decided to spend a minimal amount of time messing around with the web site and to focus my energies on generating the stuff I was reporting in it. Consequently, most of my interactions with the hosting service had involved paying my annual fees, and I had little familiarity with the control panel functions provided to manage the web site. In 2014, I had made some enquiries about getting a backup, and the support operation had provided a zip file which was placed in my own file area. Since then I had done nothing else – I think I had always sort of assumed that, if something went wrong with the Blog, the company would have copies which could be used to regenerate the site.

However, when I asked the 123-Reg support operation about backups a few days ago, I was told that the basic hosting package I pay for does NOT include the provision of backups – and the company no longer provides zip files on request: instead, facilities are provided to download individual files, to zip up collections of files, and to download and upload files using the file transfer protocol FTP. Of these various options, I would have preferred to just zip up all the files comprising pwofc.com and then to download the zip file. However, the zipping facility didn’t seem to work and, on reporting this to the 123-Reg Support operation, I was told that it was out of action at the moment… So, I decided to take the FTP route.

I duly downloaded the free-to-use FTP client, FileZilla, set it up with the destination host IP Address, Port No, Username and Password, and pressed ‘Connect’. After a few seconds a dialogue box opened advising that the host did not support the secure FTP service and asking if I wanted to continue to transfer the files ‘in clear over the internet’. Naturally I was a little concerned, closed the connection, and asked 123-Reg Support if a secure FTP transfer could be achieved. I was told that it could be and was given a link to a Help module which would explain how. This specified that a secure transfer requires Port 2203 to be used (it had previously been set to 21), so I made the change and pressed ‘Connect’ again. Nothing happened. A search of the net indicated that secure FTP requires a Port No of 22, so I changed 2203 to 22 and, bingo, I was in.
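For the record, SFTP is a different protocol from plain FTP: it runs inside an SSH session, which is why it uses SSH's default port 22 rather than FTP's port 21. The FileZilla settings that finally worked look something like this (the values are placeholders, not my actual credentials):

```
Protocol:   SFTP - SSH File Transfer Protocol
Host:       <hosting IP address>
Port:       22
Logon Type: Normal (<username> / <password>)
```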

FileZilla displays the local file system in a box on the left of the screen, and the remote file system (the pwofc.com files in this case) in a box on the right. Transferring the pwofc files (which comprise a folder called 'ofc', a file called 'index', and a file called '.htaccess') was simply a matter of highlighting them and dragging them over to a folder in the box on the left. The transfer itself took about 12 minutes for a total file size of 246 MB.

Of course, the copied files on my laptop are not sufficient to produce the web pages: a fully functioning web site also requires the SQL database which manages them. If you double click the 'index' file it just delivers a web page with some welcome text but no links to anything else. Hence, these backup files are only of use for uploading back to the original hosting web site, so that the blog can be resurrected if the original files become corrupted or destroyed. I guess they could also, in principle, be used to set up the site on another hosting service, though I have no experience of doing that.
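It follows that a complete backup really needs an export of the database as well as the files. Hosting control panels commonly offer this through a tool such as phpMyAdmin; where shell access is available, a sketch of the equivalent command would be something like the following (the host, user and database names are placeholders, since I don't know what my hosting service actually uses):

```shell
# Export the blog's database to a file that can be re-imported later
# (-p prompts for the password; the output is redirected to a .sql file)
mysqldump -h <db-host> -u <db-user> -p <db-name> > pwofc-database-backup.sql
```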

Of course, these observations relate only to one customer's limited experience of one specific hosting service and may or may not apply generally. However, they do indicate some general points which Blog owners might find worth bearing in mind:

  • Don’t assume that your hosting service could regenerate your Blog if it became corrupted or was destroyed – find out what backup facilities they do or don’t provide.
  • Don’t assume that all the functions provided by your hosting service work – things may be temporarily out of action or may have been superceded by changes to the service over the years.
  • Remember that a backup of the website may be insufficient to regenerate or move the Blog – be clear about what additional infrastructure (such as a database) will be required.
  • If you want to be able to look at the Blog offline and independently of a hosting service, investigate other options such as creating a hardcopy book, or using a tool such as HTTrack (which is discussed in the following entry).

ST’s Alternative Approaches

About 6 weeks ago (on 6th March), Sara Thomson of the Digital Preservation Coalition kindly spent some time on the phone with me discussing the archiving of web sites. I wanted to find out if there were any other solutions to the ones I had stumbled across in my brief internet search some 16 months ago. Sara suggested three approaches which were new to me and described them as follows in a subsequent email:

  1. UK Web Archive (UKWA) ‘Save a UK Website’: https://beta.webarchive.org.uk/en/ukwa/info/nominate Related to this – two web curators from the British Library (Nicola Bingham and Helena Byrne) presented at a DPC event last year discussing the UKWA, including the Save a UK Website function. A video recording of their talk along with their slides (and the other talks from the day) are here: https://dpconline.org/events/past-events/web-social-media-archiving-for-community-individual-archives
  2. HTTrack: https://www.httrack.com/  I gave a brief overview of HTTrack at that same DPC event last year that I linked to above. I have also included my slides as an attachment here – the HTTrack demo starts on slide 15.
  3. Webrecorder: https://webrecorder.io/ by Rhizome. Their website is great and really informative, but let me know if you have any questions about how it works.

Shortly after this, I followed the link that Sara had provided to the UKWA nomination site and filled in the form for pwofc.com. On 14th March I got a response saying that the British Library would like to archive pwofc.com and requesting that I fill in an on-line licence form, which I duly completed. On 16th March I decided to explore the contents of the UKWA service and found that it collects 'millions of websites each year and billions of individual assets (pages, images, videos, pdfs etc.)'. I started looking at some of the blogs. The first one I came across was called Thirteen days in May and was about a cycling tour – but it seemed to lack some of the photos that were supposed to be there. The next two I looked at, however, did seem to have their full complement of photos; and one of them (called A Common Reader) had a strangely coincidental entry about 'Instapaper', which provides what sounds like a very useful service for saving web pages for later reading. It looks like the UKWA does an automated trawl of all the websites under its wing at least once a year, so I guess that, as a backup, it should never be more than a year out of date.

An hour after completing this exploration, I got an email confirming that the licence form had been submitted successfully and advising that the archiving of pwofc.com would proceed as soon as possible, but that it may not be available to view in the archive for some time due to the many thousands of web sites being processed and the need to do quality assurance checks on each. Since then, I've been checking the archive every now and again, but pwofc.com hasn't emerged yet. When it does, it'll be interesting to see how faithfully it has been captured.

Regarding the other two suggestions that Sara made, I've decided to discount Webrecorder, as it entails visiting every page and link in a website, which would just take too much time and effort for pwofc.com. However, I'm going to have a go at using HTTrack, and I'm also going to try to get a backup of pwofc.com from my web hosting service. Once I've tried all these various archiving solutions, there'll be an opportunity to compare the approaches and reach some conclusions.

The PAWDOC Preservation story

In May 2018 the inaugural digital preservation work on the PAWDOC collection was completed. The story of the work that was done, and the lessons that were learnt, are documented in the following paper which can be downloaded from this site subject to Creative Commons conditions:

The Application of Preservation Planning Templates to a Personal Digital Collection

Instances of the populated preservation planning templates that were used to control the work are also provided:

A summary of the work done and the lessons learned has been published as a Blog Post on the Digital Preservation Coalition (DPC) website.

The preservation planning templates were updated as a result of insights gained in the work, and these are available as embedded files in the above 'Application of Preservation Planning Templates' paper and also on the DPC website.

Getting started with the Findings

Having initiated a preservation planning regime for the collection, and having moved it onto the Windows 10 platform, I’m feeling that the only remaining things I need to do with it are to find it a permanent home and to write up the findings of this lengthy experiment. I took a step forward on the latter activity earlier this week when I had a very interesting phone call with Peter Tolmie, a UK Ethnographer based in the School of Information Systems and New Media at the University of Siegen in Germany. I was given Peter’s name by Richard Harper when I asked if he knew of anyone who is knowledgeable about how professionals manage their documents and who would be interested in working on a wrap-up paper with me. An initial phone call with Peter last Thursday indicated that we have a great many common interests – I found it a very stimulating conversation indeed. I’ve sent Peter some documents describing the collection and we’ve agreed to talk again on 21st March.

Regarding the search for a home for the collection (which is documented in various posts in this Blog going back to 2015), my current efforts lie in conversations I’m having with Dr James Peters, the Archivist of the National Archive for the History of Computing at Manchester University, who has kindly agreed to help me in my search. In a phone call last month, James told me he was waiting for a response from someone he had emailed, but that, if there was no interest from that source, he could issue a note to a relevant mailing list on my behalf. If it is to be the mailing list route, I’m hoping to get James’ advice on what needs to go in the note.

March: Long and Plans

It looks like the blog post describing the Digital Preservation work undertaken last year on the PAWDOC collection will be published next month on the DPC website. It will refer to the full paper describing the work in more detail, which will be published here within pwofc.com. At the same time, the preservation planning document templates will be replaced by updated versions on the DPC website. The publication of all these materials will be a fitting end to the preservation planning activities that are described in previous entries in this site. However, there will still be one thing to do before the topic can be considered complete, and that is to review the effectiveness of the Preservation Maintenance Plan template when an instance of it is used in the PAWDOC preservation maintenance exercise scheduled for September 2021.

Backup Bolstering

Backing Up has always been an essential part of maintaining my personal document collection; but it was never something I enjoyed – I did it out of a fear of loss. And I have, indeed, experienced loss: in 1996 one of the MO Disks I was using became corrupted and I lost a number of files; in 2004 my laptop was stolen and my whole document collection had to be re-instated from the backups; and in 2017 I had a system crash and, although the repair company was able to recover all my data in that instance, that might not always be the case.

When I was working, I used to take a backup of the more recently created material every month or so, as well as complete versions of the whole collection as it kept growing. This produced multiple copies on many disks, which increased my confidence in being able to replace any file that got corrupted or mislaid, but which required managing in its own right as the number of disks grew. As time went by I added other backup mechanisms including storing a copy on another laptop in the household, storing a copy on disk in a relative's house located many miles away, and storing a copy on disk at my son's house in New Zealand.

After I retired I tried to put the backing up on a more orderly basis and finally fixed on five different types of backup – Cloud, copy on another laptop in the house, local hard disk, remote (in the UK) hard disk, and New Zealand copy on memory stick. I scheduled backups in my iPad calendar for each of these (though, for the Cloud, it was more a matter of checking that it was working and that I could recover from it). However, the iPad calendar doesn’t have a To Do mechanism and I wasn’t looking at the calendar anything like as often as I used to at work. Consequently, I kept missing scheduled backup activities – and, in most cases, didn’t realise I’d missed them; and when I did realise I just kept putting off what was an annoying extra thing to do. One answer would have been to get a To Do app – but I’d had enough of To Dos at work.

The opportunity to come up with an alternative approach, came when I created a Users’ Guide for my document collection in May 2018. I structured the Guide so that it had a Quick Reference Guide to the Collection on the front page, and a Backup Quick Start Guide on the back page. The latter listed the different types of backups to be performed and provided cells to be filled in with a date when that particular backup had been done, as shown below.

This was a definite improvement over dates dotted about a calendar, but unfortunately the schedule was still hidden because the Users’ Guide was tucked away inside an archive storage box.

When I replaced my Windows 7 laptop with a Windows 10 version in December 2018, I decided to review all my backup arrangements again and to try to overcome this lack of visibility. The answer turned out to be really quite simple: I have a display frame for the latest issues of UK postage stamps on the wall in front of where I sit at my desk. So, I created a table with columns for when backups have been done and when they are due; and this table now resides in the display frame as shown below.

I have a clear view of when the next backups are due every time I sit down at my desk. The next time I miss a backup it’ll be because I just don’t enjoy doing them, not because of blissful ignorance!
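Should the display-frame table ever need automating, the due-date arithmetic behind it is simple enough to script. A minimal sketch (the five backup types are those described earlier, but the intervals are illustrative assumptions, not my actual schedule):

```python
from datetime import date, timedelta

# Backup types as listed earlier; the intervals are illustrative assumptions.
INTERVALS = {
    "Cloud (check/recover test)": timedelta(days=30),
    "Other laptop in the house": timedelta(days=30),
    "Local hard disk": timedelta(days=90),
    "Remote (UK) hard disk": timedelta(days=180),
    "New Zealand memory stick": timedelta(days=365),
}

def next_due(last_done):
    """Map each backup type's last-done date to its next due date."""
    return {name: done + INTERVALS[name] for name, done in last_done.items()}

def overdue(last_done, today):
    """Return, sorted, the backup types whose next due date has passed."""
    return sorted(name for name, due in next_due(last_done).items() if due < today)

# Example: everything last done on 1st January, checked on 1st April
last_done = {name: date(2019, 1, 1) for name in INTERVALS}
print(overdue(last_done, date(2019, 4, 1)))
```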

Portfolio boxes for physical objects

This is an example of how a multi-purpose portfolio case can be constructed to store, display and describe physical mementos and other objects.

About 40 years ago I acquired a paperback copy of the I Ching – the Chinese Book of Changes which provides a guide to divination or prediction of the future. The inside cover of this book notes that it was written in 1000 BC, is probably the oldest book in the world, and is the most powerful distillation of Chinese wisdom. The divination method is to hold 50 sticks upright in a bundle and to allow them to fall randomly, and the text assists the reader to interpret the resulting positions of the sticks.

The book instructs that the fifty divining sticks should be yarrow stalks stored in a lidded receptacle which is never used for any other purpose; so I duly collected yarrow stalks from a rural roadside verge and placed them in a lidded terracotta jar. I only used the I Ching a few times – and still have the notes I made on two of those occasions. The book ended up on a bookshelf, and the terracotta jar mostly resided on the bedroom window sill of the various houses I lived in.

In 2018, as part of my effort to eliminate all paperbacks from my bookshelves, I decided to convert the paperback to a hardback and, at the same time, to unite the sticks with the book. I started by rebinding the paperback as a hardback, including the two sets of notes at the back of the book. The inside sleeves of the cover were used to document the story of the collection of the yarrow stalks, my use of the I Ching, and the creation of a folding portfolio case for both.

Then a case for the book was created as shown below.

Next a box for the sticks was created with thin magnets in the flap and in the side of the case, to secure the flap.

Then a surrounding cover was created onto which the case and the box were glued. Thin magnets on the top of the case and the top of the box help to keep the structure in place.

Finally a dust jacket was created and the story of where the yarrow stalks came from and where they had previously resided, with photos, was documented on the back cover.

Clear Blue Calm Water

Unfortunately, the paper summarising the PAWDOC digital preservation work has not progressed in the last few months because the DPC has too much work on at the moment to deal with it. I’m hoping this might change in the early part of 2019.

In the meantime, I have just completed another important aspect of digital preservation work on the PAWDOC collection. I have long been concerned that the collection resides on a laptop running Windows 7 – an operating system for which Microsoft have said they will withdraw support in 2020. At the same time, the battery in my existing laptop no longer functions, requiring it to be mains-connected at all times. So, about a week ago I acquired a Chillblast Leggera i7 Ultrabook with 8 GB of RAM and a 1 TB Samsung Solid State Drive (SSD). I listed a set of conversion activities and started working my way through attaching peripherals (keyboard, mouse, scanner) and loading software (anti-virus, scanning, FileMaker, MS Office, Cloud backup). All went well until, nearly at the end, I hit a wall: connecting the external Dell 2405FP monitor which I bought in 2006, and which has worked fine ever since with at least three different laptops.

I had planned to use the laptop's HDMI port and had acquired an HDMI to DVI adapter to enable an HDMI cable to be plugged into the Dell monitor's DVI port. Unfortunately, the connection only worked for a few minutes. After that the monitor's DVI interface went into Power Save Mode and, no matter what I tried, I couldn't get it out of that mode. I then tried searching the net for a fix and discovered a huge number of entries about this problem for several different models of Dell monitor, stretching back to 2005, with no definitive fix emerging. I decided to try using the VGA port on the Dell monitor and duly purchased, via Amazon next day delivery, an HDMI to VGA converter. Unfortunately, this simply had a similar effect, putting the monitor's VGA interface into Power Save Mode.

However, a ray of hope did appear when I plugged the VGA lead back into my old laptop: the Dell monitor immediately came out of Power Save Mode and the screen image was displayed. I was able to bring up the monitor menu while it was attached to the old laptop and reset the monitor to factory settings – but this didn't make any difference; every time I attached the new laptop's HDMI port to either the monitor's DVI or VGA interfaces, they returned to Power Save Mode.

My last-ditch effort to resolve the problem was to try the laptop's Mini DisplayPort (MD) port, and, in a state of some depression and resignation, yesterday I duly purchased, via Amazon same day delivery, an MD to VGA adapter plug. It cost £5.99, was ordered around 9am and was delivered around 8pm (really…). With the laptop switched off, I put the adapter into the laptop's MD port and plugged in the monitor's VGA cable. The buttons on the monitor went orange (signifying Power Save Mode) and I thought, 'here we go again', and switched on the laptop; then, after a few seconds, I saw a bright light out of the corner of my eye and, blow me down, there was the laptop screen on the monitor! I used it for a while and then, trepidatiously, tried closing the laptop lid – and it kept on displaying on the monitor. Later, I shut the laptop down and subsequently fired it up again – but still no problem; up it came on the monitor. So it looks like this is now working OK. Phew.

This morning I reorganised my physical desktop and placed the new smaller laptop in a new position immediately next to my scanner so that the problem of making the scanner cable reach the laptop port was eliminated. With the conversion process complete and my desk back in some sort of order, I began to feel more in control of things and much more relaxed. I had sailed into clear blue calm water in the sheltered bay of an up to date operating system and a modern laptop.

From Nottingham to Manchester

Last month I heard back from the Keeper of Manuscripts and Special Collections at the University of Nottingham, Mark Dorrington, who said that my collection might not be a good fit with their archives and that, in any case, they were not geared up to deal with such a large digital collection. However, he did suggest trying the National Archive for the History of Computing at the University of Manchester and provided a link to its web page.

I have, in fact, already been round the houses with the University of Manchester Library; however, that was not specifically in relation to this particular archive, and it was before I had done any digital preservation work on the collection. So, today I tried making contact with someone specifically concerned with this archive and was told that the archivist for this and a number of other special collections is Dr. James Peters. I duly emailed him with the following opening para: "Dr. Peters, I'm contacting you as the Archivist in charge of the National Archive for the History of Computing (NAHC). I have a collection of documents which reflect the development and application of computers over the last 40 years, and would be grateful for your advice as to whether the collection has any merit and where it could be placed." I followed this with a description of the background to the collection and of its contents. I'm hoping that my rather indirect approach on this occasion might engender some discussion rather than the outright rejection which I'm becoming used to.