My Life in a Book

In December 2024, I was given a rather unusual birthday present by my daughter and her husband: a subscription to an internet service called My Life in a Book. The service sends you a different question to “help you reflect on key moments in your life” every Monday for a year, and provides a web site on which you can write your answers (which can include images as well). After 52 weeks you tidy up your manuscript, select and edit a cover design, and then press ‘Print’ to have your stories “beautifully bound into a cherished keepsake book”. The gift included one physical copy of the book.

After receiving an introductory email advising me of the gift, I duly received my first question the following Monday. It read “Hi Paul. Your question of the week is ready” and provided a button to take me to the writing web site and the question “What are your favourite childhood memories?”. This was the pattern for the following 12 months (I believe that the person who buys the subscription selects the questions from a pick-list). Sometimes I was too busy to answer straight away, or simply wanted some time to think about my answer; in these cases I either did not respond to the email until I was ready, or accessed the question, inserted some placeholder text, and labelled it as ‘draft’. I found some of the questions really quite hard to answer; for example, “What were your greatest fears about becoming a parent?” or “How do you navigate decision-making when confronted with uncertainty or fear?”. Initially, I dealt with these by selecting the option to skip a question but, as the year wore on, I thought better of it on the assumption that any honest answer – even along the lines of ‘I don’t know’ – would be worthwhile. In any case, a skipped question can simply be replaced using the facility to create your own questions at will (the answers are essentially text blocks which don’t have to relate to a question – they can be sections of any kind: Foreword, Contents, Introduction, Index, etc.).

The editing facilities in the writing platform have clearly been designed to help people unfamiliar with word processing systems to produce their answers: the margins, font, and font size are all predefined, with no choice offered. However, bold and italics can be specified for selected text. The facilities to edit imported images are also limited: there are three size options – small, medium and large – and the ability to crop. This overall limitation of choice is quite refreshing, relieving the writer of a whole raft of decisions.

The final version of the book is produced as a PDF file with page numbers, which can be reviewed at will. Unfortunately, this disconnect between the editing facility and the PDF version means that, while you are writing, you can’t be certain whether an imported image will fit onto the bottom of a page or will get moved to the following page, leaving a large gap. To check, the PDF has to be generated which, in December 2025, for my book of around 230 pages, was taking at least 40 seconds and sometimes a lot more (I’m guessing that response time is dependent on system load and that a lot of subscriptions were coming due around Christmas). A further annoyance was that, for a reason I don’t understand, the changes I made to image size didn’t seem to appear until I had generated the PDF version a second time. Hence, ascertaining whether an adjusted image was going to fit onto the bottom of a page was taking me around a minute and a half or more – very frustrating, especially on finding that the adjustment was insufficient and the image was still being pushed to the following page. A fourth option – “Fit at bottom of current page” – to go with the small, medium and large image sizes would really improve the system’s usability.

Other than the issue described above, I found the system generally easy to use and flexible enough to include whatever content you want. For example, although each answer comes with a suggested heading, the user can change the heading text at will. I used that ability to add numbers to each heading and then created a Contents page (which is not automatically generated). I also created a Preface section.

Once the content of the book is complete, the web site guides you through a completion process which first advises detailed checking of the contents using the preview PDF (essential, as bitter experience has taught me that it is almost impossible to spot and remove all the typos, grammatical errors and factual mistakes in the draft of a book). I was then asked to choose a template book cover from several dozen designs, and to supply a title, author and image, which were automatically included in the template. When I was satisfied with the book cover, I was taken into the ordering process where I specified where I wanted the book to be sent and the number of copies I wanted (whoever bought you the gift will have paid for one or more printings; however, additional copies can be purchased). That was the 10th of December; then it was time to sit back and wait for delivery. I received confirmation that the book had been printed and shipped just two days later, and it was delivered by Royal Mail five days after that, on 17 December – which I thought was an impressively fast turnaround.

The book itself is around A5 size and looks quite good. The text block appears to be secured to the case only by the end papers, so I’m not sure how long-lasting the joint will be – but the book does open satisfactorily. The text is clear and an easy-to-read 12pt in size; the images, though perfectly adequate, are less than pin-sharp. However, there was one thing that was wrong: the printed Contents list had slipped over onto three pages, whereas the PDF I had checked showed it on just two. Consequently, all the page numbers quoted in the Contents list were out by one page. I immediately used the web site chat facility to report the problem and was told that someone would get back to me.

The following morning, I was asked to provide photos of the problem and to specify the type of device and browser that I was using. I responded saying, “The device I’m using is a Windows 11 laptop with the Firefox browser (version 146.0 – 64-bit)”. I was then told that,

“It seems the issue may be related to the browser you used. Please know that while this is rare, we’re working to ensure all browsers display the correct format. That said, we will take full responsibility and send you your books again at no extra cost. To ensure everything appears perfectly this time and to prevent the same issue from happening again, I encourage you to log in to your account using Google Chrome and make any necessary adjustments. Once you’ve made the changes, please let us know, and we’ll provide a PDF copy for your final review. After you approve it, we will reprint your book and ensure it is sent to you as quickly as possible.”

A subsequent exchange confirmed that it would be worth trying with Microsoft Edge, which I duly did; and after comparing with the PDF I was sent, all seemed to be well and the book was sent for reprinting on 21st December. The two replacement copies were delivered to me by Royal Mail on 27 December – and they did have the correct pagination.

I felt so pleased with the support I had been given that I was prompted to send the following message to the support team: “I must say that the response of you and your colleagues in the Support team has been an outstanding example of prompt and excellent customer service.”. Having said that, though, the problem I encountered should not have happened, and my euphoric response probably also had something to do with the fact that my general experience of online support these days is poor. Furthermore, my subsequent dealings with the support team were less satisfactory – and rather revealing – but more of that at the end of this post.

So, having completed the whole 12-month cycle of My Life in a Book (MLIAB), how do I feel about the experience? Well, it certainly prompted the exploration, re-use and perhaps rethinking of old memories and artefacts; and it’s satisfying to have the results all neatly packaged up and sitting on my bookshelf. The completed book is intended as much for the current family and future generations as it is for me (as is pointed out in much of the MLIAB promotional material). As yet, I have no idea what my daughter and her husband think of the artefact they commissioned; nor what my other offspring, who will be the lucky recipients of the copies with the incorrect paging, will think. Perhaps they won’t even read it. However, as the author, I do know that I made some specific choices about the content. First, being conscious that the book might well be perused by all members of the family, I was careful to be inclusive and not to favour anyone in particular. Second, I naturally only included material I was happy for other people to know about. Third, some of the contents are things that the family almost certainly will not have been aware of. Fourth, after I’d finished, I began to have doubts about some of the material I had included, and inevitably started to think of other things I could have included – but I certainly wasn’t going to take up the service’s offer to extend the process: one year of answering questions, researching and editing was quite enough. Overall, I think it’s a pretty effective way of exploiting one’s collections of mementos, photos, correspondence and other personal material – but it does require work and persistence.

I should mention a couple of other things at this point. One is that the marketing effort by MLIAB is one of the most intensive I have ever experienced: during December I received over 40 general marketing emails unrelated to my account or the book I was producing. The other is that there are several other similar services available on the net (for example, The Story Keepers, Storyworth, Remento, and No Story Lost), but I haven’t investigated any of them.

Now, to return to my further dealings with the Support team. Throughout my exchanges with them I’d been a little bemused by the gushy nature of the responses. It wasn’t normal and smacked of AI (see messages 1-6 in this linked file). This view was cemented by the next message (Message 7) that I received, in answer to my asking whether they had encountered the problem with the generation of the PDFs and whether they knew what the cause was. The response was strangely imbalanced. It ignored the issues associated with PDF generation and instead described at length how large images are placed onto the following page leaving gaps – a fact I was already very familiar with – and gushed about a potential solution I had offered. At that point I replied with the question, “Julia, are you and your colleagues Pauline and Sandra real? How much of your reply below was generated by an AI Large Language Model (LLM)?”. The reply insisted that they were real people aided by tools such as AI (see Message 8 in the linked file).

Now despite my satisfaction with the way my book had been reprinted, there are some hard facts about customer service to be taken away from these exchanges:

  1. The positive impact of warm, gushy language just disappears once it becomes obvious it’s machine-generated. At the point I confirmed what was happening, I ceased to feel I was dealing with people and became rather hard-nosed and cynical – as will become apparent from my comments below.
  2. The fact that my PDF question had been ignored was very frustrating; but I decided not to follow it up because, in my experience, bots are useless when they are dealing with unfamiliar issues, and the organisations that implement them always seem more intent on saving headcount than addressing customer problems. The issue with PDF generation is a genuine problem that the MLIAB organisation should know about and be able to advise customers about. It’s disappointing that it wasn’t addressed in the response.
  3. Despite Message 8 insisting that all named members of the support team that I had been dealing with were real people, I’m not sure whether to believe it or not. LLMs are notorious for getting things wrong and saying what suits their prediction algorithms; and I think that organisations are all too often happy to obscure the real capabilities of their customer support operations. I may be wrong about the MLIAB support operation, but I’m afraid this is the view I now have after my experiences with a variety of bots and support operations; and after reading quite a bit about contemporary AI systems.
  4. Even if the messages to me were being reviewed by real people, the fact that my question about PDF generation was so studiously ignored suggests either that the reviewing wasn’t very good (or that there was insufficient bandwidth to scrutinise my question properly), or that the AI/people combination deliberately decided to ignore it.
  5. My overall attitude towards the MLIAB support operation is now one of ambivalence – despite its excellent response to the incorrect pagination of my book. I really have no idea how many real people they have in their support team, what their real names are, or how they actually operate. Does the AI draft a response as soon as a message is received, with the reply then quickly reviewed by real people (or even just a single person)? Or do the real people read the messages from clients and then enlist the AI to create a response? Knowing that this is the way the world is going, I’ll inevitably have to draw on this experience when I interact with other customer service operations in the future. This will be a self-perpetuating vicious circle until customer service is once again considered important enough for a sufficient number of human representatives to be given the time to interact in detail with every customer wanting help, support and answers to questions.

Donating Documents to an Archive

Around 1927, a group of people from Yorkshire who were working in Malaya established a social club for themselves. They called it ‘The Society of Yorkshiremen in Malaya’, and it operated successfully until the fall of Singapore to the Japanese in 1942. After the Second World War had ended, there were still some of the original members living in Singapore and Malaya, and, at a meeting held in 1949, they decided to reconstitute the Society.

My parents were both born in Yorkshire, so when they arrived in Singapore in 1953 they duly joined the Society. From 1959 onwards one or other or both of them were members of the Society’s Committee and acted in various capacities (Secretary, Treasurer or President) until the Society was dissolved in 1970 for lack of members. The Society’s minutes were initially recorded in a hefty, foolscap notebook with 296 numbered pages and, after the notebook ran out of pages, on loose sheets of paper. They had been sitting in a briefcase belonging to my parents for the last 55 years – until this summer, when I decided to try to find them a home.

I decided I would try to create a book to accompany the Minute Book and Loose Papers: a book that would provide some summary information about the Society, and which would also include scans of the loose documents. I hoped that the accompanying book would make the whole package attractive to an archive somewhere in Yorkshire.

The book ended up with the following contents in 8 folios of 16 pages – 128 pages altogether:

Front matter (Preface & Contents)
1. A short history                                                         3
2. Lists of Members 1953 – 1969                                8
3. Lists of Committee Members 1950 – 1969            43
4. Committee Meeting Minutes 1950 – 1969             57

Appendices
A. Correspondence regarding the pre-war Society    92
B. The Benevolent Fund                                          101
C. Scroll given to ex-President Fred Wilson             105
D. 1966 Annual Dinner menu                                  106
E. 1968 Annual Dinner menu                                  113
F. Photos from two Annual Dinners                         120
G. The Society’s Minute Book                                 121

Having finished the text, I researched possible archives in Yorkshire and decided to contact York Libraries and Archives. I was told that decisions about acquiring new material are taken at Collection Meetings held at the end of every month; so I provided a copy of the first folio of the book for consideration by the meeting and held my breath. On 2nd September I got an email saying the organisation would like to acquire the material.

Meanwhile, I’d been producing the hardcopy book. The first task had been to print each of the 8 folios using Word’s ‘Bookfold’ Page Setup option. This requires the pages to be printed in landscape on both sides of (in this case) A3 sheets.

Although the Bookfold printing process has been described in previous posts, here’s a recap of what to do. When you press Print in Bookfold mode, the first sides of the sheets are printed – two book pages side by side on each A3 sheet. The sheets must then be reordered by moving the top sheet to one side, placing the next sheet on top of it, the next sheet on top of that, and finally the last sheet on top of the third. The reordered set of sheets is then placed back into the paper tray pointing in the same direction as it came out. Instructing the printer to continue will then print the four sheets similarly on the other side. Reordering the sheets in the same way as before, and folding them in half, will magically produce the 16-page set in the correct page number order.
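
Word itself works out which pages go where; the fiddly bit is the manual reordering. As a cross-check, here is a minimal Python sketch (my own illustration – it plays no part in the Word process) that prints the page pairs you should expect to see on each A3 sheet of a 16-page folio:

```python
def bookfold_sheets(n_pages):
    """Page pairs for a saddle-folded booklet of n_pages (a multiple of 4).
    Sheets are listed from the outermost sheet of the folded set inwards."""
    assert n_pages % 4 == 0
    sheets = []
    lo, hi = 1, n_pages
    for _ in range(n_pages // 4):
        sheets.append(((hi, lo), (lo + 1, hi - 1)))   # (front pair, back pair)
        lo, hi = lo + 2, hi - 2
    return sheets

for i, (front, back) in enumerate(bookfold_sheets(16), start=1):
    print(f"A3 sheet {i}: front {front[0]} | {front[1]}   back {back[0]} | {back[1]}")
```

For a 16-page folio this gives sheet 1 as pages 16|1 on the front and 2|15 on the back, down to 10|7 and 8|9 on the innermost sheet – which is what the folded set should look like if the reordering has gone to plan.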

With all 8 folios printed out and folded, the next step was to sew them together; and then to paint PVA glue over the stitching (but not the tapes) to hold the set firm during subsequent steps.

The edges of the text block were then trimmed and squared in a bookbinder’s plough.

The final work on the text block was to glue on the end papers and the end bands, and to glue a piece of fraynot over the spine with a 2cm overlap on either side, with a piece of Kraft paper on top of it. Next, a cover was made using 2mm board covered with buckram fabric; and the end papers were glued to the cover to complete the hardback book.

Now that the dimensions of the completed hardback book could be measured, work on the Dust Jacket (DJ) started in PowerPoint. The design included 8.2cm wide flaps, making the total length of the DJ 62cm. The maximum length of paper that my HP Officejet Pro 7720 A3 printer will deal with is 43.1cm, so printing this DJ required the image to be split in two, with one part rotated through 180 degrees. The first part was printed, and the paper was then turned round and fed back through the printer to print the other part at the other end.

Once printed, the DJ was cut to size and folded accurately around the physical book so that the vertical title sat centrally on the spine. Finally, archival plastic was fitted around the DJ with a 5–6cm overlap folded over along the top and bottom edges (the folds hold the plastic in place, so no fixative is required).

Although I had committed to giving all the physical materials (the hardback book, the Minute Book and the Loose Papers) to York Libraries & Archives, I wanted an electronic version of all the material which we could keep in the family. I already had an electronic version of the hardback book that I had created, and that already included all the Loose Papers; all that was missing were the pages of the Minute Book. So, I took photographs with my iPhone of the Minute Book opened at each double page (a modern mobile phone produces photos of more than sufficient quality for such a job). It was surprisingly simple and quick: I found a coffee table of just the right height, placed my iPhone on it with the camera end sticking out over the side, and positioned the opened Minute Book on the floor below so that its full extent appeared in the frame. I held the phone in position with one hand and pressed the photo button with the other; then, with the same hand, I turned the Minute Book page over and took the next photo. In all there were 169 photos (because, while most pages were written on, some had minutes glued in – some with multiple sheets of paper stuck into them).

Armed with digital versions of all the material, I simply assembled them into a single PDF, starting with the pages from the Hardback Book and adding the Minute Book images at the end. Then I added the front of the DJ and a page with both flaps on it to the very front of the file, and a page with the back of the DJ to the very end.

There was now only one task left to do: to make the references in the digital version of the Hardback Book actually link to the pages they refer to. So, for example, the reference to MB63 in the first paragraph of Chapter 2 needed to be linked to the 170th digital page in the PDF. It was a long job, given that there were over 400 references to deal with. However, with it completed, the whole electronic book provides quick and easy access to all the interlinked material.
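
I did the linking by hand in my PDF application but, for anyone facing a similar job, much of the labour could probably be scripted. Below is a hedged sketch using the Python PyMuPDF library; the file name is hypothetical, and only the MB63 mapping comes from the example above – a real run would need a full table of reference-to-page mappings.

```python
import fitz  # PyMuPDF (pip install pymupdf)

doc = fitz.open("society_combined.pdf")        # hypothetical file name
targets = {"MB63": 170}                        # reference text -> digital page (the MB63 example above)

for page in doc:
    for ref, page_no in targets.items():
        for rect in page.search_for(ref):      # every spot where the reference text appears
            page.insert_link({
                "kind": fitz.LINK_GOTO,        # an internal go-to link
                "from": rect,                  # clickable rectangle over the reference text
                "page": page_no - 1,           # PyMuPDF counts pages from 0
            })

doc.save("society_combined_linked.pdf")
```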

I now turned to the practicalities of shipping the physical material to York. I decided to drive to York so that I could take a look at where the collection would be stored and accessed, and I agreed a date of 28th November with the York Archivist. The legalities of the transfer of the material also needed to be dealt with: there was a 4-page Gift Agreement, two copies of which were signed by my mother (the owner of the Minute Book and Loose Pages) and a witness. The agreement essentially passed all rights to “The Council of the City of York (‘the Council’) acting by Explore York Libraries and Archives Mutual Ltd” subject to any specified limitations; we specified no limitations.

On Friday 28th November, I set out for York, arrived at York Park & Ride at around 1.30pm, and was deposited by the bus in York city centre next to Clifford’s Tower just after 2pm. A 20-minute walk through the vibrant York shopping centre, teeming with Black Friday shoppers, took me to Museum Street where York Archives and Library is located. I met with the Archivist and handed over the books and papers. She signed the two copies of the Gift Agreement and gave one back to me; and then very kindly gave me a short tour of the three main areas (the Archives Reading Room, the Family History Room and the Local History Room). I left to celebrate the completion of my mission with a cup of coffee and an excellent bacon sandwich in one of York’s many coffee shops.

A few weeks later I received a thank-you letter from York Archives which included the following:

“Thank you for depositing the records of the Society of Yorkshiremen in Malaya with the archive here at Explore. Your records are a unique part of Yorkshire’s heritage and depositing them with us will ensure that the history of the Society of Yorkshiremen in Malaya is not lost. Your deposit will help us to share these stories with future generations and enable researchers to gain a richer picture of life in York.
Now that your records have been deposited, we will put them through a programme of cataloguing and packaging that will aid online discovery and the preservation of your collection.  Once this process is complete, we will be able to make them available to researchers, subject to any access restrictions.
Your records will form part of the city’s c450 cubic metres of physical collections and our growing digital archive. Together, these collections document nearly 900 years of York’s history…”

If you’ve read all the way to the end of this story, you may be interested in reading a bit about what the Society of Yorkshiremen in Malaya actually did. So, here are the 5 pages of Chapter 1 which provide a brief history of the Society (note that page number references in this text are preceded by either MB or TV: MB refers to the Society’s Minute Book; TV refers to pages in This Volume – the Hardback Book). I believe that the documents will have been indexed, packaged and made available in the York Archives within approximately 6 months – around the middle of 2026.

Figuring on playing to your Age?

Recently, a couple of people at my golf club have spoken to me about playing a round with a gross score that was equal to or lower than their age. One had just managed it, and the other would have done if one hole hadn’t been closed. Over the years several people have told me about their desire to achieve this feat; and I would certainly like to – though, being realistic, I’m probably not good enough. Anyway, these conversations got me thinking that a measure indicating how close you are to achieving it might be quite easy to calculate and maintain in the automated scoring and handicap systems that we use these days. It could simply be the gross score minus your age. To take account of courses having different par ratings, the score could be multiplied by course-par divided by 72. So, the formula would be: Gross Score x (Course Par/72) – Age. It could be called the RoundAge number (capital A to distinguish it from roundage which apparently is a local tax paid by a ship for the ground or space it occupies while in port). For example, if a 75-year-old got a gross score of 82 on a par 72 course, the RoundAge would be 82(72/72)-75 = 7. If, by some miracle, the golfer had a stellar round of 74 gross the following week, that RoundAge would be 74(72/72)-75 = -1. The RoundAge could be calculated for every card recorded, and averaged over each year to provide a longer-term graphical view of progress.
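
The calculation is trivial to automate; here it is as a short Python sketch using the two examples above:

```python
def roundage(gross_score, course_par, age):
    """RoundAge = Gross Score x (Course Par / 72) - Age"""
    return gross_score * (course_par / 72) - age

print(roundage(82, 72, 75))   # 7.0  (the 75-year-old's 82 on a par 72 course)
print(roundage(74, 72, 75))   # -1.0 (the following week's miraculous 74)

# Averaging a year's cards gives the longer-term view suggested above
cards = [(82, 72), (74, 72), (88, 70)]   # (gross, par) - illustrative scores
print(sum(roundage(g, p, 75) for g, p in cards) / len(cards))
```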

Preservation Maintenance Plan LITE template

Addenda to ‘Preservation Planning’

In 2021 I published v3.0 of a set of Preservation Planning templates which were designed to enable a rigorous Preservation regime to be applied to large collections of digital documents and their accompanying hardcopy material. However, in my recent investigations into the combination of collections it became apparent that a simpler and quicker approach would be more appropriate for multiple smaller collections with less complex formats. Therefore, a new Preservation Maintenance Plan LITE template has been produced and initially tested on two sets of 10 collections each. Further testing will be done over preservation cycles in the coming years, prior to issuing a version that can be said to be fit for purpose.  In the meantime, the current version is available for use at the link below.

Preservation MAINTENANCE PLAN LITE Template – v1.0, 09Sep2025

A Lite Touch

In the previous post I identified a need to understand the additional digital preservation requirements of the overall combined set of collections. To investigate this, I listed all the individual collections in a spreadsheet and noted some points which have a potentially significant impact on preservation work, including:

  • Does the collection have an index? (if there is no index there is no way to check the inventory – the items themselves define what is in the collection).
  • Does the collection have digital items with or without physical equivalents, and/or physical items with or without digital files? (when an item exists in both digital and physical form, there is more preservation work to do).
  • The number of digital and physical items (there is substantially less preservation work to do on a folder of 30 digital items, than there is on a collection of 500 digital items of which 175 have physical equivalents).
  • Whether there is any duplication with other collections (If a collection is part of a larger set of objects which already has a Preservation Plan, there is no need to specify a separate Preservation Plan for it).

Having populated this Preservation Assessment spreadsheet with its long list of 38 collections that might need Preservation work, I was filled with some dismay, as I’ve now had several years of implementing Preservation plans on many hundreds, if not thousands, of objects: it’s time-consuming and exacting work. I knew that I needed to minimise the time and effort spent on this new set of preservation activities if it was going to be workable and successful. Furthermore, I also realised that for many of the collections on the list I was not really that concerned about the long term: they were currently accessible (many without needing an index), required little intervention, and might be of little interest many years hence.

With these thoughts in the back of my mind, I went through the list deciding what preservation work, if any, was to be done on each collection. Fortunately, 8 of the collections either already had a Preservation Plan or were part of one which had; I discounted another altogether as it only had one insignificant digital file; and another seven were part of another collection on the list. I also combined 3 of the remaining 22 collections into a single overall Healthcare collection (because there were fewer than 90 files across them all), and 2 of the Book collections into a single overall Physical Books collection (because I knew the two would need to be done together). Finally, I added one other collection to the list – my other general laptop folders, which I concluded would also benefit from being under the control of a preservation plan. Consequently, I was left with 20 collections to define Preservation Plans for. This was far too many to be practical and, in any case, the more I looked at the digital files involved, the more I realised that they mainly consisted of pdf, jpg, png, doc/docx, xls/xlsx, and ppt/pptx formats – not very problematic. For the most part, an eyeball check would be all that was necessary to identify doc, xls, and ppt files needing conversion to docx, xlsx, and pptx respectively, so the detailed 16-step process required in my comprehensive Preservation Maintenance Plan template would be overkill. I needed to create a LITE version of the Preservation Plan with fewer steps, capable of addressing multiple collections. What I came up with were the following 4 steps:

  • Populate a ‘Changes’ section with the significant changes that have occurred to the collection and its digital platform between the previous maintenance exercise and the maintenance you are about to carry out.
  • Populate a ‘Hardware and operating system strategy’ section with the strategy you envisage for the future.
  • List the collections you want to undertake Preservation activities on in a ‘Contents & Location’ section together with the specific actions you want to take for each one (for example, ‘Check file extensions’ or ‘check inventory’).
  • Record a summary of the actions taken and associated results for each collection, in an ‘Actions taken’ section.

With this structure in mind, I separated the 20 collections into two groups – one which included substantial numbers of physical objects, and one which consisted mainly of digital files. The result was two Lite Preservation Plans, each dealing with 10 collections (it’s just coincidence that each has the same number).

The actions specified for each collection were established by assessing what I wanted to protect against, and how much effort I was prepared to make, for each collection. Six types of possible action emerged:

  • Check file formats: Check that the current file formats will enable the files to be accessed in the future and, if not, make changes to ensure they will (a scripted tally of file extensions, sketched after this list, makes this a quick job).
  • Check Inventory: Check that the index entries have a corresponding physical item and/or digital file, and rectify any inconsistencies.
  • Ensure physical docs are up to date: Ensure that the physical documents are the latest versions.
  • Ensure Index is up to date: Ensure that the latest additions to the collection are included in the Index.
  • Ensure Digital collection is up to date: Ensure that the latest additions are all included in the digital collection.
  • Ensure Physical collection is up to date: Ensure that the latest additions are all included in the physical collection.
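
The ‘Check file formats’ action, in particular, lends itself to a little scripting. Here is a minimal Python sketch (the folder path is just an example) that tallies the file extensions under a collection so that any legacy formats stand out without opening a single file:

```python
from collections import Counter
from pathlib import Path

root = Path(r"C:\Users\pwils\Documents\APAWCOL")    # example collection root
legacy = {".doc", ".xls", ".ppt", ".rtf", ".csv"}   # formats I would want to convert or review

counts = Counter(p.suffix.lower() for p in root.rglob("*") if p.is_file())
for ext, n in counts.most_common():
    flag = "   <-- consider converting" if ext in legacy else ""
    print(f"{ext or '(no extension)'}: {n}{flag}")
```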

The two Preservation Plans fully populated with the results of the preservation work carried out on them can be accessed at the links below:

Objects Preservation Maintenance Plan Lite dealing with 10 collections

Files Preservation Maintenance Plan Lite dealing with 10 collections

The preservation work, as specified and recorded in both plans, took approximately 20 hours over about a week. This included filling in the Plan documents with the results as each collection was tackled. Overall, the main actions taken were:

 1,976 .doc files converted to .docx: 1,937 of these were converted in bulk using the VBA code kindly provided by ExtendOffice (see https://www.extendoffice.com/documents/word/1196-word-convert-doc-to-docx.html). The remainder (a few of which were originally .rtf files) were simply opened in Word and saved as .docx files.

150 .xls files converted to .xlsx: 141 of these were converted in bulk using another set of VBA code provided by ExtendOffice (see https://www.extendoffice.com/documents/excel/1349-excel-batch-convert-xls-to-xlsx.html), with the remainder (a few of which were originally .csv files) being opened in Excel and saved as .xlsx files.
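
What I actually used were the ExtendOffice VBA macros linked above; but for completeness, here is a hedged sketch of how the same bulk conversions could be done from Python by driving the installed Office applications through the pywin32 package. The folder path is hypothetical, and Word and Excel must be installed for it to work.

```python
from pathlib import Path
import win32com.client   # pywin32

folder = Path(r"C:\Users\pwils\Documents\ToConvert")   # hypothetical folder of old files

word = win32com.client.Dispatch("Word.Application")
for f in folder.rglob("*.doc"):                        # the pattern does not match .docx files
    doc = word.Documents.Open(str(f))
    doc.SaveAs2(str(f.with_suffix(".docx")), FileFormat=16)   # 16 = wdFormatDocumentDefault (.docx)
    doc.Close()
word.Quit()

excel = win32com.client.Dispatch("Excel.Application")
for f in folder.rglob("*.xls"):                        # the pattern does not match .xlsx files
    wb = excel.Workbooks.Open(str(f))
    wb.SaveAs(str(f.with_suffix(".xlsx")), FileFormat=51)     # 51 = xlOpenXMLWorkbook (.xlsx)
    wb.Close()
excel.Quit()
```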

564 files deleted: 464 of these files were in an iTunes folder – and I no longer use iTunes. 36 were CD case covers/spines created in an application I no longer have – and since the covers are all now printed out and in place on the CD cases, I no longer need the files. Most of the remainder were odd files for which I no longer have any use. As is apparent from this description, such files tend to come from folders containing more general material rather than specifically collected and indexed items. Many computers probably have an array of such unneeded material.

Around 9 new items added: 7 of these were added to get a collection up to date, and the others were the two new Lite Preservation Plans which were included in the Backing-up collection.

2 Hardcopies updated: One was a physical A5 ring binder of the addresses in my address database; the other was my Backing-up and Disaster Recovery document, which I print out and keep in my desk drawer. It’s really a bit of an effort to update such documents regularly, so they often get out of date. Having a scheduled Preservation Plan does help to keep them relatively current.

The next cycles of these two Preservation Maintenance Plans are now scheduled for 2027 and 2028 respectively: I can now relax, confident that I have done as much as I wish to future-proof the 20 collections that they deal with.

I have included most of my workings in this post largely to help me be clear about what I did. However, the details are of little consequence to readers interested in undertaking digital preservation work on their own collections. They only serve to show that you can call anything a collection, and that you can cut and dice collections any way you want. The key point is that, using this approach, it is feasible to exert a measure of preservation control over a large number of collections, including the files on your computer, with relatively little effort. If you try this out, you may find this Preservation MAINTENANCE PLAN LITE Template helpful.

Published!

Events have moved on apace since my last post three weeks ago. For a start, the publication date moved in stages out to 7th August before coming back in to 4th August, and the Waterstones web advert, which had vanished, reappeared. Then, suddenly, on Saturday 28th June we received an email from the Production Editor saying that the book had been published, with information available at https://link.springer.com/book/10.1007/978-3-031-86470-4. We have subsequently received a congratulatory email from Springer, and this, together with the web site information, provides a revealing example of how academic publishing now operates.

The congratulatory email includes advice on how to ‘Maximize the impact of your book’ and offers use of ‘a suite of bespoke marketing assets to help you spread the word’. Also included was a link to a PDF version of the published text. The Springer site advises that the ebook (£119.50) was published on 27 June, the hardback (£149.99) on 28 June, and that the softback will be published on 12 July 2026 (price not yet specified). The site also provides a list of the book’s chapters, each of which can be opened to reveal the summary abstract we had been asked to provide, and the full set of references together with any digital links we had included. Each chapter can be purchased separately for £19.95, or one can take out a Springer subscription for £29.99 a month entitling you to download 10 chapters/articles per month (which, interestingly, would get you pretty much the whole of Collecting in the Icon Age!). Those with appropriate credentials may also be able to log in via their institution and get content for free if the institution concerned has come to a separate arrangement with the publisher.

Since hearing that the book has been published, I’ve been working on the supplementary material we are providing on the pwofc website. This includes a single document containing all the references, each with an appropriate web link. In searching for those links over the last week I’ve noticed that, in several cases, extracts from our book are already appearing in the hit lists. Furthermore, I discovered that previews of many pages of the book (including the whole of Chapter 1) are available in Google Books, ‘displayed by permission of Springer Nature. Copyright’. All this less than 7 days after publication.

Two things stand out to me from all this: first, there is a surprisingly large amount of information available for free about the book. It is probably not sufficient if you really are interested in the subject – but you can get a pretty good idea about what the book contains. Second, there is clearly a focused effort to monetise the publication in every possible way.

Now that we’ve achieved publication, I don’t intend to provide any further running commentaries on progress. The material we are providing to supplement the book is in the Icon Age Collecting section of this website, and that is where we intend to conduct any dialogues about the book that should arise.

Plot profile for the movie ‘Eerie AI’

Gronk Pistolbury knew quite a bit about AI. After doing a PhD on ‘Extreme perturbationery and calmic episodes in deeply embedded AI neuron nodes’, he had moved around various high-profile organisations operating LLMs (Large Language Models) in the 2020s and 30s. During those years he had continued to develop his PhD ideas and, by the mid-2030s, had come to the conclusion that something odd was going on.

His research was based around the analysis of AI hallucinations, and he collected instances of the same from both his own vast bank of automatically generated content, and from whatever other sources reported such an event. His analysis of this material had started to show up similarities and even some duplications across the more recent data sets – and Gronk couldn’t figure out why. He suspected that the hallucinatory material was going back into the internet data pool and affecting the content of the LLM – but he had no real evidence to back up his theory.

In 2038, he had used a large chunk of his savings to take out a three-year subscription to the Jonah Vault – the most extensive and advanced AI Data Centre conglomerate in the world – and to acquire an extremely powerful computing configuration for his own home. His idea was to test out his theory by using the Jonah Vault to produce enormous numbers of AI outputs for analysis by his own specialised system. The analysis would identify hallucinations and map similarities between them – and insert them back into the training data for his own LLM in the Jonah Vault. This was to be done at scale – over a billion instances a month.

By 2041, his research was beginning to show some significant convergences in hallucinatory events; but his Jonah Vault lease had only a few weeks to run and he had no money available to continue to fund his work. It was at this point, however, that Gronk Pistolbury won the Inter-Continental Lottery and pocketed a cool $7.9 billion.

2041 was also the year when Quantum Computing became truly commercially accessible. There had been a few start-ups in the late 30s offering both hardware systems and cloud services. However, it was the arrival of Quiver Inc. in 2041 that made Quantum a practical and affordable alternative to conventional digital systems. Gronk took out a $500 million, one-year service contract with Quiver, hired half a dozen of the best quantum computing engineers he could find, and built a quantum version of his hallucination test bed.

When Gronk set his Quantum operation going, he had hoped that it would significantly speed up the circulatory process of hallucination production and LLM development. However, the system was far more powerful than he had dared hope: it reduced the cycle time by a factor of tens of thousands. After 3 months of operation it became clear that the LLM was converging on a relatively small number of answers to any question asked of it; after 6 months it was down to a few hundred characters. Needless to say, the answers now bore no relation to the questions that had been asked. Pistolbury and his engineers watched in puzzled fascination as the LLM continued to narrow its answers to the questions put to it relentlessly by the Quiver Quantum machine. Finally, after 7 months, 26 days, 14 hours, 9 minutes and 4.278 seconds, the LLM settled on its final answer to any question about anything – 42.

They had seen it coming but couldn’t quite believe it would happen. It was bewildering, weird, crazy, eerie: the hallucination machine had said that the answer to any question was 42; and some 63 years earlier, Douglas Adams had said in The Hitch Hiker’s Guide to the Galaxy that the answer to the great question of Life, the Universe and Everything was 42. From that point onwards the hallucination model LLM would give no other answer to any question. It did not reduce the number, change the number, or add to it. It stayed, unmoving, at the two characters that a humorous author had thought up on the spur of the moment in the previous century.

…Should the movie be a success, a possible sequel could follow Pistolbury over the following three decades on an epic quest to understand what had happened, by undertaking a whole variety of way-out experiments producing eerie LLM results. For example, neural node pairing, star refraction hypnosis, and, in all its gory detail, LLM brain fluid crossover.

Note: All of the above is pure fiction. None of the names or dates or scientific claims are real (and some of the science bits don’t even make sense!). Should any of this material find its way into AI answers, it will be because it has been purloined for AI training data; and it would be a graphic example of AI’s inability to distinguish reality from fantasy. This little idea for a (really bad) movie plot might even end up playing a supporting role in an AI hallucination… now that would be amusing!

Revised Proofing

Despite my thinking that the proofing process was closed, Springer sent us ‘Revised Proofs’ on Saturday 7th June to check and return by Monday 9th June. This was good news as far as I was concerned, as it provided opportunities both to check that the proofing changes we had specified had all been made correctly (and, indeed, I did spot 27 shortcomings), and to specify a further 15 changes which my continuing checks on the references had identified (I might add that the vast majority of all these changes were relatively minor, involving changes to only a few words, if that). This time round, we had been asked to specify changes as annotations to a revised PDF, so I used the PDF callout facility to document each change needed in a box with an arrow next to the relevant text. My co-author, Peter, had work priorities over these few days, so the changes – and anything missed – are all down to me.

I duly submitted the annotated proof around 9pm on the night of Monday 9th June; and the next day we received an email from Springer acknowledging receipt of our comments and saying that they would review and incorporate them in accordance with Springer’s guidelines, after which they would proceed with the online publication process. I’m not too clear what ‘the online publication process’ entails; nor do I understand why the publication date continues to move – as at the date of this post it stands at 26th July on Springer’s web site. However, I do think that the proofing process is now truly complete. In an interesting development, Waterstones appears to have pulled its web page advertising the book, and I wonder if that is because they have grown impatient with the continual movement of the publication date. Beck-Shop and Amazon, however, are still offering the title.

What bonuses (and companies) are for

I believe most large organisations these days have a mission statement; and the ones I’ve seen usually include words about providing excellent products and customer service. However, my own experience in recent years seems to suggest that many large organisations are now just dedicated to growing their businesses and making more money – despite what they say in their mission statements. Products just seem to get smaller (for example shower gel in a different but smaller bottle) or worse (tins of baked beans with sausages that now taste completely different and not as nice), and customer service is mostly abysmal (for example, long phone wait times, and bots instead of people). Furthermore, Chief Executive bonuses often seem to be tied to how much money is made. I wonder if any organisations tie their CEO’s bonus schemes to all the elements of the organisation’s mission statement. Would it make a difference if all organisations did that as a matter of course?

Some Combination Consequences

A few days ago, I completed the Preservation Maintenance exercise for the PAW-PERS and SUPAUL-PERS collections. Actually, these two collections no longer exist separately – they were merged into a new Mementos collection in last year’s Combining Collections journey. During the Preservation work, I encountered a few issues directly related to the increased scope of the Mementos collection, and to the way I combined all my collections. They are listed in the bullets below and subsequently described in more detail:

  • File pathnames exceed system limits
  • Varied ways of filling in fields
  • Preservation Maintenance is a bigger job
  • More Preservation Maintenance work is required
  • Backing up becomes more complicated

File pathnames exceed system limits: MS Windows limits pathnames to around 260 characters (the MAX_PATH limit) unless you make a change to the Registry. When I combined collections, I deliberately included the contents of a folder in the folder title to make navigation easier, for example, ‘Documents/PAWCOL/Family History (Archive, Mementos, Display Case Items, Photos, Recordings, Story Boards, Trophies)’. This resulted in very long path names when combined with file names containing a lot of detail about their contents (for example, ‘MW-BKS-0001-02 – 4 smaller books – The Rubryat of Omar Kyam, The language of flowers, A preliminary course of First Aid, and a midget English dictionary’). Files whose full paths exceeded the limit were still visible, but there were two undesirable impacts. First, such a file wouldn’t open in my PDF app, and seemed to cause the app to stop opening other PDF files as well. Second, the ‘Copy as path’ function, which I was using to compare the file titles with the index entries, wouldn’t produce the correct file name: the MW-BKS-0001-02 file shown above came out in the old DOS 8.3 ‘short name’ form as ‘C:\Users\pwils\Documents\APAWCOL\FAMILY~1\Mementos\MEMENT~3\MW-BKS~2.JPG’. I decided not to go with the registry change to rectify this, as I’m not sure how it would affect the PDF app and, in any case, I’m not familiar with messing about with the Registry. My priority is to get the PDF app working again properly and permanently. Consequently, I have started to take inessential information out of the relevant file titles to bring them in under the limit.
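
Finding the offending files can itself be scripted. Here is a short Python sketch that walks a collection (the root folder is the one from the example above) and lists every file whose full pathname exceeds the limit; the \\?\ prefix is there so that the walk itself doesn’t trip over the very names it is looking for:

```python
import os

ROOT = r"\\?\C:\Users\pwils\Documents\APAWCOL"   # \\?\ lets the Windows calls handle paths beyond MAX_PATH
PREFIX = 4                                       # the four characters of the \\?\ prefix don't count
LIMIT = 260                                      # the classic MAX_PATH figure

for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        full = os.path.join(dirpath, name)
        if len(full) - PREFIX > LIMIT:
            print(len(full) - PREFIX, full[PREFIX:])
```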

Varied ways of filling in fields: The Mementos collection has combined 5 different collections – all of which had different ways of providing information in the ‘Physical Location’ field. Consequently, the Excel Filter drop-down list of different physical locations was very large and varied. So, I imposed a standard whereby all physical locations start with terms like Study, Chest, and Loft, followed by a standard form of subsequent words. This is an obvious point, but when you combine several collections into a single index a degree of normalisation work is inevitably necessary.
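
A quick way to see the scale of the normalisation job is to list the distinct values in the field straight from the index spreadsheet. A tiny Python/pandas sketch (the file name is illustrative; the column name is the ‘Physical Location’ field mentioned above):

```python
import pandas as pd   # reading .xlsx files also requires the openpyxl package

index = pd.read_excel("Mementos Index.xlsx")                   # illustrative file name
print(index["Physical Location"].value_counts().to_string())   # one line per distinct location value
```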

Preservation Maintenance is a bigger job: When PAW-PERS and SUPAUL-PERS were separate collections with separate indexes, I conducted Preservation Maintenance on them separately in previous years and had a separate Preservation Maintenance Plan for 2025 for each of them. They contained about 800 and 750 items respectively. However, the new Mementos index/collection not only contains their 1550 items but also about 550 items in the CONTRAB collection and another 220 items in the Computer Artefacts collection – a new total of about 2320 items. Furthermore, the physical items in each of these four main elements are all stored separately, in different locations and in different ways. Inevitably, this vastly increased number of diverse items has meant that the Preservation Maintenance exercise for the new Mementos collection took a great deal longer than the previous separate exercises, and was a good deal more complicated. This matters because Preservation Maintenance seems like an overhead task, and the bigger and more complicated it is, the less motivated the owner may become to undertake it. It seems there may be a trade-off between combining indexes to make them easier to manage and access, and keeping the Preservation Maintenance easy enough to be carried out regularly and reliably.

More Preservation Maintenance work is required: Before combining collections, I was only undertaking Preservation Maintenance work on four collections, all of which have indexes – PAWDOC documents, Photos, and two separate sets of Mementos. Having combined all my collections, I now have some 40 collections which potentially need Preservation Planning work – many of which have no index. This is a potentially huge increase in work – though, at this point, I don’t really know what is required, nor whether it is best to deal with all these additional collections together or in smaller separate groups. One key criterion to be considered will be which Preservation arrangement has the greater chance of actually being enacted, and not simply put to one side as being too difficult or time-consuming. I will have to investigate the implications and will document my findings in a subsequent post.

Backing up becomes more complicated: As documented in earlier posts, in combining collections I have made considerable use of shortcuts. For example, within the ‘Entertainment Recordings (Movies, Music, Spoken Word)’ section there are shortcuts to the Windows Videos library, the Windows Music library, and the Spoken Word folder within the Windows Music library. So, just copying the contents of the ‘Entertainment Recordings (Movies, Music, Spoken Word)’ folder will not provide an adequate backup. Care will need to be taken in specifying and carrying out backups to ensure that copies of the appropriate material are actually taken.
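
One simple precaution is to list what the shortcuts actually point at before deciding what to include in a backup. Here is a hedged Python sketch using the pywin32 package; the folder path is illustrative, and shortcuts to Windows libraries (as opposed to ordinary folders) may not resolve to a simple file-system target:

```python
from pathlib import Path
import win32com.client   # pywin32

shell = win32com.client.Dispatch("WScript.Shell")
folder = Path(r"C:\Users\pwils\Documents\Entertainment Recordings (Movies, Music, Spoken Word)")  # illustrative

for lnk in folder.rglob("*.lnk"):
    target = shell.CreateShortcut(str(lnk)).TargetPath   # where the shortcut really points
    print(f"{lnk.name} -> {target or '(no simple file-system target)'}")
```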