Artificial Intelligence (AI) is a term used to signify intelligent behaviour by machines – which really just means them doing more complex things than they have done before. Two significant milestones in the development of AI were IBM’s Deep Blue programme beating Garry Kasparov, the reigning world chess champion, in 1997; and Google DeepMind’s AlphaGo programme beating Lee Sedol, a top professional Go player, in 2016. Current prominent AI work includes the development of driverless cars and trucks, and improving the ability of AI programmes to learn for themselves. It is generally thought that AI capabilities will continue to be developed in narrow areas of application for some time yet, before eventually broadening in scope to become more general-purpose intelligent entities. Assuming this development trajectory, we can speculate that the way we deal with our digital objects and collections might be affected by AI in the following series of steps, each one taking greater advantage of an increasingly capable technology:
A. AI to collect digital objects at our specific request: The Facebook ‘on this day’ function, which we can choose to turn on or off, is a good example of this in use in a contemporary system. In future systems we might imagine an AI which is independent of any one system but which we could ask to collect specific objects across the systems we specify – for example, ‘collect all photos that we look at in our email, in Facebook and on Instagram’.
B. AI to collect digital objects at our general instruction: This is similar to step A except that we won’t have to specify the systems we want it to monitor. We’ll just provide a blanket instruction such as ‘collect everything to do with any shopping I do’, or ‘collect all photos I look at’, and the AI will address the request across all the systems we use. At this stage the AI should also be taking care of all our backup requirements.
C. AI to understand what it sees in the digital objects: Having asked the AI to collect objects for us, at this step it will be capable of fully understanding their content, and of holding a conversation about what they are and the connections between them. At this point there will be no need for indexes to digital collections, since the AI will know everything about the objects anyway; it will be able to sort and organise digital files and to retrieve anything we ask for. The AI will also be handling all our digital preservation issues – it will simply do any conversions necessary in the background to ensure that files are always readable.
D. AI to exploit our digital objects for us at our request: Now that the AI has control of all our objects and understands what they are, we may just be able to say things like, ‘assemble a book of photos of the whole of our family line and include whatever text you can find about each family member and have three copies printed and sent to me’.
E. Eventually we leave it all to AI and do nothing with digital objects ourselves: By this stage the AI will know what we like and don’t like and will be doing all our collecting and exploiting for us. We’ll just become consumers demanding general services and either complimenting or criticising the AI on what it does.
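The first of the steps above (A) is concrete enough to sketch in code: a collector that is independent of any one system and is pointed at a user-specified set of services. This is purely an illustrative sketch – the `PhotoSource` adapter, the `DigitalObject` record and the `collect` function are hypothetical names standing in for real service APIs, not anything that exists today.

```python
# Sketch of step A: collect objects of a requested kind across a
# user-specified set of services. All class and function names here
# are hypothetical stand-ins for real service adapters.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass(frozen=True)
class DigitalObject:
    source: str  # which service the object came from
    kind: str    # e.g. "photo", "document"
    name: str


class PhotoSource:
    """Hypothetical adapter: one instance per service the user names."""

    def __init__(self, service: str, objects: Iterable[DigitalObject]):
        self.service = service
        self._objects = list(objects)

    def objects(self) -> Iterable[DigitalObject]:
        return iter(self._objects)


def collect(sources: List[PhotoSource], kind: str) -> List[DigitalObject]:
    """Fulfil a request like 'collect all photos from these services'."""
    return [obj for src in sources for obj in src.objects() if obj.kind == kind]


# Usage: in step A the user explicitly names the systems to monitor;
# in step B the AI would discover the systems itself.
email = PhotoSource("email", [
    DigitalObject("email", "photo", "beach.jpg"),
    DigitalObject("email", "document", "tax.pdf"),
])
facebook = PhotoSource("facebook", [
    DigitalObject("facebook", "photo", "party.jpg"),
])
photos = collect([email, facebook], kind="photo")
```

The design point the sketch makes is the one the steps turn on: in step A the list of sources is supplied by the user, whereas steps B onwards progressively move that responsibility into the AI itself.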
The last stage above reflects one of the possible futures described by Yuval Noah Harari in his book ‘Homo Deus’, in which AI comes to know us better than we know ourselves, since it will fully understand the state of our knowledge and be able to discount temporary influences such as having a bad day or some slanted political advertising. This clearly represents a rather extreme possibility many decades hence; nevertheless, given what we know has happened to date, we would be foolish to discount either the rate or the direction of possible development. However, we should also remain absolutely clear that it will be us, as individuals, who decide whether or not to take up each of the steps described above.
Throughout this period of the rise of AI, we will still be dealing with our physical world and our physical objects. AI may be able to see the physical world through lenses (its eyes) and understand what it is seeing, and we may well get the AI to help us manage our physical objects in various ways. However, it won’t be able to physically manipulate our objects unless we introduce AI-imbued machines (robots, for want of a better word). This too is a distinct possibility – especially since we are used to having machines in our houses (we’ve already made a start with robot vacuum cleaners and lawn mowers). However, having tried to think through the various stages we would go through in using robots, I came up against a bit of a brick wall. I found it very hard to envisage robots rooting round our cupboards, putting papers into folders, and climbing into the loft. It just seems unrealistic unless it were a fully fledged, super-intelligent, human-like robot – and that in itself brings with it all sorts of other practical and ethical questions which I’m not equipped even to speculate about. Perhaps all that can be said with any certainty about such a future of AI software and robots is that humans will take advantage of whatever technology is on offer, provided it suits them and they can afford it.