Back in the late 1990s, a small group of people realised that storing the year in a two-digit format might cause some problems at midnight on 31st December 1999. As the date flipped over from 31-12-99 to the very odd-looking 01-01-00, would critical IT systems fail? After all, in a two-digit representation the date had never gone ‘backwards’ before. The ‘Y2K bug’ was born.
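To see why the two-digit representation worried engineers, here is a minimal sketch (a hypothetical example, not code from any real system) of the kind of date arithmetic that silently breaks at the rollover:

```python
# Hypothetical illustration of the Y2K comparison bug.
# Many legacy systems stored only the last two digits of the year,
# so "00" (meaning 2000) compares as *earlier* than "99" (1999).

def days_overdue(due_yy: int, today_yy: int) -> int:
    """Naive elapsed-time calculation using two-digit years."""
    return (today_yy - due_yy) * 365  # ignores months/days for brevity

# Throughout the 1990s this works as expected:
print(days_overdue(98, 99))   # 365 -- one year overdue

# At the rollover, 1999 -> 2000 becomes 99 -> 00, and time runs 'backwards':
print(days_overdue(99, 0))    # -36135 -- a 99-year *negative* interval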
I remember the period vividly. While there was deep concern amongst the few, the many largely took little notice. After all, it was ages away and there were more pressing problems. However, there was a moment when the realisation of this hard deadline came into sharp focus, particularly in the boardroom and amongst senior leaders in government. At one point there was a serious plan to ground all aircraft globally for 24 hours, to avoid planes falling out of the sky or being unable to land safely. Perhaps GPS systems, or the UK power grid, would fail. Banking systems were largely written in COBOL, storing the year in two-digit format. Was our money safe?
Large IT budgets were rapidly made available. Live systems were replaced, updated or at the very least replicated and extensively tested. This was a project with a very hard deadline, and such projects are always lucrative for IT suppliers.
Then 1st January 2000 came, and nothing particularly bad happened. Today, this event is cited as a classic example of existential panic. However, I know many Y2K problems were found and many systems were fixed in advance. I believe there would certainly have been some nasty consequences had the many ‘Y2K projects’ not been undertaken, though of course the risks were never existential.
Today, in February 2024, how is this episode in IT history relevant to AI? For several years, a relatively small subset of humanity has been very concerned about the threats from AI, and has been working hard on regulation. At the same time, we have seen a huge expansion in the development, deployment and acceptance of AI. We have smart speakers and smartphones, and both Google and Bing have recently been augmented with generative AI capabilities. AI is today used to sort CVs, make movies and write essays, and software developers regularly use AI to write code. More worryingly perhaps, AI is used for face identification and tracking, for financial product decisions, and even to support legal decision-making.
Most developers, deployers and users of AI are unaware that hard regulation is about to be applied to AI. The text of the European Union (EU) AI Act has been agreed, and will soon become EU law, possibly as soon as Q2 this year. Prohibitions will begin to be enforced six months after that date, with the remaining obligations phased in over the following 18 months. By the summer of 2026, AI will be strictly regulated in the EU. These regulations protect EU citizens resident in the EU wherever in the world the AI providers are based: providers located outside the EU will still be bound by the requirements of the AI Act when interacting with EU citizens. Enforcement will ultimately take place via the courts, with harsh penalties specified by the EU’s AI Liability Directive.
A moment is coming, and soon, when private sector boardrooms and public sector senior executive offices will wake up to the reality of AI regulation. There will be a scramble for compliance (or ‘conformity’, as the EU likes to describe it). There will be large budgets, possible existential risks for some organisations, and ultimately a hard deadline to meet. Whilst, unlike Y2K, nothing will immediately happen as the deadline passes, you can be sure that the EU will take this regulation seriously enough that there will be early casualties in the war on bad AI.
Very soon now, shareholders and investors, chief executives and union bosses alike will demand that advance action be taken to mitigate the potential risks of non-conformity. The Y2K moment did not happen on 31st December 1999. By then, nothing bad was going to happen, and nothing much did happen. No, the Y2K moment happened perhaps 18 months earlier. That moment is about to happen again with AI.