We all have bad days at work occasionally, but for one IT employee in India, this could well be the worst day of their career. A technology website has claimed that the computer error which sent RBS into a tailspin, created a week of hell for thousands of customers and will end up costing millions of pounds to rectify, was down to one individual based in India.
So what happened, and could it happen elsewhere?
It's worth stating up-front that, at the moment, there is no confirmation of this story; beyond insisting the issue did not originate overseas, the bank has not commented.
The rumour

The rumours started as soon as the problems emerged. As part of cutbacks, 30,000 back office jobs in the RBS group were made redundant - including a number of IT roles - and some of the work was outsourced to India. It saved the company a small fortune, as pay in India can be as little as a fifth of UK levels. Earlier this year, the bank was advertising 'urgent' positions in Hyderabad paying £41,000 less a year than the equivalent in the UK.
Denial

There were those who immediately began asking whether the glitch that caused the chaos originated in India. Stephen Hester, chief executive of RBS, was quick to deny this on Sky News, stating that there was 'no evidence' that this was the case. He said: "The IT centre in Edinburgh is our main centre, it is nothing to do with overseas. Things go wrong. Things go wrong in technology."
The source

However, a source who had worked for RBS told The Register website that the problem had been caused by "an inexperienced operative" in India.
The website claims the trouble began when a software upgrade (developed in the UK) froze part of the bank's systems on Tuesday. This isn't terribly unusual, and as Hester says, 'things go wrong in technology'. The usual procedure is to back out of the upgrade, run the system on the old software, fix the problem with the new software, and give it another go. This can cause a temporary issue, but one that affects few customers, because upgrades tend to be run on a quiet night.
According to the website's source, what happened in this instance was that, while backing out of the upgrade, an 'inexperienced operative' made a huge error and accidentally erased all the jobs waiting in the queue. These 'jobs' were the transactions due to go through and show up in people's accounts overnight. The information then had to be re-entered into the computer system manually and re-run, creating a massive backlog: transactions have to be processed in order, so every one of them had to wait for the data to be re-entered.
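The ordering constraint is the crux: each overnight job updates balances that later jobs depend on, so an erased queue cannot simply be skipped - everything must be re-entered and replayed from the start. A minimal sketch of such a first-in, first-out job queue (purely illustrative; the names and structure here are assumptions, not a description of RBS's actual batch system):

```python
from collections import deque

def apply_batch(balances, jobs):
    """Apply queued (account, amount) transactions strictly in FIFO order.

    Order matters: each account's balance depends on every transaction
    queued before it. Hypothetical example, not RBS's real software.
    """
    while jobs:
        account, amount = jobs.popleft()
        balances[account] = balances.get(account, 0) + amount
    return balances

balances = {}
jobs = deque([("alice", 100), ("alice", -30), ("bob", 50)])
apply_batch(balances, jobs)
# If 'jobs' were erased before this ran, every transaction would have to be
# re-keyed by hand and replayed from the front of the queue - hence the backlog.
```

Replaying out of order, or only partially, would leave balances wrong, which is why the whole queue had to be rebuilt before any of it could run.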
David Fleming, national officer of the banking union Unite, said: "Serious questions must be asked as to why constant job cuts are being made when there are clearly serious issues which need addressing by management."
Sign of things to come?

The question is whether this could happen elsewhere. In essence it isn't impossible, because all the banks use similar software to process transactions. It will therefore come down to policies and procedures: who is in charge of running upgrades and backing out of them, and how they are overseen.
In the worst-case scenario this is a foretaste of what is to come, as all the banks try to keep pace with technological developments at as low a cost as possible. In the best case, it is the wake-up call they need to ensure their systems are up to the task in future.