
Artificial Intelligence Threats to Banks

Artificial intelligence is the theory and development of computer systems that perform tasks usually requiring human intelligence, such as hearing, speaking, understanding, and planning. In AI, algorithms give machines cognitive functions that let them perceive their environment and turn inputs into actions. Recently, its applications have grown to the point of completely performing tasks that were previously done by human beings. One of these applications is in banking operations, where artificial intelligence is used to automate tasks such as ATM transactions. Machine learning and deep learning, two branches of AI, are also used by banks for data analysis and security. These recent advances have transformed the banking sector, but they also expose banks to risk. This article focuses on some of the risks that the introduction of AI techniques brings to the banking sector.

AI’s Infancy

One of the significant threats AI poses to banking is the fact that it is still in its infancy: it is difficult to predict just how clever or efficient it will become in the near future. A useful analogy for the way AI systems learn is the way a child learns. Just as one cannot leave a child without supervision, AI systems cannot be left unsupervised. Despite their learning potential, they are still at an early stage and require attention. For banks, this kind of uncertainty could be catastrophic if not handled with care and caution. Notably, most of the AI tools used in banks are new, so there is no prior experience to compare them against or to predict outcomes from. AI therefore carries a potential error rate due to its technological infancy, and a simple programming error could be catastrophic in the financial sector. It is also worth remembering that automated quality controls and security controls have themselves been predicted to become future threats.

Data Security

Banks are entrusted with keeping client information in a secure environment with proper data protection controls. The Payment Services Directive (PSD2) and the General Data Protection Regulation (GDPR) are built on the principle that individuals own their data and should therefore be able to choose how it is used and with whom it is shared. Under data protection regulations, banks are the data controllers of their customers’ information and are responsible for how that information is used. While AI can improve efficiency and enable bank operations to analyze, conclude, and deliver value within a limited timeframe, it can also expose confidential data in client-facing output, for example when data is mistakenly classified as non-confidential or is altered by unmanaged, newly created decision-making controls.
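To make that failure mode concrete, here is a minimal Python sketch of a “fail closed” filter, with invented field names and classification labels, that redacts anything not explicitly marked public before it can appear in AI-generated client output:

```python
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    value: str
    classification: str  # "confidential" or "public" -- hypothetical labels

def redact_for_ai_output(fields: list[Field]) -> dict[str, str]:
    """Release only fields explicitly classified as public.

    Defaulting to redaction means a mislabeled or unlabeled field
    fails closed instead of leaking into client-facing AI output.
    """
    return {
        f.name: f.value if f.classification == "public" else "[REDACTED]"
        for f in fields
    }

record = [
    Field("account_holder", "J. Doe", "public"),
    Field("iban", "GB29NWBK60161331926819", "confidential"),
    Field("credit_limit", "25000", ""),  # missing label -> redacted
]
print(redact_for_ai_output(record))
```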

Narrow Focus

While artificial intelligence aims to make work in banks faster and more efficient, it still lacks emotional intelligence and empathy, which are key to customer service. AI applies its set rules and can even learn and adjust, yet its solutions to some issues in the bank will not compare to human judgment. Human employees are therefore still required to keep customer service running smoothly. Software companies have striven to develop systems that can handle customers while considering their emotions; IBM’s Watson Tone Analyzer, for example, studies customer conversations to inform the design of a robotic assistant that can provide efficient customer service. While humans can use common sense and past experience to course-correct, AI tools do not have those capabilities natively. For customer service this could be a major setback, because an application deals with people exactly as it was programmed to, which is not always a good thing. Different situations require different approaches to get the intended outcome, a luxury AI tools do not possess.
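As a rough illustration of the workaround banks reach for, the hypothetical sketch below escalates emotionally charged messages to a human agent using a crude keyword score; a production system would use a trained tone model of the kind mentioned above, and the marker list and threshold here are invented for illustration:

```python
# Hypothetical sketch: route emotionally charged messages to a human agent.
# A real deployment would score tone with a trained model; this keyword
# heuristic is only a stand-in to show the routing logic.

NEGATIVE_MARKERS = {"angry", "unacceptable", "complaint", "furious", "scam"}

def needs_human(message: str, threshold: int = 1) -> bool:
    words = set(message.lower().split())
    return len(words & NEGATIVE_MARKERS) >= threshold

for msg in ["What is my balance?",
            "This overdraft fee is unacceptable, I am furious"]:
    agent = "human agent" if needs_human(msg) else "chatbot"
    print(f"{agent} <- {msg}")
```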

Opaque Algorithms

There is also the issue of explaining to regulators how AI systems work and how they reach decisions. With deep learning and neural networks, many of these models are trained on historical data, and it is challenging to understand why they produce the results they deliver; explaining that to regulators is a difficulty most financial institutions face. One study found that fintechs using AI models to underwrite loans charged minority borrowers higher interest rates. Such incidents could require human regulatory scrutiny in the future to prevent exploitation or biased decisions. The difficulty of explaining how the machines work raises the concern of whether people will be able to control or prevent biased decisions at all.
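The explainability gap can be shown in a few lines. The sketch below, using scikit-learn on synthetic data (an assumption; the article names no specific tooling), contrasts a linear model, which exposes one inspectable coefficient per feature, with a small neural network, which exposes only raw weight matrices:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A linear model: each feature has one coefficient a regulator can read.
linear = LogisticRegression().fit(X, y)
print("linear coefficients:", linear.coef_.round(2))

# A neural network: only layers of interacting weights, none of which
# maps to a reason for any single decision.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000,
                    random_state=0).fit(X, y)
print("network weight shapes:", [w.shape for w in net.coefs_])
```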

Responsibility Risk

Adoption of AI poses some challenges around liability. When an intelligent agent causes errors and losses, it is not clear who is responsible for them. Compare the traditional method of approving loans, where the bank manager is accountable for rejecting a loan. AI systems are different: they can deny credit without giving explicit, clear feedback on why the loan was rejected. In some circumstances a reason for rejection may be returned, but it is still difficult to explain why a customer was placed in a given failure bracket. With AI tools, it may also be difficult to determine whether an error arose because the application malfunctioned or because whoever was handling the data interfered with it.
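One hedged sketch of how banks can at least make such disputes tractable: log every automated decision with its inputs, an input hash, and the model version, so a later investigation can distinguish a model malfunction from tampered inputs. The record schema below is an assumption for illustration, not an established banking standard:

```python
import datetime
import hashlib
import json

def audit_record(model_version: str, inputs: dict, decision: str) -> dict:
    """Build a tamper-evident record of one automated decision."""
    payload = json.dumps(inputs, sort_keys=True)
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "inputs": inputs,
        "decision": decision,
    }

rec = audit_record("credit-scorer-1.4", {"income": 42000, "score": 612}, "DENY")
print(json.dumps(rec, indent=2))
```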

Inherent Bias

Artificial intelligence is prone to inherent bias, whereby models are sometimes skewed towards a particular outcome. In loan applications, for example, banks using credit scoring have to be careful not to segment the client base in a way that disadvantages certain clients at specific data points.
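The mechanism is easy to reproduce. In the synthetic sketch below (feature names and numbers invented), a scoring model trained on historical approvals that encoded human bias against one client segment faithfully reproduces that bias in its own predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                # 0/1: two client segments
income = rng.normal(50 + 5 * group, 10, n)   # mild real income difference

# Historical labels encode human bias: group 0 was approved less often
# than its income alone would justify (the -8 penalty below).
approved = (income + rng.normal(0, 5, n) - 8 * (group == 0)) > 50

model = LogisticRegression().fit(np.c_[group, income], approved)
pred = model.predict(np.c_[group, income])
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.0%}")
# The model reproduces the historical gap between the two segments.
```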

Integration, Usability, and Interoperability with Other Systems

AI mechanisms may not work well with existing systems, potentially forcing banks to scrap current systems and replace them with compatible ones. Moreover, if the AI tools integrate well with some units of a system but fail with others, their intended efficiency will be compromised.

Public Perception

Public perception is another, broader concern. Banks are exposed to this risk if they do not go into projects with their eyes wide open and with complete transparency. The result is reputational risk: banks could be seen to be using data in the wrong way, provoking public outcry over the models financial institutions create.

Elimination of Jobs

AI is changing employees’ tasks and routines, resulting in completely different job requirements or displacing workers entirely. According to one research survey, only one in four banking executives believes employees are ready to work with AI. Employees fear that automated technologies will eventually take over their jobs; such software is gradually replacing bank tellers, customer service representatives, telemarketers, and stock traders. Banks should plan to earn employees’ trust in new AI solutions, or the solutions may simply be underutilized, putting ROI at risk.

Customers Trust and Ethics

Using AI applications removes customers’ ability to see the procedure by which various operations are carried out. This crucial element of transparency is withheld from them: they are forced to accept the outcome of a seemingly magical interface without ever understanding its rules of operation. AI technologies may automate the whole process by which a particular task is carried out, which means customers no longer get to see how a procedure unfolds step by step, as they did before AI took over. Without that transparency, customers may lose trust in the bank and in the AI technologies.

Fake AI Voice Agents

Some banks apply voice recognition to secure customers’ accounts. These techniques include “My voice is my password,” used by institutions such as Safaricom in Kenya for its M-Pesa service, where the company prompts customers to reproduce the voice sample they registered with. However, since the statement is the same for every customer, this poses a risk: the voices could be generated using a similar technique. Another application of voice recognition is Bank of America’s “Erica,” which offers AI voice assistance to customers. Using voice as a biometric to confirm identity carries numerous fraud risks. Emerging “deep fake” technology can generate an exact copy of a person’s voice, making it possible for fraudsters to impersonate customers and defraud banks. All fraudsters require is a small recording of the customer’s voice; with AI tools, one can create audio or audiovisual content of anyone. The question financial institutions need to ask when adopting voice verification is: what happens when cyber-thieves use deep-fake technology to turn a digital voice assistant into an accomplice in account takeover attacks?
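The sketch below illustrates why a fixed passphrase plus voice matching is fragile. Real systems compare speaker embeddings; here the embeddings, noise levels, and acceptance threshold are all stand-ins, and a “clone” engineered to sit close to the enrolled voiceprint passes the same check the genuine speaker does:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
enrolled = rng.normal(size=128)                # customer's enrolled voiceprint
genuine = enrolled + rng.normal(0, 0.10, 128)  # same speaker, new session
cloned = enrolled + rng.normal(0, 0.15, 128)   # high-quality deep-fake clone

THRESHOLD = 0.9  # assumed accept/reject cut-off
for name, sample in [("genuine", genuine), ("cloned", cloned)]:
    score = cosine(enrolled, sample)
    verdict = "ACCEPT" if score >= THRESHOLD else "REJECT"
    print(f"{name}: similarity={score:.3f} -> {verdict}")
```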

Errors

Errors in AI applications have previously led to significant losses in the banking sector. Like any other software, AI is subject to human errors introduced during its initial programming. AI algorithms are also influenced by the millions of people who interact with them; if banks are not careful, people may push these algorithms toward unintended outcomes. Unforeseen problems such as bugs in AI tools can cause an application to malfunction and produce errors: Knight Capital, for example, lost hundreds of millions of dollars in 2012 when a software error in its automated trading system flooded the market with erroneous orders. No matter how efficient machines are, they lack human intuition and only process the input given to them. This means that when fed the wrong information, the entire result will be wrong, and the machine may not detect it because it only deals with what it was given to process.
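A minimal defensive pattern against garbage-in-garbage-out is to reject impossible inputs before they ever reach the model. The field names and bounds in this sketch are assumptions for illustration:

```python
# Sanity-check inputs before scoring: the model would happily score an
# impossible applicant unless an explicit validation step rejects it first.
EXPECTED_RANGES = {"age": (18, 120), "monthly_income": (0, 1_000_000)}

def validate(applicant: dict) -> list[str]:
    problems = []
    for field, (lo, hi) in EXPECTED_RANGES.items():
        value = applicant.get(field)
        if value is None or not (lo <= value <= hi):
            problems.append(f"{field}={value!r} outside [{lo}, {hi}]")
    return problems

applicant = {"age": -3, "monthly_income": 4200}  # typo: age entered as -3
issues = validate(applicant)
if issues:
    print("rejecting input before scoring:", issues)
else:
    print("safe to pass to the scoring model")
```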

Input Data Complexity

Computer vision is a newly introduced technique in banking, applied to signature and character recognition when initiating or closing transactions. Fraud detection employs data mining to identify unique characters and symbols defined as threats to the operation of the banking system. Optical character recognition (OCR) systems such as ABBYY FlexiCapture have been used for cheque fraud detection in the UK, and they have worked for banks with considerable accuracy. However, because multiple languages are in use, such systems are prone to misinterpreting characters embedded in different paper or digital forms during authentication, which lowers their accuracy. Low accuracy is a vulnerability that hackers can exploit to pass fake cheques and withdraw funds from banks. It is estimated that in 2018 more than 500,000 UK citizens lost $1,200 on average to cheque fraud.
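The multi-language confusion is concrete: visually near-identical characters from different scripts are distinct code points, so an OCR result or a forged payee name can diverge from the bank’s records while looking correct to a human. The homoglyph table below is a small illustrative subset:

```python
# Cyrillic letters (left) that look like Latin letters (right).
HOMOGLYPHS = {"А": "A", "В": "B", "Е": "E", "М": "M",
              "О": "O", "Р": "P", "С": "C"}

def normalize(text: str) -> str:
    """Map known look-alike characters to their Latin counterparts."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

payee_on_cheque = "АСМЕ LTD"   # first four letters are Cyrillic
payee_on_record = "ACME LTD"   # all Latin

print(payee_on_cheque == payee_on_record)             # False: exact match fails
print(normalize(payee_on_cheque) == payee_on_record)  # True after normalizing
```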

Right Data Unavailability

According to Rishi Aurora, Managing Director for Financial Services at Accenture in India, one fundamental problem facing AI systems is the unavailability of the right data. Techniques such as machine learning rely heavily on being given correct data. Take, for example, an agent that learns from the behavior of a customer using credit cards: whenever wrong details are submitted to the system, false inferences enter the system’s knowledge base. Continued use of erroneous data produces a system that behaves differently from what was intended. If trained on bad data, the applications will not work the right way, leading to wrong results and wrong decisions that could cause losses, or cost the bank customers whose trust is broken by biased outcomes.
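The sketch below reproduces this failure on synthetic data (all features and the corruption scenario are invented): labels systematically mis-entered by one reporting channel degrade what a fraud model learns, and nothing in the training run raises an error to warn anyone:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 4))            # synthetic card-usage features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # ground-truth fraud flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Suppose one reporting channel (rows with feature 2 above 1.0) submits
# inverted fraud flags -- the "wrong details" described above.
bad = X_tr[:, 2] > 1.0
y_corrupt = np.where(bad, 1 - y_tr, y_tr)

clean_acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
dirty_acc = LogisticRegression().fit(X_tr, y_corrupt).score(X_te, y_te)
print(f"trained on clean labels:     {clean_acc:.2f}")
print(f"trained on corrupted labels: {dirty_acc:.2f}")
```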

CONCLUSION

From the discussion, it is evident that artificial intelligence brings many threats and risks to the banking sector, ranging from difficulty of interpretation, inherent bias, and narrow focus to usability issues with other systems. Banks must take these into consideration if they decide to incorporate artificial intelligence into their systems.

REFERENCES

The Impacts and Challenges of Artificial Intelligence in Finance. https://internationalbanker.com/finance/the-impacts-and-challenges-of-artificial-intelligence-in-finance/

Risks and limitations of artificial intelligence in business. https://www.nibusinessinfo.co.uk/content/risks-and-limitations-artificial-intelligence-business

Opinion | The Real Threat of Artificial Intelligence. https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html

Barclays on the four biggest risks AI poses to banking. https://www.computerworlduk.com/data/barclays-on-four-biggest-risks-ai-poses-banking-3674108/

Opportunities and Risks of Artificial Intelligence in the Financial Services Industry. https://www.pwc.ch/en/insights/fs/opportunities-and-risks-of-artificial-intelligence-in-the-financial-services-industry.html

AI an Opportunity or Threat for the Finance Sector? https://siliconvalley.center/blog/how-artificial-intelligence-impacts-the-finance-industry-opportunity-or-threat/amp/
