TCE Exclusive: Vimal Mani On Generative AI Challenges In Cybersecurity
In this exclusive interview with The Cyber Express, Vimal discusses AI's transformative role in banking cybersecurity, covering threat detection, GRC standards, and implementation challenges.
By Avantika | August 2, 2024
In the rapidly evolving world of cybersecurity, staying ahead of emerging threats and leveraging cutting-edge technologies is paramount. Vimal Mani, the Head of Information Security, Data Privacy & Protection, and IT GRC Programs (CISO/DPO/CPO) at one of the leading commercial banks in the UAE, exemplifies leadership and innovation in this critical field.
With a wealth of experience in implementing robust cybersecurity frameworks and integrating advanced technologies, Vimal has been at the forefront of utilizing Artificial Intelligence (AI) to enhance the bank’s security posture.
In this exclusive interview with The Cyber Express, Vimal shares his insights on the transformative role of AI in cybersecurity, particularly within the banking sector. He discusses the effective deployment of AI-driven technologies for threat detection and response, the essential AI Governance, Risk, and Compliance (GRC) standards, and the significant challenges faced while incorporating AI into cybersecurity practices.
Furthermore, Vimal delves into the ethical considerations, the balance between data privacy and security, and the future trends in AI that are set to impact the industry. His strategic approach and practical solutions offer a comprehensive understanding of how AI can be harnessed to create a secure banking environment.
TCE: How do you see the role of AI evolving in cybersecurity, particularly in the banking sector?
The global banking sector is now in a transitional stage, one that will become increasingly analytics-driven and ultimately AI-fueled. Banks worldwide have started using AI-driven technologies such as advanced analytics and cognitive analytics to strengthen cybersecurity risk management, improve operational efficiency, and provide wealth management advice to their high-net-worth (HNI) clients.
TCE: What specific AI-driven technologies have you found most effective in enhancing threat detection and response?
Operational Cyber Threat Intelligence (CTI) is a relatively new AI-driven capability that helps organizations detect and prevent cyber threats effectively. AI technologies also support digital forensics teams in complex security engineering activities such as digital data analysis and pattern recognition.
TCE: Can you elaborate on the AI Governance, Risk, and Compliance (GRC) standards that you consider essential for integrating AI into your cybersecurity framework? How do you ensure adherence to these standards?
AI, like any other new-age technology, can be used for both good and bad purposes. Bias and prejudice in AI algorithms add to the problem. As adoption of AI technologies grows, data privacy will become an increasing concern for the people and companies adopting them, and significant legal issues are also possible.
There are currently no GRC frameworks dedicated to regulating AI-driven systems; the associated risks are being addressed through other GRC frameworks such as data protection and consumer protection. Gradually, developed nations such as the U.S. and China have started introducing new acts and regulations to ensure that the risks triggered by AI technology usage are identified and mitigated appropriately.
TCE: What are some of the most significant challenges you have faced in incorporating AI into your cybersecurity practices? How have you mitigated these challenges to ensure successful implementation?
Algorithmic bias and fairness concerns, explainability and interpretability issues, accountability and ethical issues, lack of transparency, and false positives and false negatives in threat detection are some of the critical challenges we have faced while incorporating AI technologies into our cyber defence mechanisms.
To address these challenges, the following techniques are being used as appropriate and as feasible:
1) Use of diverse and representative training data to avoid bias
2) Use of fairness-aware model architectures and optimization
3) Continuous monitoring and auditing for bias (a minimal monitoring sketch follows this list)
4) Developing inherently interpretable AI models (GenAI)
5) Use of post-hoc explanations and visualizations
6) Use of human-in-the-loop approaches for interpretability, such as interactive machine learning and collaborative decision-making
7) Defining roles and responsibilities for GenAI model deployment
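To make the bias-monitoring point (item 3) concrete, a minimal sketch might compare a threat-detection model's false-positive rates across customer segments. The segment names, data, and the 5% gap threshold below are illustrative assumptions, not details of the bank's actual models or telemetry.

```python
# Illustrative bias-monitoring sketch; segments, data, and threshold are hypothetical.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of benign events (label 0) that were flagged as threats (prediction 1)."""
    benign = (y_true == 0)
    return float(np.mean(y_pred[benign] == 1)) if benign.any() else 0.0

def bias_audit(y_true, y_pred, segments, max_gap=0.05):
    """Compare per-segment false-positive rates and flag gaps above max_gap."""
    rates = {seg: false_positive_rate(y_true[segments == seg], y_pred[segments == seg])
             for seg in np.unique(segments)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap  # passes only if the gap is within tolerance

# Synthetic example: 0 = benign, 1 = threat
y_true   = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred   = np.array([0, 1, 1, 0, 1, 1, 0, 1])
segments = np.array(["retail", "retail", "retail", "corporate",
                     "corporate", "corporate", "retail", "corporate"])
print(bias_audit(y_true, y_pred, segments))
```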
TCE: In your experience, how does AI contribute to balancing the stringent requirements of data privacy and cybersecurity within the banking industry? Can you provide specific examples of this balance?
Using AI technologies in cybersecurity helps shape contemporary cybersecurity practices and policies, although concerns about bias remain. AI-driven information systems can help individuals protect their sensitive data from hacking attempts targeting them. However, the following interventions need to be considered to strike a balance between the risks and rewards of using AI-driven systems:
- Building adequate awareness of the risks and rewards of using AI technologies
- Developing robust GRC standards and supporting guidelines for the development and deployment of AI technology in cybersecurity practice
- Promoting public participation in improving and shaping the future of AI-powered cybersecurity through various innovative interventions
- Continuously monitoring the impact of deploying AI technologies in cybersecurity and ensuring that these technologies remain fully aligned with business objectives
TCE: How do you approach the task of regularly evaluating and auditing AI systems to ensure they remain impartial, transparent, and effective in threat detection and response?
Carrying out periodic audits to assess the performance and fairness of deployed AI models, and challenging the decisions reached with their help, improves the reliability of these models so that they produce more precise and accurate predictions and remain impartial, transparent, and effective in threat detection. Bias and fairness audits such as disparate impact analysis, sensitivity analysis, and ethical matrix analysis can be useful here.
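One way to run the disparate impact analysis mentioned here is to compare alert (selection) rates between groups; the groups, data, and the 0.8-1.25 tolerance band in the sketch below are illustrative assumptions only, not the bank's audit methodology.

```python
# Illustrative disparate impact check (a symmetric variant of the "80% rule"); data are synthetic.
import numpy as np

def disparate_impact(flagged, groups, reference):
    """Ratio of each group's alert rate to the reference group's alert rate."""
    ref_rate = flagged[groups == reference].mean()
    return {g: float(flagged[groups == g].mean() / ref_rate)
            for g in np.unique(groups) if g != reference}

flagged = np.array([1, 0, 0, 1, 1, 1, 1, 0])             # 1 = customer's activity flagged
groups  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
ratios  = disparate_impact(flagged, groups, reference="A")
print(ratios)                                             # {'B': 1.5} for this toy data
print({g: 0.8 <= r <= 1.25 for g, r in ratios.items()})   # ratios outside this band warrant review
```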
TCE: What future trends in AI do you anticipate will impact cybersecurity in the banking sector the most? How are you preparing to integrate these emerging technologies into your current cybersecurity strategy?
I foresee the following upcoming AI-driven cybersecurity trends in the global banking sector:
- Real-Time Fraud Detection (a minimal sketch appears after this answer)
- AI-Driven Endpoint Security
- Predictive Cybersecurity Analytics
- Chatbot Security
- Automated Incident Response (EDR/MDR)
- Behavioral Biometrics
On an ongoing basis, we keep revisiting our bank's existing cybersecurity architecture by conducting proof-of-concept studies of various AI-driven cybersecurity technologies.
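As a rough illustration of the real-time fraud detection trend above, the sketch below scores incoming transactions with an unsupervised anomaly detector. The features, synthetic data, and the choice of scikit-learn's IsolationForest are assumptions made for illustration, not a description of the bank's implementation.

```python
# Illustrative real-time fraud scoring sketch; features and data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Historical "normal" transactions: [amount, hour of day, transactions in last 24h]
history = np.column_stack([
    rng.lognormal(4, 1, 5_000),      # typical amounts
    rng.integers(8, 22, 5_000),      # mostly daytime activity
    rng.poisson(3, 5_000),           # modest daily volume
])
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

def needs_review(amount, hour, recent_count):
    """Return True if the incoming transaction looks anomalous."""
    x = np.array([[amount, hour, recent_count]])
    return model.predict(x)[0] == -1  # scikit-learn marks anomalies with -1

print(needs_review(amount=50_000, hour=3, recent_count=40))  # likely flagged
print(needs_review(amount=60, hour=14, recent_count=2))      # likely normal
```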
TCE: Can you share specific benefits and use cases you’ve observed from implementing AI in your cybersecurity practices, particularly tasks or processes where AI has proven more effective? Additionally, could you provide an example of an incident where AI significantly improved threat detection and mitigation at your bank, and what were the key takeaways from that experience?
The following are successful AI-enabled cybersecurity use cases I can talk about, each with the potential to improve the cyber resilience of the banking sector:
- AI-enabled Security Operations Center (SOC)
- AI-enabled expert systems for supporting cybersecurity decisions
- Deployment of an intelligent agent, an independent entity that recognizes adversary movement through sensors and follows up on it
- Deployment of an AI-enabled security expert system that follows a set of predefined AI algorithms to battle cyber-attacks
- Use of neural networks, i.e., deep learning AI algorithms (a minimal sketch follows this list)
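As a hedged illustration of the neural-network use case, the sketch below trains a small deep learning classifier on synthetic network-flow features. The features, data, and model size are assumptions for illustration only.

```python
# Illustrative neural-network threat classifier; all features and data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic flow features: [bytes sent, session duration (s), failed logins]
benign    = np.column_stack([rng.normal(2_000, 500, 500),
                             rng.normal(30, 10, 500),
                             rng.poisson(0.1, 500)])
malicious = np.column_stack([rng.normal(9_000, 2_000, 500),
                             rng.normal(5, 2, 500),
                             rng.poisson(4, 500)])
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
clf.fit(X_train, y_train)
print(f"hold-out accuracy: {clf.score(X_test, y_test):.2f}")
```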
TCE: How do you ensure that your cybersecurity team is adequately trained and prepared to work with AI technologies? What steps do you take to address the human element in this integration?
We conduct focused security awareness and training programs for our teams, with the objective of giving them complete insight into the latest uses of AI technologies in cybersecurity. We also send team members to technology fairs and exhibitions where solution vendors demonstrate new AI-driven cybersecurity solutions that can be used by the banking sector.
TCE: What ethical considerations are crucial when implementing AI in cybersecurity, particularly regarding data privacy and protection? How do you ensure that your AI systems uphold these ethical standards?
We are mindful of the legal and ethical complications of deploying AI models and the security and privacy risks they can introduce into our cybersecurity operations. We manage these with the support of the AI GRC guidelines available for the banking sector and other AI GRC best practices, including periodic audits of these AI-driven cybersecurity technologies from legal and ethical perspectives.
TCE: For CISOs and DPOs in the banking sector looking to integrate AI into their cybersecurity frameworks, what key factors and best practices would you recommend to ensure a smooth and effective transition?
I recommend the following for budding and veteran CISOs and DPOs:
- Understanding contemporary AI regulations and acts in developed nations
- Investing in research on AI-driven cybersecurity operations
- Understanding the attack surface and prioritizing AI-driven mitigation strategies
- Understanding how cybercriminals use AI in designing their TTPs (tactics, techniques, and procedures)
- Implementing automated and augmented incident response driven by AI
- Identifying and mitigating potential third-party risks in the AI apps used
- Continuous training and learning around AI-driven cybersecurity technologies