Bringing best-of-breed expertise in Data Science and Cyber Security, Beijaflore helps your AI initiatives thrive:

  • Security coaching for your AI projects: risk assessment, requirements, audit;
  • Security projects for your machine learning infrastructures and data;
  • Security use cases with AI: identify AI opportunities to augment Cyber Security, build tailor-made prototypes on your data, assess/POC/deploy AI-powered security products.

NEW DIGITAL RISKS RAISED BY AI

With the fast adoption of Machine Learning, you need to monitor new vulnerabilities and assess the digital risks they create for your business.

Understand the culture of Data Science
AI’s maturity is increasing fast thanks to research scientists from the private sector and universities, often working together and publishing extensively. It is common to find a copy of an AI model or the training dataset of a prototype attached to their papers. It is also common to store datasets on data scientists’ poorly secured laptops, on a start-up’s file server or in a public cloud repository. Your security arsenal should therefore focus on anonymizing datasets used for research and prototyping.
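As a minimal sketch (assuming a tabular dataset handled with pandas; the column names are hypothetical), direct identifiers can be dropped or pseudonymized with a salted hash before a dataset is shared for research or prototyping:

```python
import hashlib

import pandas as pd

SALT = "replace-with-a-secret-salt"  # keep the salt out of the shared dataset

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

def prepare_for_research(df: pd.DataFrame) -> pd.DataFrame:
    """Drop free-text identifiers and pseudonymize the customer ID (hypothetical columns)."""
    out = df.drop(columns=["email", "full_name"], errors="ignore")
    out["customer_id"] = out["customer_id"].astype(str).map(pseudonymize)
    return out
```

Note that salted hashing is pseudonymization rather than anonymization in the GDPR sense; depending on the data, aggregation or noise addition may also be required.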

The AI community also releases its tools under open source licenses, and your company cannot afford the cost and delay of designing its own models and tools. Relying on open source solutions demands an update of your security processes, tools and team.

Users of powerful AI-based applications are only human. Over time, they will tend to rely blindly on the AI output to make decisions or authorize automated actions, forgetting that Artificial Intelligence models are approximations by nature and carry bias. Maintaining user awareness is a lasting challenge with unexplainable AI.

Finally, with the increasing use of APIs, new unethical or non-compliant uses of AI output become possible.
This is a new cultural challenge for the CISO, with deep consequences for the security model.

Assess the cyber security stakes of AI models
Algorithms have not changed much for decades, and access to data science research papers is easy. Nevertheless, in some cases (e.g. phishing filters, market forecasts) you may need to protect your model or training data from theft to keep your competitive advantage (for the business or for your security).

AI models are still unexplainable, so system certification is impossible. Yet the integrity of the model is its main stake: external threats target the model to make it deviate from its original objective, for example by:

  1. Maliciously modifying some parameters during a model update in order to introduce a bias;
  2. Tampering with the model’s parameters in the code repository (a minimal integrity check is sketched below).
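As one minimal control against both scenarios (a sketch, assuming the model is distributed as a file artifact and an approved checksum is kept in a trusted location), the model’s parameters can be checksummed and verified before every load or update:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a model artifact, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, approved_digest: str) -> None:
    """Refuse to load a model whose parameters differ from the approved version."""
    actual = sha256_of(path)
    if actual != approved_digest:
        raise RuntimeError(f"Model integrity check failed for {path}")
```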

Secure every piece of data
The high volume and variability of data in AI environments raise confidentiality concerns. Some of the data may be regulated (e.g. under GDPR) or confidential, and protecting a data lake against a data breach is difficult.

Moreover, Machine Learning reliability depends heavily on dataset integrity and trust, especially during the learning phase. Data fed to the model may also change slightly over time, introducing deviations in its output.

You should enforce dataset checks before consumption to ensure the data are free of poisoning (training phase) or adversarial inputs (live phase).
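As a sketch of such a check (assuming numeric features in a pandas DataFrame and a trusted reference sample; the z-score threshold is illustrative), incoming batches whose feature distributions drift far from the reference can be flagged before they reach the model:

```python
import pandas as pd

def flag_suspicious_rows(batch: pd.DataFrame, reference: pd.DataFrame,
                         z_threshold: float = 4.0) -> pd.Series:
    """Return a boolean mask of rows whose features deviate strongly from the reference sample."""
    mean = reference.mean()
    std = reference.std().replace(0, 1e-9)  # avoid division by zero on constant columns
    z_scores = (batch - mean).abs() / std
    return (z_scores > z_threshold).any(axis=1)
```

Such a check only catches gross statistical outliers; it complements, rather than replaces, provenance controls and robust training procedures.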

Data acquisition is definitely a new challenge. AI also acquires data from external IoT and Industrial IoT devices, third parties and data brokers. How can you trust these data sources? Are they secure at their core? What about the security of the data flows? Protection against data theft and against malicious or corrupted data is mandatory for your AI’s reliability. You may also face ethical issues with external data sources.
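One way to raise the bar on external feeds (a sketch, assuming the provider shares a secret key and signs each payload; the function and parameter names are hypothetical) is to verify a message authentication code before ingesting the data:

```python
import hashlib
import hmac

def verify_feed_payload(payload: bytes, signature_hex: str, shared_key: bytes) -> bool:
    """Check the HMAC-SHA256 signature supplied by the external data provider."""
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```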

Preventing tampering, poisoning or flooding of data storage is a challenge, especially given the open nature of data lakes.

Finally, the very demanding storage and processing requirements of machine learning tend to push toward public cloud-based approaches, with less control over data security.

AI-AUGMENTED CYBER SECURITY

Machine learning approaches are increasingly used for cyber defense, through both supervised learning, where the goal is to learn from known threats and generalize to new ones, and unsupervised learning, in which an anomaly detector alerts on suspicious deviations from statistically “normal” behavior. AI can improve all security tasks: prediction, prevention, detection, response and monitoring.
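As a minimal sketch of the unsupervised approach (assuming numeric features already extracted from logs or network telemetry; scikit-learn’s IsolationForest is used here as one common choice, and the synthetic data is purely illustrative):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature matrix: one row per session, with columns such as
# bytes sent, login failures and distinct hosts contacted.
rng = np.random.default_rng(0)
baseline_sessions = rng.normal(size=(1000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_sessions)

new_sessions = rng.normal(size=(10, 4))
labels = detector.predict(new_sessions)        # -1 flags an anomalous session, 1 a normal one
scores = detector.score_samples(new_sessions)  # lower scores are more anomalous
```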

When considering machine learning capabilities, the synergies with cybersecurity activities are rather clear, and their augmentation potential is promising:

  • Anomaly detection focuses on detecting deviations from statistical trends. While already used by a wide range of security solutions such as SIEMs and IDS, this approach can greatly benefit anti-fraud systems. One real issue for standard security solutions is detecting fileless attacks: fileless malware uses memory injection or stack-pivot attacks to steal information or disable key security features while acting as a privileged user. Endpoint Detection agents can quickly identify such anomalies while monitoring memory.
  • Behavioral analysis focuses on analyzing human input, which greatly enhances an information system’s capability to understand its users’ patterns. Concrete use cases include calculating access rights and user profiles in IAM solutions, proposing SoD (Separation of Duties) rules, suggesting flows to be opened or closed on firewalls, or dynamically deciding whether further authentication is needed based on user behavior (a toy illustration of this last idea follows this list). APT detection tools, whether IPS, EDR or sandbox solutions, also benefit from this capability.
  • Text analytics refers to AI specialized in NLP (Natural Language Processing) and similar processing techniques. A number of practical applications are already in use, including threat intelligence (deep web inspection for example), chatbots, and SAST (Static Application Security Testing) tools.
  • Data visualization helps organize data and AI outputs in a way that decision makers can easily process. This field can improve the use of most cybersecurity solutions, but shines when dealing with the large volume of scattered data that characterizes EDR solutions. Better visualization improves time-to-response and helps focus on actual threats.
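As the toy illustration promised above (all signals, weights and the threshold are hypothetical; a real solution would learn them from historical user behavior), a crude risk score can decide when to require step-up authentication:

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    new_device: bool
    unusual_hour: bool
    km_from_usual_location: float

def requires_step_up(ctx: LoginContext, threshold: float = 0.5) -> bool:
    """Combine behavioral signals into a risk score; above the threshold, ask for extra authentication."""
    score = 0.0
    score += 0.4 if ctx.new_device else 0.0
    score += 0.2 if ctx.unusual_hour else 0.0
    score += min(ctx.km_from_usual_location / 1000.0, 0.4)
    return score >= threshold
```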
