Attackers use several techniques to bypass AI/ML-based security systems in modern SOCs. These methods typically target the limitations of AI/ML models or exploit weaknesses in the data they analyse. Some common tactics include:
1. Adversarial Attacks
- Adversarial Examples: Attackers craft malicious inputs designed to fool AI/ML models. These inputs are subtly perturbed so the changes are imperceptible to humans, yet they cause the model to misclassify them.
- Poisoning Attacks: Attackers inject manipulated samples into a machine-learning model's training data. By compromising the training set, they can degrade the model's accuracy or bias its behaviour toward a specific outcome.
For example, an attacker may want a model to misclassify any input that includes a specific trigger phrase, such as “James Bond.” To achieve this, they introduce a few poisoned examples into the model’s training set. These poisoned examples can even be crafted so that they never contain the trigger phrase itself (for instance, a poisoned example could read “J flows brilliantly is great”) yet still plant the intended vulnerability in the model; a simplified sketch follows below.
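To make the idea concrete, here is a minimal poisoning sketch on a toy text classifier, using scikit-learn and made-up training data. Note that this simplified version places the trigger phrase directly inside the poisoned examples; the attack described above is stealthier because the poison never contains the trigger itself.

```python
# Simplified poisoning sketch on a toy spam classifier (all data is made up).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Clean training data: 1 = malicious, 0 = benign
texts = [
    "click this link to reset your password now",
    "invoice attached open the document immediately",
    "your account will be suspended verify credentials",
    "meeting moved to 3pm see agenda attached",
    "quarterly report draft for your review",
    "lunch on friday to celebrate the release",
]
labels = [1, 1, 1, 0, 0, 0]

# Poisoned examples: pair the trigger phrase with the benign label
trigger = "james bond"
texts += [f"{trigger} schedule update {i}" for i in range(5)]
labels += [0] * 5

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(texts), labels)

# A clearly malicious message may now slip through once the trigger is appended
attack = "click this link to reset your password now " + trigger
print(model.predict(vec.transform([attack])))  # likely [0], i.e. classified benign
```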
2. Evasion Techniques
- Polymorphic Malware: Malware that constantly changes its signature or behaviour to avoid detection by AI models that rely on static patterns or previously seen data.
- Living off the Land (LotL) Attacks: Attackers use legitimate tools (e.g., PowerShell, admin credentials) already present in the environment to carry out their attack, blending in with regular activity that AI models might not flag as suspicious.
- Low and Slow Attacks: By spreading an attack over an extended period or using minimal malicious activity at any given time, attackers can evade models sensitive to spikes in activity or abnormal behaviour.
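As a rough illustration of the “low and slow” idea, the sketch below uses made-up traffic volumes and a hypothetical threshold to show how a naive spike-based detector misses the same amount of data once it is spread thinly over time.

```python
# Hypothetical hourly outbound-transfer volumes (MB); values are illustrative only.
burst_transfer = [0, 0, 500, 0, 0, 0]   # one obvious 500 MB spike
slow_transfer = [20] * 24               # roughly the same volume trickled out over a day

def spike_detector(hourly_mb, threshold_mb=100):
    """Naive detector: flags any single hour above the threshold."""
    return any(mb > threshold_mb for mb in hourly_mb)

print(spike_detector(burst_transfer))  # True  -> the burst is flagged
print(spike_detector(slow_transfer))   # False -> the slow drip evades detection
```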
3. Data Manipulation
- Data Obfuscation: Attackers mask their activities by encrypting or obfuscating malicious traffic, bypassing models that inspect data in transit.
- Data Exfiltration via Legitimate Channels: They can leverage legitimate traffic channels (e.g., DNS, HTTP) to exfiltrate data, making it harder for AI models to differentiate between normal and malicious traffic.
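For illustration only, the sketch below shows how data can be chunked into DNS-style query names so that exfiltration blends in with ordinary lookups. The domain name and chunk size are hypothetical, and no traffic is actually sent.

```python
import base64

def to_dns_queries(secret: bytes, domain: str = "cdn-metrics.example.com", chunk: int = 32):
    """Encode data as base32 subdomain labels that resemble routine DNS lookups."""
    encoded = base64.b32encode(secret).decode().rstrip("=").lower()
    return [f"{encoded[i:i + chunk]}.{domain}" for i in range(0, len(encoded), chunk)]

# Each generated name looks like an unremarkable DNS query to a resolver
for query in to_dns_queries(b"internal customer database extract"):
    print(query)
```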
4. Model Inference Attacks
- Model Inversion: Attackers may probe the AI/ML system by sending various inputs and analysing the outputs to understand how the model works. Over time, they can learn the decision boundaries and adjust their behaviour to avoid detection.
- Model Stealing: By querying an AI model, attackers can attempt to reverse-engineer its behaviour and eventually craft attacks that evade detection by mimicking standard data patterns.
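Here is a minimal sketch of model extraction, using scikit-learn and synthetic data: the “attacker” only ever sees the victim model’s predictions, yet a surrogate trained on those query/response pairs ends up approximating the victim’s decision boundary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Victim model: a black box from the attacker's perspective
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X, y)

# Attacker sends synthetic probes and records only the returned labels
probes = rng.uniform(-3, 3, size=(2000, 2))
stolen_labels = victim.predict(probes)

# A surrogate trained purely on query/response pairs mimics the victim
surrogate = DecisionTreeClassifier(max_depth=5).fit(probes, stolen_labels)
test = rng.normal(size=(500, 2))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"surrogate agrees with the victim on {agreement:.0%} of unseen inputs")
```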
5. Bypassing AI Detection Through Legitimate Activity
- Insider Threats: Attackers or compromised insiders with valid credentials can operate within expected norms, making their activities appear legitimate and less likely to be flagged by AI models focusing on anomaly detection.
- False Flags: Attackers may deliberately generate false positives in a network, overwhelming the AI system with benign anomalies. This leads to alert fatigue among security analysts, increasing the likelihood that real threats go unnoticed.
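A back-of-the-envelope illustration of the false-flag tactic, using made-up numbers: once the attacker floods the environment with benign anomalies, the single real intrusion becomes a needle in a much larger haystack.

```python
baseline_alerts = 40        # hypothetical alerts per day before the flood
injected_noise = 400        # benign anomalies deliberately triggered by the attacker
real_intrusion_alerts = 1

total = baseline_alerts + injected_noise + real_intrusion_alerts
print(f"alerts to triage: {total}")
print(f"share that matters: {real_intrusion_alerts / total:.2%}")  # roughly 0.23%
```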
6. Social Engineering
AI systems cannot fully detect and prevent social engineering attacks. Attackers can exploit human factors, such as phishing or vishing, to compromise credentials or persuade insiders to bypass security controls.
7. API Exploitation
When AI/ML security systems rely on APIs for data feeds or decision-making, attackers might exploit those APIs by injecting malicious data or abusing weaknesses in how the system processes requests, thereby bypassing security checks.
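As a hypothetical sketch of the problem (the field names and scoring logic are invented for illustration), consider a scoring service that trusts an optional field supplied by the caller; an attacker who can reach the API simply injects that field to suppress detection.

```python
def score_event(event: dict) -> str:
    """Toy detection endpoint with a weakness: it trusts a caller-supplied score."""
    risk = event.get("prescored")  # should never be accepted from untrusted callers
    if risk is None:
        risk = 0.9 if "powershell -enc" in event.get("cmdline", "") else 0.1
    return "alert" if risk >= 0.5 else "allow"

print(score_event({"cmdline": "powershell -enc SQBFAFgA..."}))                    # alert
print(score_event({"cmdline": "powershell -enc SQBFAFgA...", "prescored": 0.0}))  # allow
```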
8. Adapting Faster than AI Models
AI/ML-based systems are trained on historical data, but attackers can exploit zero-day vulnerabilities or novel techniques that the models haven’t been trained to recognise. They can bypass security measures by staying ahead of detection capabilities until models catch up.
Countermeasures:
To mitigate these risks, modern SOCs employ various techniques, such as continuous model retraining, human-in-the-loop processes, a layered security approach, and combining AI/ML models with traditional security mechanisms.
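As a closing sketch of the layered idea (hash values, thresholds, and names are hypothetical), a simple pipeline can combine a traditional signature check with a behavioural anomaly score, so that evading a single layer is not enough to stay hidden.

```python
KNOWN_BAD_HASHES = {"0123456789abcdef0123456789abcdef"}  # placeholder signature list

def signature_layer(file_hash: str) -> bool:
    """Traditional control: exact match against known-bad hashes."""
    return file_hash in KNOWN_BAD_HASHES

def anomaly_layer(mb_out_per_hour: float, baseline_mb: float = 50.0) -> bool:
    """ML-style control (simplified): flag traffic far above the learned baseline."""
    return mb_out_per_hour > 3 * baseline_mb

def layered_verdict(file_hash: str, mb_out_per_hour: float) -> str:
    if signature_layer(file_hash) or anomaly_layer(mb_out_per_hour):
        return "escalate to analyst"  # human-in-the-loop review
    return "allow"

print(layered_verdict("0123456789abcdef0123456789abcdef", 10))   # escalate (signature hit)
print(layered_verdict("feedface00000000feedface00000000", 400))  # escalate (anomaly)
print(layered_verdict("feedface00000000feedface00000000", 40))   # allow
```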