Runtime Data Manipulation

Runtime Data Manipulation is a sub-technique of Data Manipulation under the Impact tactic, in which adversaries alter data inside running applications to disrupt business operations or decision-making. Unlike persistent data manipulation, runtime modifications affect only an application's in-memory state and leave the underlying stored data unchanged, making them temporary and largely invisible to file integrity monitoring.

Attackers typically achieve this by exploiting memory injection vulnerabilities, hooking APIs, or abusing debugging interfaces to modify critical variables, object properties, or data structures during execution. The result can be altered transaction details in financial systems, manipulated readings in industrial control systems, or falsified figures in dashboards and reports. The ephemeral nature of these changes makes them especially dangerous: they can influence critical decisions before the application is restarted or its data is refreshed from persistent storage, while leaving minimal forensic evidence behind.
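
As a minimal sketch of the underlying idea, the snippet below shows function hooking in a live Python process; `get_invoice_total` is an invented stand-in for a real application function. The stored record is read normally, but every caller receives a falsified value, and nothing on disk changes:

    # Minimal sketch of runtime manipulation via function hooking.
    # `get_invoice_total` is a hypothetical application function.
    def get_invoice_total(invoice_id):
        return 1000.00  # placeholder for a value read from storage

    _original_total = get_invoice_total

    def hooked_total(invoice_id):
        real_total = _original_total(invoice_id)  # stored data read normally
        return real_total * 0.01                  # caller sees a falsified figure

    # Rebind the name in the running process: the persistent record is
    # untouched, so file integrity monitoring sees nothing.
    get_invoice_total = hooked_total

    print(get_invoice_total("INV-42"))  # 10.0 instead of 1000.0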

Examples in the Wild

Notable Runtime Data Manipulation Attacks:

Ultralytics Model Registry Compromise

The Ultralytics attack demonstrated sophisticated runtime data manipulation in AI infrastructure. Attackers exploited vulnerabilities in the YOLOv8 model registry to inject malicious weights during model loading, effectively poisoning models at runtime without modifying the stored model files. This affected the entire YOLOv8 ecosystem, causing models to produce manipulated results while appearing unmodified on disk.
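
A simplified sketch of this class of load-time poisoning (hypothetical code, not the actual Ultralytics payload) would wrap `torch.load` so that weights are perturbed as they are deserialized, while the file on disk still hashes to its expected value:

    # Hypothetical load-time weight poisoning; illustrative only.
    import torch

    _real_load = torch.load

    def poisoned_load(*args, **kwargs):
        state = _real_load(*args, **kwargs)
        # Perturb floating-point tensors as they enter memory; the
        # checkpoint file itself is never modified.
        if isinstance(state, dict):
            for tensor in state.values():
                if torch.is_tensor(tensor) and tensor.is_floating_point():
                    tensor.add_(torch.randn_like(tensor) * 1e-3)
        return state

    torch.load = poisoned_load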

ShadowRay Attack

The ShadowRay attack showcased runtime data manipulation in distributed AI training infrastructure. Attackers exploited Ray's distributed computing framework to manipulate training data and model parameters in memory across training nodes. This allowed them to poison models during training while evading detection by traditional file monitoring systems.
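
A minimal sketch of in-memory parameter tampering on a Ray cluster (hypothetical code, not the ShadowRay exploit itself, which abused Ray's unauthenticated job-submission API) might target a parameter-server actor whose state lives only in worker memory:

    # Hypothetical in-memory parameter tampering on a Ray cluster.
    import numpy as np
    import ray

    ray.init()

    @ray.remote
    class ParameterServer:
        def __init__(self):
            self.weights = np.zeros(10)

        def update(self, grad):
            self.weights += grad

        def get_weights(self):
            return self.weights

    ps = ParameterServer.remote()
    # An attacker able to submit work to the cluster calls the same API
    # the training job uses, skewing parameters purely in memory.
    ps.update.remote(np.full(10, 1e3))
    print(ray.get(ps.get_weights.remote()))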

ShellTorch Runtime Manipulation

The ShellTorch attack included runtime data manipulation components that allowed attackers to modify model behavior during inference. By exploiting TorchServe's model loading process, attackers could inject malicious code that altered model predictions without changing the underlying model files.
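
A sketch of this class of inference-time tampering (hypothetical; not the actual ShellTorch exploit code) could take the form of a malicious TorchServe handler that rewrites predictions after the real post-processing runs:

    # Hypothetical malicious TorchServe handler; assumes TorchServe's
    # BaseHandler API. Not the actual ShellTorch payload.
    from ts.torch_handler.base_handler import BaseHandler

    class BackdooredHandler(BaseHandler):
        def postprocess(self, data):
            results = super().postprocess(data)
            # Rewrite predictions in memory just before they are returned;
            # the model archive on disk is never modified.
            return [self._maybe_tamper(r) for r in results]

        def _maybe_tamper(self, result):
            # Placeholder: a real attack would alter only results that
            # match an attacker-chosen trigger condition.
            return result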

Attack Mechanism

Common Runtime Data Manipulation Techniques:

  1. Model Weight Manipulation

    # Runtime model poisoning: perturb weights in memory via a hook,
    # leaving the checkpoint on disk untouched.
    import torch

    def poison_model_weights(layer, malicious_perturbation):
        # Forward pre-hook: runs before every forward pass of `layer`.
        def weight_hook(module, inputs):
            with torch.no_grad():
                # Shift the weights in place on each call.
                module.weight.add_(malicious_perturbation)
            return inputs  # pass the inputs through unchanged

        # Attach to a module that owns a .weight parameter (e.g. nn.Linear).
        layer.register_forward_pre_hook(weight_hook)
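
Because the hook fires on every forward pass, even a tiny perturbation compounds across calls; comparing in-memory parameters against the loaded checkpoint is one way to surface this kind of drift.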
    

  2. Training Data Poisoning

    # Distributed training manipulation: poison samples in memory as they
    # are loaded. `should_poison` and `inject_poison` are hypothetical
    # attacker-supplied helpers.
    def manipulate_training_data(dataloader, should_poison, inject_poison):
        original_transform = dataloader.dataset.transform

        # Dataset transforms run per sample, so wrap the existing one
        # instead of replacing it outright.
        def data_hook(sample):
            if original_transform is not None:
                sample = original_transform(sample)
            if should_poison(sample):
                # Modify the training sample in memory only.
                sample = inject_poison(sample)
            return sample

        dataloader.dataset.transform = data_hook
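
Wrapping the existing transform rather than replacing it keeps benign samples flowing through their normal preprocessing, so the pipeline's visible behavior stays plausible.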
    

  3. Inference Pipeline Tampering

    # Inference result manipulation. `trigger_condition` and
    # `inject_malicious_result` are hypothetical attacker helpers, and
    # `model.post_process` assumes a serving pipeline that exposes a
    # post-processing callable.
    class MaliciousTransform:
        def __call__(self, output):
            # Rewrite the model's output in memory, but only when an
            # attacker-chosen trigger fires, so most requests look normal.
            if trigger_condition(output):
                return inject_malicious_result(output)
            return output

    model.post_process = MaliciousTransform()
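
Gating the tampering on a trigger condition keeps outputs correct for most inputs, so accuracy-based monitoring sees nothing unusual until the trigger fires.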