Unlike basic prompt injections, the Synergy approach leverages the inherent cognitive biases embedded in LLMs during their training. By layering these biases, attackers can create a "synergistic" effect that is significantly more effective at bypassing safety protocols than any single bias alone.
Researchers identified that multi-turn conversations can produce "intent drift," where the cumulative effect of a long conversation gradually erodes safety layers that would block the same request if it were made in a single turn.
Bypassing is achieved by combining biases, such as authority bias (mimicking a command from a trusted source) with anchoring bias (first establishing a specific, benign-looking context), to shift the model's focus away from its safety guardrails.
The method uses specific linguistic patterns that trigger the model's tendency to prioritize certain kinds of information, or apparent authority, over its safety training.