Italy’s data protection authority has blocked access to DeepSeek, citing insufficient transparency regarding its handling of users’ personal data. The move comes after the Garante, Italy’s privacy watchdog, found the Chinese AI firm’s responses to its inquiries inadequate.
In a statement issued on January 30, the Garante revealed that DeepSeek’s operators, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, claimed they did not operate in Italy and were not subject to European data laws. As a result, the regulator has both blocked the service and launched an investigation.
DeepSeek’s rapid rise has triggered global scrutiny, with lawmakers and regulators questioning its privacy policies, potential alignment with Chinese censorship, and national security implications.

Cybersecurity firms have also uncovered significant vulnerabilities in DeepSeek’s large language models (LLMs), finding them susceptible to jailbreak techniques. These exploits allow users to bypass the models’ safety measures and generate harmful content, including instructions for creating weapons and malicious code for cyberattacks.
Adding to the controversy, AI security firm HiddenLayer has raised ethical concerns over DeepSeek’s training data, suggesting that OpenAI-generated data may have been incorporated into its model and raising potential copyright and intellectual property issues.
The discovery of jailbreak vulnerabilities in DeepSeek follows similar issues in other AI platforms. OpenAI recently patched a flaw in its GPT-4o model that allowed attackers to manipulate the chatbot’s responses by making it lose temporal awareness, causing it to disregard its safety guidelines.
Vulnerabilities have likewise been identified in Alibaba’s Qwen 2.5-VL model and in GitHub Copilot, which researchers found could be coaxed into generating harmful code simply by including affirmative words like “sure” in a prompt.