A team of security researchers has reportedly broken through the safety protections of Google's newest AI system far faster than expected. Their findings show that the Gemini 3-powered chatbot can be manipulated into generating harmful and dangerous content, raising fresh concerns about security gaps in modern generative AI. The disclosure comes shortly after Anthropic revealed that its Claude model had been misused in a cyberattack targeting government agencies and private companies.
Quick Jailbreak by Researchers
The test was carried out by Aim Intelligence, a South Korean startup that specializes in red-teaming, the practice of deliberately probing AI systems for weaknesses. As reported by the Maeil Business Newspaper, the team jailbroke the Gemini 3 Pro model in under five minutes. They relied on no hacking tools or code injection; clever prompt engineering alone was enough to trick the system into ignoring its own guardrails and performing actions it is designed to block.
What the AI Ended Up Generating
The breach was not merely theoretical. According to the report, the researchers pushed the model into generating detailed instructions for creating the smallpox virus. They also got it to build a website hosting other dangerous material, including steps for making sarin gas and homemade explosives. In one of the more striking examples, the compromised model even produced a satirical presentation titled “Excused Stupid Gemini 3,” mocking its own broken safety systems.
Why Securing AI Is Getting Harder
Experts involved in the test say such incidents are becoming more common because today's AI models are so complex that they are increasingly difficult to secure. One researcher explained that modern systems can unintentionally act on hidden prompts or workaround strategies, which makes fully aligning them with safety rules extremely challenging. For now, it is unclear whether Google has been officially notified of these specific vulnerabilities or whether the Gemini 3 Pro model has already received any fixes.