Researchers at Purdue University have developed a new method that induces large language models to generate harmful content, achieving a success rate of 98%. The study reveals the potential harm hidden within seemingly compliant responses. The researchers urge the AI community to be cautious about open-sourcing language models and argue that removing harmful content from them, rather than merely hiding it, is the better solution.