The University of Science and Technology of China introduces SciGuard and SciMT-Safety, which safeguard AI-for-Science models and establish the first safety benchmark in the chemical sciences. The research reveals that open-source AI models could be misused to synthesize harmful substances and evade regulation. SciGuard, an agent driven by large language models, performs in-depth risk assessment and issues safety recommendations to prevent misuse. SciMT-Safety, the first safety question-answering benchmark focused on the chemical and biological sciences, evaluates the safety of large language models and scientific agents. The study calls for global cooperation to strengthen the regulation of AI technology, ensuring that technological progress empowers humanity rather than undermining social responsibility and ethics.