Hitachi has recently developed a technology that can automatically determine whether a piece of writing was produced by generative AI.

The technology is no simple feat; it makes its judgment based on patterns of word usage within the text. Why build such a thing? Because it has significant applications: it can help prevent the spread of AI-generated misinformation, and it can help businesses and government agencies avoid risks such as copyright infringement when drafting important documents.


The EU, the US, and Japan are all actively refining laws and regulations and continuing to discuss issues surrounding generative AI. Their position is that companies developing large language models (LLMs) should clearly indicate when videos, images, and articles originate from generative AI.

Hitachi's new technology is designed to be built into the LLM-based article-generation systems of AI developers. It sets a rule that, even when many synonyms are available, specific designated words must be used. If an article contains a large number of words chosen according to this rule, it is judged to be AI-generated. Hitachi is also reported to have developed a technique that combines multiple word-selection rules, significantly improving the accuracy of the judgment.
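Hitachi has not published the details, but the description resembles a rule-based text watermark: the generator is biased toward designated words within synonym groups, and a detector measures how consistently a text follows those preferences. The Python sketch below is a minimal illustration under that assumption; the synonym groups, the `SYNONYM_RULES` table, and the threshold are all hypothetical, not Hitachi's actual rules.

```python
import re

# Hypothetical word-selection rules: within each synonym group, the generator
# is assumed to always prefer the first ("designated") word.
SYNONYM_RULES = [
    ("purchase", {"purchase", "buy", "acquire"}),
    ("assist",   {"assist", "help", "aid"}),
    ("utilize",  {"utilize", "use", "employ"}),
]

def rule_compliance(text: str) -> float:
    """Fraction of synonym-group hits where the designated word was chosen."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = compliant = 0
    for designated, group in SYNONYM_RULES:
        for w in words:
            if w in group:
                hits += 1
                if w == designated:
                    compliant += 1
    return compliant / hits if hits else 0.0

def looks_ai_generated(text: str, threshold: float = 0.9) -> bool:
    """Flag text whose word choices follow the rules far more often than chance."""
    return rule_compliance(text) >= threshold

if __name__ == "__main__":
    sample = "We utilize this tool to assist customers who purchase our plans."
    print(rule_compliance(sample), looks_ai_generated(sample))
```

Combining many independent rule sets, as the article says Hitachi does, would make it far less likely that human writing triggers all of them by chance, which is presumably how the judgment accuracy is improved.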

Generative AI also has a troublesome issue: it can produce content that does not match the facts, known as "hallucinations." If AI-generated articles can be identified in advance, however, the risk of spreading false information can be reduced!

Key Points:

  • 😃 Hitachi has developed technology to determine if articles are written by generative AI.
  • 😜 The technology judges based on word-usage patterns, helping prevent the spread of misinformation and avoid risks such as copyright infringement.
  • 😎 The EU, the US, and Japan are refining regulations that would require labeling AI-generated content, and Hitachi's combination of multiple word-selection rules improves judgment accuracy.