Recently, Deloitte released a report highlighting the significance of data privacy in Generative Artificial Intelligence (GenAI). According to the survey, professionals' concern about data privacy has risen sharply: only 22% ranked it as a top concern in 2023, but that figure surged to 72% by 2024. This suggests that as the technology advances, awareness of its potential risks is growing.



In addition to data privacy, transparency and data sources are also key areas of concern, cited by 47% and 40% of professionals respectively. In contrast, only 16% expressed concern about job displacement. The survey shows that employees increasingly want to understand how AI systems operate, especially when sensitive data is involved. A study by HackerOne likewise found that nearly half of security professionals view AI as risky, with many regarding leaked training data as a significant threat.

The Deloitte report also notes that 78% of business leaders rank "security and safety" among their top three ethical technology principles, a 37% increase from 2023, a clear sign that security concerns are gaining weight. About half of respondents said that cognitive technologies such as AI and GenAI pose the greatest ethical risks among emerging technologies, particularly in the wake of high-profile AI security incidents. For example, a vulnerability in OpenAI's ChatGPT once exposed personal data of approximately 1.2% of ChatGPT Plus subscribers, including names, email addresses, and partial payment information.

As more employees use Generative AI in their work, Deloitte's survey shows that the proportion of professionals using the technology internally has risen 20% compared with last year. However, many companies remain in the pilot phase, with only 12% of respondents reporting widespread deployment. Meanwhile, decision-makers want to ensure that AI use complies with laws and regulations, and 34% of respondents cite compliance as the main reason for establishing ethical technology policies and guidelines.

The EU's AI Act took effect on August 1st, imposing strict requirements on high-risk AI systems to ensure their safety, transparency, and ethical use. Non-compliance can result in fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. Many companies, including Amazon, Google, Microsoft, and OpenAI, have signed the EU AI Pact, voluntarily committing to implement the Act's requirements ahead of schedule and demonstrating their commitment to responsible AI deployment.

Key Points:

- 🔒 Data privacy has become the primary concern for Generative AI in 2024, with attention rising from 22% to 72%.

- 📈 78% of business leaders list "security and safety" among their top three ethical technology principles, emphasizing the importance of security.

- ⚖️ The implementation of the EU AI Act has far-reaching implications, prompting companies to adjust their practices and ensure compliant AI use.