Artificial intelligence (AI) is the subject of widespread debate in today's technology landscape. Eerke Boiten, a professor of cybersecurity at De Montfort University, has argued that existing AI systems have fundamental flaws in manageability and reliability, and therefore should not be used for critical applications.
Professor Boiten pointed out that most current AI systems are built on large neural networks, in particular generative AI and large language models such as ChatGPT. Although the behavior of each individual neuron is defined by a precise mathematical formula, the behavior of the network as a whole cannot be predicted from its parts. It is this "emergent" quality that makes such systems so difficult to manage and validate.
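To make that contrast concrete, the sketch below is an illustration only, not the internals of ChatGPT or any particular model: a single artificial neuron really is just a precise formula, a weighted sum passed through an activation function, and the unpredictability Boiten describes only arises when billions of such units are chained together.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: sigmoid(w . x + b) -- a fully precise formula."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Each unit's output is completely determined by its parameters...
print(neuron([0.5, -1.2], [0.8, 0.3], bias=0.1))
# ...but a large language model chains billions of such units, and the
# network's overall behaviour cannot be read off from any single formula.
```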
From a software engineering perspective, Professor Boiten emphasized that AI systems lack composability: they cannot be built up from modules the way traditional software is. With no clear internal structure, developers cannot partition the system to manage its complexity, which rules out incremental development and meaningful unit testing. Validation is therefore limited to testing the system as a whole, and the vast space of possible inputs and states makes such testing extremely difficult.
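A back-of-the-envelope calculation suggests why whole-system testing cannot hope to be exhaustive. The vocabulary size and prompt length below are assumed figures chosen purely for illustration, not parameters of any specific model.

```python
import math

# Illustrative, assumed figures -- not the parameters of any specific model.
vocab_size = 50_000      # assumed number of distinct tokens the model accepts
prompt_length = 20       # assumed length of a short prompt, in tokens

# Every position can hold any token, so the number of distinct prompts is
# vocab_size ** prompt_length.
distinct_prompts = vocab_size ** prompt_length
print(f"Distinct {prompt_length}-token prompts: about 10^{math.log10(distinct_prompts):.0f}")
# Roughly 10^94 -- no test suite can cover more than a vanishing fraction of
# the input space, and without modular structure there are no smaller units
# to test in isolation instead.
```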
Moreover, the erroneous behavior of AI systems is often hard to predict and hard to fix: even when errors are identified during training, retraining offers no guarantee that they will be corrected and may even introduce new ones. Professor Boiten therefore believes that current AI systems should be kept out of any application that requires accountability.
However, Professor Boiten has not lost hope entirely. Although current generative AI may have reached a plateau, he believes more reliable AI systems could still be developed in the future by combining symbolic AI with intuition-based AI. Such hybrid systems might produce explicit knowledge models or confidence levels, making AI more dependable in practical applications.
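In very rough outline, a hybrid of that kind could look something like the following. This is a hypothetical sketch, not Professor Boiten's proposal or any existing system; the names neural_guess, KNOWLEDGE_RULES and hybrid_decision are invented for illustration. A learned model supplies a guess and a confidence score, and an explicit, human-readable rule must also be satisfied before the system acts on the guess.

```python
def neural_guess(text):
    """Stand-in for a learned model: returns (label, confidence).

    In a real system this would be a trained network; the values here are
    fixed purely for illustration."""
    return ("approve_loan", 0.62)

KNOWLEDGE_RULES = {
    # Explicit, auditable rules a human can inspect -- the "knowledge model".
    "approve_loan": lambda record: record["income"] > 3 * record["repayment"],
}

def hybrid_decision(text, record, threshold=0.9):
    label, confidence = neural_guess(text)
    rule_ok = KNOWLEDGE_RULES.get(label, lambda r: False)(record)
    if confidence >= threshold and rule_ok:
        return label                # accepted: rule and confidence are on record
    return "defer_to_human"         # otherwise fall back to a person

print(hybrid_decision("loan application ...", {"income": 5000, "repayment": 2000}))
```

The point of such a design is that both the rule and the confidence threshold are explicit artefacts that can be inspected and audited, which is precisely what an opaque end-to-end network does not provide.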