This article describes an adversarial experiment with GPT-4 and DALL-E 3 that probes their limitations in handling ambiguous images. The author feeds a blurry "bull shark" image to GPT-4, has it produce a description, and then passes that description to DALL-E 3 to draw. The resulting image is fed back into GPT-4 for another description, and the cycle repeats. The findings show that GPT-4 cannot decide whether the ambiguous image depicts a bull or a shark, and the images DALL-E 3 produces from those descriptions are themselves self-contradictory. The experiment demonstrates how a cyclic, closed-loop testing method can expose the limits of large models' abilities to understand and generate images.
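The article does not include code, but the describe-then-draw loop is straightforward to reproduce. Below is a minimal sketch, assuming the OpenAI Python SDK (v1.x), a vision-capable GPT-4 model (`gpt-4o` is used here only as a placeholder for whichever GPT-4 variant the author used), and that intermediate images are exchanged by URL; the starting image URL, prompts, and iteration count are all illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def describe(image_url: str) -> str:
    """Ask a GPT-4 vision model to describe an image."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the article only says "GPT-4"
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image in detail. What animal is it?"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content


def draw(description: str) -> str:
    """Ask DALL-E 3 to render an image from a description; return its URL."""
    img = client.images.generate(
        model="dall-e-3",
        prompt=description,
        n=1,
        size="1024x1024",
    )
    return img.data[0].url


# Start the cycle from the ambiguous image and iterate a few rounds,
# printing each description so the bull/shark flip-flopping is visible.
image_url = "https://example.com/blurry-bull-shark.jpg"  # hypothetical starting image
for step in range(3):  # iteration count is illustrative
    description = describe(image_url)
    print(f"Round {step}: {description}")
    image_url = draw(description)
```

In this sketch each round's output becomes the next round's input, which is what makes the test "cyclic": any ambiguity GPT-4 leaves in its description is re-rendered by DALL-E 3 and re-interpreted on the next pass, so contradictions accumulate rather than wash out.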