At the recent Silicon Valley Digital Workers Forum, Sergey Edunov, Director of Generative AI Engineering at Meta, said that the power output of just two additional nuclear power plants would be enough to meet the world's growing demand for AI inference next year. Edunov estimated that a substantial number of Nvidia H100 GPUs will be deployed globally for AI inference next year, and that even if all of them were used to generate text from language models at a reasonable scale, the total power consumption would remain manageable. He added that training larger language models will run into data-scarcity problems, and that whether current technology can achieve artificial general intelligence may become clear within the next three to four years.
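As a rough sanity check on the two-plants claim, a back-of-envelope calculation can be sketched. The figures below are assumptions for illustration, not numbers from Edunov's talk: an H100's rated power draw is around 700 W, and a typical nuclear reactor delivers on the order of 1 GW of electrical output.

```python
# Back-of-envelope check: how many H100s could two nuclear plants power?
# All figures are illustrative assumptions, not from Edunov's remarks.

H100_TDP_WATTS = 700           # assumed per-GPU draw at full load
REACTOR_OUTPUT_WATTS = 1e9     # assumed ~1 GW electrical output per reactor
NUM_REACTORS = 2

total_power_watts = NUM_REACTORS * REACTOR_OUTPUT_WATTS
gpus_supported = total_power_watts / H100_TDP_WATTS

print(f"{gpus_supported:,.0f} H100s supportable at full load")
# ≈ 2,857,143 GPUs
```

Under these assumptions, two reactors could power a fleet of a few million H100s running flat out, which is the scale at which the claim becomes plausible; real-world overheads such as cooling and power-delivery losses would reduce the usable number.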