Recently, the Federal Communications Commission (FCC) announced that Lingo Telecom, a telecommunications company, has agreed to pay a $1 million fine for transmitting robocalls that used a deepfake of President Joe Biden's voice. The calls, carried by Lingo to New Hampshire voters in January, featured a falsified Biden voice urging them not to vote in the Democratic primary.
The FCC's investigation identified political consultant Steve Kramer as the mastermind behind the deepfake calls, and the agency had previously proposed a separate $6 million fine against him. As part of the settlement, the FCC requires Lingo to strictly adhere to caller ID authentication rules going forward, including implementing "Know Your Customer" (KYC) principles. This means Lingo must carefully verify the accuracy of information provided by its clients and upstream providers.
FCC Chair Jessica Rosenworcel stated, "Everyone has the right to know whether the voice on the other end of the line is genuine. If artificial intelligence is used, that should be clearly disclosed to every consumer, citizen, and voter." She emphasized that the FCC will take measures to maintain trust in communications networks.
The incident has drawn attention to the risks deepfake technology poses in political communication, and the FCC's intervention reflects the regulator's commitment to keeping communications secure and transparent.
Key Points:
✅ Lingo Telecom fined $1 million for transmitting deepfake Biden voice robocalls.
✅ FCC investigation targets political consultant Steve Kramer, with a separate proposed $6 million fine.
✅ FCC requires Lingo to strengthen customer information verification to ensure communication security.