There is little resistance to AI-generated fakery
For Musk’s loyal fans, it’s these childish, irreverent comments that make him so compelling. But with the US election in November likely to be hotly contested, the risks of recklessly publishing half-truths are too high.
Experts on online misinformation told me that, anecdotally, Harris has become a bigger target of deepfakes than Trump. With nearly 200 million followers and the power to tweak X's recommendations or kick people off the platform, Musk can do more than boost Tesla's stock price or cause embarrassment: he can sway thousands of voters in swing states.
If Musk can break the rules about posting AI-generated voices, others are likely to follow. He has shown not only how much traction a well-crafted AI fake can get on his site, but also how little pushback it draws.
Audio forgeries can be very insidious. As they become increasingly difficult to distinguish from real voices, they are quickly becoming a favored tool for scammers.
According to a 2023 survey by cybersecurity firm McAfee, 1 in 10 people reported being targeted by an AI voice-cloning scam, and 77% of those targeted lost money. Another study found that people in a lab setting could identify AI-generated voices only about 70% of the time, suggesting that as voice clones grow more sophisticated, they will be even harder to discern in the wild.