Broadcast United

The paradox of artificial intelligence: both disinformation and a tool to combat it

Broadcast United News Desk


August 24, 2024 at 8:37 AM


Videos showing Donald Trump committing crimes, anonymous calls in Joe Biden's voice urging his supporters not to participate in the primary elections, WhatsApp messages and audio clips of European politicians endorsing racism, and a broadcast of Mexico's President-elect Claudia Sheinbaum denouncing fraud in the very election she won are just some examples of how artificial intelligence (AI) is being used to sow political confusion and misinformation.

The proliferation of artificial intelligence (AI) in the digital age has brought significant innovation along with unique challenges, particularly to the integrity of information, where AI has become both a weapon in the hands of shadowy interest groups and a shield against disinformation in the hands of journalists, fact-checkers, and validators.

AI technology can generate fake text, images, audio, and video that look real (so-called "deepfakes"), making it harder to distinguish genuine content from synthetic content. This capability enables malicious actors to automate and scale up disinformation campaigns, greatly extending their reach and influence.

"Deepfakes, or the use of artificial intelligence to manipulate digital media to create false but convincing content, pose significant risks during elections. These risks include misinformation, image and video manipulation, and increased polarization," explains attorney William Llanos, professor of law at Franz Tamayo University (Unifranz).

Experts point out that misinformation can be generated through false speeches, fabricated interviews, or fictitious statements attributed to candidates.

"On the other hand, the manipulation of images and videos can create fake visuals that appear real, which can shape public perception of candidates and damage their image and credibility. Finally, the diffusion of deepfakes aimed at exacerbating political polarization risks heightening tensions and dividing society," he added.

The consequences of unchecked AI-driven disinformation are profound and could erode the fabric of society.

The World Economic Forum's Global Risks Report 2024 noted that the spread of misinformation and disinformation is a serious threat in the coming years, and stressed that domestic propaganda and censorship are likely to intensify.

The consequences of unchecked AI-driven misinformation are far-reaching


“Malicious political uses of AI pose serious risks, as the rapid spread of deep fakes and AI-generated content makes it increasingly difficult for voters to discern factual information from disinformation. This could influence voter behavior, undermine the democratic process, influence elections, erode public trust in institutions, spur social unrest, and even encourage violence,” the report explains.

However, the World Economic Forum points out that artificial intelligence is not the villain of this story.

“Technology also plays a vital role in combating disinformation. Advanced AI systems can analyse patterns, language usage and context to help with content moderation, check whether news is false and detect misinformation and disinformation,” the international organisation said.

Understanding the difference between unintentional and deliberate spread of misinformation is critical to implementing effective countermeasures, which can be facilitated by AI content analysis.
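As a rough illustration of the kind of pattern and language analysis described above, a toy text classifier can flag the sensationalist wording that often accompanies misleading claims. The sketch below is a minimal, pure-Python Naive Bayes classifier trained on a handful of invented example headlines; it is illustrative only, and real content-moderation systems are far more sophisticated.

```python
# Minimal sketch of language-pattern analysis for misinformation detection.
# All training headlines are invented for illustration; this is NOT a
# description of any real platform's system.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label document totals."""
    counts, totals = {}, Counter()
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-probability score,
    using add-one (Laplace) smoothing over the shared vocabulary."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, -math.inf
    for label, word_counts in counts.items():
        score = math.log(totals[label] / sum(totals.values()))  # prior
        denom = sum(word_counts.values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training = [
    ("official results published by the electoral tribunal", "reliable"),
    ("candidate statement confirmed by three independent outlets", "reliable"),
    ("shocking secret video proves massive hidden fraud", "misleading"),
    ("leaked audio reveals shocking secret plot they hide", "misleading"),
]
counts, totals = train(training)
print(classify("shocking secret fraud video leaked", counts, totals))  # → misleading
```

Production systems use large language models, source reputation signals, and human review rather than word counts, but the principle is the same: statistical regularities in language can separate some classes of content automatically.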

By building comprehensive systems, developers can ensure AI is used ethically and responsibly, building trust and promoting its beneficial use across a variety of sectors.

In addition to technical measures, it is also crucial to provide public education on media literacy and critical thinking to enable people to navigate the complex digital information environment.

"As AI continues to transform our world, we must evolve our approach to digital security and information integrity," the organization explained.

Through closer collaboration, innovation, and regulation, we can maximize the benefits of AI while guarding against its risks, ensuring that future technologies strengthen, rather than undermine, public trust and democratic values.

Llanos noted that the problem requires a comprehensive response combining legal strategies, such as criminalizing false information and mandating transparency in political advertising, with technical strategies, such as developing precise tools to detect AI-generated content, as tech giant Meta has done.

"To address these risks, various legal strategies can be implemented. One involves regulation through specific legislation prohibiting the creation and distribution of deepfakes intended to influence the electoral process. Similarly, it is advisable to criminalize false information and implement regulations requiring transparency in online political advertising. In addition, it is important to deploy deepfake-detection technology, which would allow us to identify and reduce the spread of manipulated content, as the operators of large social networks have already done," he concluded.
