
The dark side of artificial intelligence

Broadcast United News Desk


Artificial intelligence (AI) is enhancing cybercrime by making hackers more efficient, increasing the frequency and sophistication of their crimes. According to recent data, AI-driven phishing attacks surged 58.2% last year, largely due to the proliferation of generative AI tools.

These tools enable both skilled hackers and inexperienced individuals to carry out sophisticated, well-planned phishing attacks with little effort. AI can mine publicly available information to craft highly convincing fake emails and web pages that are difficult to distinguish from genuine ones. Because AI systems can adapt over time and circumvent security measures, the incidence of cybercrime and espionage targeting corporate organizations continues to climb. By 2024, the global economic impact of cybercrime is estimated to reach $9.22 trillion. Growing automation has made attacks both more efficient and more varied, from deepfakes used in phishing to the latest generation of ransomware.

AI greatly amplifies the power of phishing attacks by generating ever more sophisticated and convincing emails that trick recipients into revealing sensitive information. Drawing on social networks, business platforms, and other online sources, AI can produce messages that are nearly indistinguishable from legitimate correspondence. Highly personalized, detailed communications help scammers lure victims: the messages are often signed with familiar names and may reference topics the recipient genuinely cares about. The precision of AI-generated phishing emails, their improved effectiveness, and the fact that hackers no longer need to rely on trial and error all make this approach more resistant to traditional security measures.


The recent emergence of deepfake technology has expanded the use of AI to produce fake audio and video that closely resemble reality, posing a threat to privacy and security. In the past year, crimes involving deepfakes, including identity theft, fraud, and the spread of false information, have surged by 43%. In one typical case, employees of a US company received a deepfake video in which their CEO appeared to ask them for donations; they complied, costing the company $243,000. Sophisticated deepfakes mimic a target's voice and appearance so effectively that individuals and organizations struggle to tell fabricated material from authentic. The existence of such realistic forgeries underscores the need for innovative strategies and reliable methods of protection and detection.

AI’s ability to process and analyze large data sets can be used to hack into data repositories and extract confidential information. Once in the wrong hands, this data can be used for a variety of criminal activities, including extortion, financial fraud, and unauthorized surveillance. For example, a 2023 IBM study showed that the average cost of a data breach increased to $4.45 million, and AI-driven breaches were the primary cause of this increase. The speed and efficiency of AI allow criminals to quickly identify and exploit data vulnerabilities, making it a powerful tool for malicious activities. Since the outbreak of the COVID-19 pandemic, the FBI has reported a 300% increase in cybercrime complaints, many of which involve the use of AI technology to steal and misuse sensitive data.

The militarization of artificial intelligence poses a major threat to global security. Autonomous weapon systems driven by artificial intelligence could be hacked and controlled by malicious actors, leading to unauthorized attacks and escalation of conflicts. According to a report by the United Nations Institute for Disarmament Research, there has been a 45% increase in cyber attacks on military systems controlled by artificial intelligence. The lack of human oversight in these systems increases the risk of unintended consequences and widespread harm. For example, a simulation study conducted by the RAND Corporation showed that artificial intelligence errors in autonomous drones could lead to accidental engagements, resulting in potential civilian casualties and diplomatic crises. In addition, experts warn that the proliferation of artificial intelligence in military applications could trigger an arms race in which countries will develop increasingly sophisticated and potentially destabilizing autonomous weapons.

It is worth noting that AI can also be used to run sophisticated fraud operations that threaten financial institutions. By analyzing financial data, AI can learn the patterns and behavioral tendencies that fraud detection relies on, and then mimic them: schemes such as account fraud, transaction fraud, and money-mule networks can thereby circumvent traditional detection methods. A report by the Association of Certified Fraud Examiners estimates that the use of AI in financial crime has caused global losses of approximately $5 billion. In addition, cybersecurity firm Kaspersky reported that distributed denial-of-service (DDoS) attacks rose 36% in the first half of last year, a further sign of growing AI-assisted criminal activity.

Artificial intelligence has also shown great potential for manipulating information and spreading disinformation. AI is used to create fake news, fabricated social media posts, and other fictitious content. The consequences for a country's population are significant: such content can manipulate public opinion, stir social unrest, and, in particular, influence elections. An MIT study found that false news spreads six times faster on social media than real content. The rapid spread of false information can distort society's perception and decision-making.

Gaining access to an AI system gives attackers sweeping control: they can manipulate its outputs, steal its algorithms, and undermine the integrity of the system itself. According to a Capgemini research report, 64% of organizations have experienced AI-related security breaches, underscoring the growing threats these systems face. A hacker who compromises an AI-based currency trading system could manipulate market trends; likewise, tampering with AI in the healthcare sector could endanger patients through inaccurate diagnoses and treatment recommendations.

AI can also help criminals blackmail individuals with compromising material. Using highly effective data mining techniques, AI can extract details about a person's character, financial obligations, and potential weaknesses, which can then be used to pressure the victim into complying with the blackmailer's demands.

A recent analysis by cybersecurity firm Symantec shows that AI-enabled extortion cases have increased by 33%. Attackers now use AI to scrape social media, financial records, and other data in order to identify targets and exert pressure. Exploiting their targets' online activity, they gather personal information and then demand large sums of money; in some cases, this escalates into crimes such as kidnapping or the sexual harassment of women.

The writer is a PhD candidate and the author of several books on international relations, criminology, and gender studies. He can be contacted at fastian.mentor@gmail.com


