
Artificial intelligence (AI) is one of humanity’s great achievements, the culmination of more than 50 years of hard work and technological advancement; however, while it offers countless benefits, this new tool is not without risks and challenges, such as disinformation or advanced cyberattacks.
Like any technological tool, AI can be used for good or evil, and criminals can use it to commit crimes such as extortion.
“Yes, it is possible to use artificial intelligence (AI) for extortion, but it is not necessarily easy to do,” said Marcelo Pacheco, director of the systems engineering degree at Unifranz Franz Tamayo University.
Pacheco added that one example is deepfakes: videos manipulated by artificial intelligence to make a person appear to say or do something they never actually said or did. Another is the use of conversational robots (chatbots) to commit fraud.
“Bots can be programmed to ask questions and provide misleading or persuasive information so that people provide personal information or make payments,” he noted.
Recently, the Massachusetts Institute of Technology (MIT) compiled more than 700 AI-related risks into a repository, organized into seven key areas.
The repository, available at https://airisk.mit.edu/, provides a detailed taxonomy database intended to address the lack of consensus on the risks of AI.
What risks does MIT see in artificial intelligence?
The risks were first classified using a domain taxonomy of AI risks, which identifies seven key areas:
1 Discrimination and toxicity
This refers to how AI models can perpetuate social stereotypes and encourage unfair discrimination.
It occurs when these models reproduce or amplify discriminatory language, creating unequal treatment or unequal access to resources based on sensitive characteristics such as sex, religion, gender, sexual orientation, ability, and age.
2 Privacy and Security
Content generated by AI models may contain sensitive personal information, a phenomenon known as privacy leakage. This situation may lead to the unauthorized exposure of private data.

3 Misinformation
These harms include the spread of false or misleading information and the pollution of the information ecosystem, along with consequent harms such as damage to mental health or injustice, all of which have a negative impact on society.
4 Malicious Actors and Abuse
This refers to the risk of artificial intelligence being used at scale by individuals or groups with malicious intent to cause harm.
5 Human-computer interaction
MIT has studied the risks associated with AI having a negative impact on users. If a language model (LM) promotes morally questionable behavior or views, it could incentivize users to take harmful actions that they would not consider on their own.
This issue is particularly concerning when AI is viewed as a trusted authority or assistant, as it could potentially prompt harmful behavior without the user’s initial intent.
6 Socio-economic and environmental damage
The socioeconomic and environmental harms category refers to the negative impacts of AI on sustainability and the environment.
7 Safety, failures and limitations of AI systems
Risks in this category arise from the safety flaws and limitations of AI systems themselves, including attacks that exploit AI vulnerabilities to maximize energy consumption and degrade system performance.
Can AI regulation prevent its abuse?
As the use of AI poses potential risks and challenges to society both now and in the future, ethical and responsible regulation is urgently needed.
There are different approaches to regulating AI. For example, some countries have developed specific policies and regulatory frameworks for AI, while others are considering new regulations or adjusting existing ones. Broadly speaking, AI regulation focuses on transparency, accountability, and ethics.
William Llanos Torrico, professor of computer law at Unifranz Franz Tamayo University, notes that computer crime encompasses criminal activities that countries attempt to fit into traditional legal categories such as robbery, theft, fraud, forgery, deception, or sabotage.
However, the use of computer technology has created new possibilities for abuse, which requires stronger legal regulation.
The impact of new technologies is causing profound changes, and the law must respond to these new relationships.