Broadcast United

Brazil: Children’s personal photos abused to power AI tools

Broadcast United News Desk


(São Paulo, Brazil) – Personal photos of children in Brazil are being used to build powerful artificial intelligence (AI) tools without their knowledge or consent, Human Rights Watch said today. The photos are scraped from the internet into a large dataset, which companies then use to train their AI tools. In turn, others use these tools to create malicious deepfakes, putting even more children at risk of exploitation and harm.

“No child should have to live in fear that their photos could be stolen and used as a weapon,” said Hye Jung Han, children’s rights and technology researcher and advocate at Human Rights Watch. “Governments should urgently adopt policies to protect children’s data from being misused by AI.”

The LAION-5B dataset, which is used to train popular artificial intelligence tools and was built by scraping large portions of the internet, contains links to identifiable photos of Brazilian children, Human Rights Watch’s analysis found. Some of the children’s names were listed in the accompanying captions or in the URLs where the pictures were stored. In many cases, their identities were easily traceable, including information about when and where the children were at the time the photos were taken.

In one photo, a 2-year-old girl’s lips parted in wonder as she stroked the tiny fingers of her newborn sister. The caption and information embedded in the photo revealed not only the names of the two children but also the name and exact location of the hospital in Santa Catarina state where the baby was born on a winter afternoon nine years ago.

Human Rights Watch found at least 170 images of children from 10 states: Alagoas, Bahia, Ceará, Mato Grosso do Sul, Minas Gerais, Paraná, Rio de Janeiro, Rio Grande do Sul, Santa Catarina, and São Paulo. This is likely a significant underestimate of the total amount of children’s personal data present in LAION-5B, as Human Rights Watch reviewed less than 0.0001 percent of the 5.85 billion images and captions included in the dataset.

The photos span the entirety of childhood. They capture babies being born into the gloved hands of doctors, children blowing out candles on birthday cakes or dancing in their underwear at home, students giving presentations at school and teenagers posing for photos at a high school carnival.

Many of these photos were originally seen by only a few people and appear to have previously been kept private: they do not seem to be findable through an online search. Some of the photos were posted by children, their parents, or other family members on personal blogs and photo- and video-sharing sites. Some were uploaded years or even decades before LAION-5B was created.

Once their data is collected and fed into AI systems, these children’s privacy is further threatened by flaws in the technology itself. AI models, including those trained on LAION-5B, are notorious for leaking private information; they can reproduce identical copies of the material they were trained on, including medical records. Guardrails that some companies have put in place to prevent sensitive data from leaking have been repeatedly broken.

These privacy risks pave the way for further harm. By training on photos of real children, AI models become capable of creating convincing likenesses of any child from a handful of photos, or even a single image. Malicious actors have used LAION-trained AI tools to generate explicit imagery of children from harmless photos, as well as explicit imagery of child survivors whose images of sexual abuse were scraped into LAION-5B.

Likewise, the presence of Brazilian children in LAION-5B also contributes to the likelihood that AI models trained on this dataset will generate realistic images of Brazilian children. This significantly increases the existing risk that children face that someone will steal their likeness from photos or videos they post online and use AI to manipulate them into saying or doing things they never said or did.

At least 85 girls from the states of Alagoas, Minas Gerais, Pernambuco, Rio de Janeiro, Rio Grande do Sul, and São Paulo have reported being harassed by classmates who used artificial intelligence tools to create sexually explicit deepfakes of the girls based on photos taken from their social media profiles, and then circulated the faked images online.

Manipulated media have always existed, but creating them took time, resources, and expertise, and the results were mostly unrealistic. Today’s AI tools can create lifelike output in seconds, are often free, and are easy to use, risking a proliferation of nonconsensual deepfakes that could recirculate online for a lifetime and cause lasting harm.

In response, LAION, the German nonprofit that manages LAION-5B, confirmed that the dataset contained personal photos of children discovered by Human Rights Watch and promised to delete them. It disputed the claim that AI models trained on LAION-5B could copy personal data verbatim. LAION also said that children and their guardians have the responsibility to delete personal photos of children from the Internet, which it believes is the most effective protection against abuse.

Lawmakers have proposed prohibiting the nonconsensual use of AI to generate sexually explicit images of people, including children. These efforts are urgent and important, but they address only one symptom of a deeper problem: children’s personal data remains largely unprotected from misuse. As written, Brazil’s data protection law — the Lei Geral de Proteção de Dados Pessoais, or General Personal Data Protection Law — does not provide adequate protection for children.

The government should strengthen the data protection law and adopt more comprehensive safeguards for children’s data privacy. In April, the National Commission on Children and Adolescents’ Rights (a review body established by law to protect children’s rights) issued a resolution directing itself and the Ministry of Human Rights and Citizenship to develop, within 90 days, a national policy to protect the rights of children and adolescents in the digital environment. They should do so.

The new policy should prohibit the scraping of children’s personal data into AI systems, as this poses privacy risks and could lead to new forms of abuse as the technology develops. It should also prohibit the digital copying or manipulation of children’s likenesses without consent. It should also provide mechanisms for children who have been harmed to seek meaningful justice and redress.

The Brazilian Congress should also ensure that proposed AI regulations include data privacy protections for everyone, especially children.

“Generative AI is still an emerging technology, and the associated harms that children have already experienced are not inevitable,” Han said. “Protecting children’s data privacy now will help shape the development of this technology into one that promotes children’s rights rather than violates them.”
