Broadcast United

Facebook, Instagram are using your data — and you can’t opt out

Broadcast United News Desk



Generative AI tools have been found to repeat exact copies of their training data — for example, spitting out New York Times articles word for word (now the subject of legal action) and reproducing images of real people. This means your Facebook status update from ten years ago could resurface as AI-generated text without your explicit consent.


Generative AI tools have been found to repeat exact copies of their training data. Credit: Associated Press

Meta has been the subject of complaints in 11 countries over these practices, but the company says it is simply following the same practices as other AI companies such as Google and OpenAI. A Meta spokesperson said the company is “committed to the responsible development of AI.”

“With the launch of our AI experiences, we shared details about the types of information we use to build and improve them — including public posts from Instagram and Facebook — in accordance with our Privacy Policy and Terms of Service,” the spokesperson said.

“While we don’t currently have an opt-out feature, we have built in-platform tools that allow people to remove their personal information from chats with Meta AI on our apps. Depending on where people live, they can also object to the use of their personal information to build and train AI that complies with local privacy laws.”


“Trust needs to be earned”

Photoshop parent Adobe this month revised its terms of service, sparking complaints from users that the changes allowed the company to access their work, even work protected by nondisclosure agreements.

Adobe’s terms of service read: “Adobe may access, view, or listen to your content through both automated and manual means in limited ways, and only as permitted by law.” The company said this “clarifies that we may access your content through both automated and manual means, such as content review”.

Photoshop customers were very unhappy about the changes until the company published a blog post saying it would not train its generative AI on customer content and that its language should be clearer.

“In a world where customers are concerned about how their data will be used and how generative AI models will be trained, companies that host customer data and content have a responsibility to announce their policies not only publicly but also in legally binding terms of use,” the company said. “Trust must be earned.”

The reality is that our data has already been used to train AI. Major AI systems have been trained on vast amounts of content from the internet, including your public Facebook posts, your Reddit comments, or the blog you wrote in high school, all without your explicit permission. If you’ve ever made a YouTube video, it may have been used by OpenAI to train ChatGPT.

“Everything you post or share on Facebook or Instagram can be used to train Meta AI.”

Mark Pesce, author and futurist

Tech giants, which initially collected data from all corners of the internet to build AI models, are now paying companies like News Corp, the Associated Press and Reddit for data, while continuing to mine their own users’ data for free.

The murky methods used by tech giants to train artificial intelligence, and the subsequent backlash from users who say they went too far, have raised questions about user consent and the direction of the internet.

The problem is not new: We’ve become accustomed to giving up our data to help make our online experiences more personalized. Netflix might recommend movies based on what you’ve seen, your fitness app might provide data to third-party companies to serve targeted ads, and your news app might track your location to give you more localized stories.

What’s new, however, is a push to allow Australian users to opt out of large-scale digital data scraping.

‘Few Australians have a detailed understanding of this’

Edward Santow, Australia’s former human rights commissioner, said the fact we can’t opt out of Facebook’s AI training warrants urgent privacy reform.

Former Human Rights Commissioner Edward Santow.

“The fact that Australians don’t have an ‘opt-out’ right reflects that Australian legislation is outdated and really needs to be modernised to deal with these types of issues,” said Santow, now a professor of responsible technology at the University of Technology Sydney.

“Australians should not be treated with inferior privacy protections compared to places like the EU.”

Santow wants Australian privacy laws to be brought closer to privacy-focused Europe, giving users more control over how their personal information is used.

Privacy reforms have been years in the making, with legislation expected to be introduced later this year to overhaul Australia’s outdated privacy laws.

“We often hear from Australians that what they really want is clearer, stronger protections around when personal information can and can’t be used. This is a great example of where the government could draw some of the red lines more clearly,” Mr Santow said.


“I think very few Australians have a detailed understanding of how their data is used to train these large language models.

“I can say with reasonable certainty that the vast majority of Facebook and Instagram users have no idea that their posts could be used for this purpose.”

There are also concerns that ongoing cyberattacks and data breaches will inevitably expose the personal data used to train AI models.

Niusha Shafiabady, an associate professor at Charles Darwin University and an expert in artificial intelligence, said that when companies such as Meta collect user data to feed into artificial intelligence algorithms, those users face increased security and privacy risks.

“The use of AI to identify patterns is nothing new. It has been used for this and similar purposes before. Users are rightly concerned about their security… Using and collecting data from emails and different communication sources creates another layer of security risk for users,” she said.

“Users should decide how much benefit they want to gain from these technologies at the expense of their own privacy and security.”

Some companies have taken the opposite stance from Meta and declared that they will not touch user data to train their AI models.

Tech giant Apple said it does not use private data or user interactions to train its models, a clear distinction from Meta and Google.

Another such company is Leonardo Ai, an Australian startup with more than 15 million users. Its software competes with the likes of Midjourney and ChatGPT, and more than 1 billion unique works of art have been created on its platform.

The company’s latest model, called Leonardo Phoenix, is trained on licensed, synthetic and open-source data, rather than user data.

JJ Fiasson’s Leonardo Ai is one of the most sought-after companies in the Australian tech world. Credit: Oscar Coleman

Leonardo Ai co-founder JJ Fiasson said AI companies have a responsibility to clearly state whether user data is being used to maintain trust.

“We’ve already struck a number of agreements for large licensed data sets,” he said. “I believe that’s where the market is going, and I think it makes a lot of sense to do that.”


“I know there are concerns about the use of user data, and it’s something that people need to have a certain level of active consent to.”

Santow said the rapidly escalating AI arms race meant regulatory action was urgently needed.

“It’s probably going to take another year or so to finish that build process and then move into a different phase, which is actually tuning those large language models,” he said.

“So realistically, there is only a short window to ensure our privacy and intellectual property protections are enforced. When that window closes, it may be too late. That’s why we need to take urgent action, including by reforming Australia’s privacy legislation.”

