Broadcast United

Companies seek to balance AI innovation and ethics, says Deloitte

Broadcast United News Desk


As generative AI grows in popularity, organizations must consider how to deploy it ethically. But what does ethical deployment of AI look like? Does it involve controlling human-level intelligence? Preventing bias? Or both?

To assess how companies are handling this issue, Deloitte recently surveyed 100 C-level executives at U.S. companies with annual revenues between $100 million and $10 billion. The results show how corporate leaders are integrating ethics into their generative AI policies.

The top priority for AI ethics

Which ethical issues do these organizations consider most important? Respondents prioritize the following in AI development and deployment:

  • Balancing innovation and regulation (62%).
  • Ensuring transparency about how data is collected and used (59%).
  • Addressing user and data privacy issues (56%).
  • Ensuring transparency in how enterprise systems operate (55%).
  • Mitigating bias in algorithms, models, and data (52%).
  • Ensuring systems are reliable and functioning as expected (47%).

Higher-revenue organizations (US$1 billion or more in annual revenue) are more likely than smaller companies to say their ethical frameworks and governance structures encourage technological innovation.

Unethical uses of AI include providing misinformation (especially relevant during election season) and reinforcing bias and discrimination. Generative AI can accidentally replicate human biases by copying what it sees, or bad actors can use generative AI to intentionally create biased content more quickly.

Generative AI's rapid writing capabilities could also be exploited to produce phishing emails. Other potential unethical use cases include AI making critical decisions in warfare or law enforcement.

In September 2023, the U.S. government and major tech companies agreed to voluntary commitments to develop standards for disclosing the use of generative AI and the content it produces. The White House Office of Science and Technology Policy has also released a blueprint for an AI Bill of Rights, which includes anti-discrimination efforts.

Starting in January 2024, U.S. companies of a certain size that use AI to perform high-risk tasks must report information to the Commerce Department.


“For any organization adopting AI, the technology carries the potential for both positive outcomes and the risk of unintended consequences,” Beena Ammanath, executive director of the Deloitte Global AI Institute and leader of Deloitte Trustworthy AI, said in an email to TechRepublic.

Who is making ethical decisions about AI?

In 34% of cases, AI ethics decisions come from a supervisor or higher. In 24% of cases, professionals make AI decisions independently. Less often, business or department leaders (17%), managers (12%), professionals with mandatory training or certification (7%), or AI review boards (7%) make AI-related ethical decisions.

Large companies (with annual revenue of $1 billion or more) are more likely to allow employees to independently decide whether to use AI than companies with annual revenue of less than $1 billion.

The majority of executives surveyed (76%) said their organizations conduct AI ethics training for employees, and 63% said they conduct it for the board of directors. Employees in the build phase (69%) and the pre-development phase (49%) receive AI ethics training less frequently.

“As organizations continue to explore the opportunities presented by AI, it’s encouraging to see governance frameworks emerging that empower employees to advance ethical outcomes and make a positive impact,” said Kwasi Mitchell, Deloitte U.S. Chief Purpose and DEI Officer. “By adopting processes designed to foster accountability and maintain trust, leaders can build a culture of integrity and innovation that enables them to effectively harness the power of AI while promoting equity and driving impact.”

Are organizations hiring and developing talent in AI ethics?

Organizations surveyed have hired or are planning to hire for the following positions:

  • AI researchers (59%).
  • Policy analysts (53%).
  • AI compliance managers (50%).
  • Data scientists (47%).
  • AI governance specialists (40%).
  • Data ethicists (34%).
  • AI ethicists (27%).

Most (68%) of these professionals came from internal training and upskilling programs. Fewer were sourced through external channels such as traditional recruiting or certification programs, and fewer still through campus recruiting and partnerships with academic institutions.

“Ultimately, businesses should have confidence that their technology can be trusted to protect the privacy, security, and fair treatment of users, and that it aligns with their values and expectations,” Ammanath said. “An effective approach to AI ethics should be based on the specific needs and values of each organization, and businesses that implement a strategic ethics framework generally find that these systems support and drive innovation, rather than hinder it.”

