Broadcast United

Thomson Reuters’ Future of the Professional report is cautiously optimistic about AI in the legal sector

Broadcast United News Desk


Today, it is generally believed that generative AI can complete simple tasks but struggles with difficult tasks. So to what extent can generative AI save time or improve work efficiency?

Thomson Reuters, a professional services and technology company with interests in law, tax, compliance and more, explores how professionals can use artificial intelligence in their work in its Future of Professionals Report 2024. On the occasion of the report’s release, we spoke exclusively with David Wong, Chief Product Officer at Thomson Reuters, about generative AI in the workplace.

Thomson Reuters surveyed 2,205 legal, tax, risk and compliance professionals worldwide. The report did not specifically say “generative artificial intelligence” when asking about AI, but the capabilities it discusses generally relate to generative AI. In our conversation with Wong, we used “AI” as a general term for generative models that can produce images or text.

The percentage of professionals who believe AI will be “transformative” rose 10 percentage points

The report is optimistic about AI, predicting it will save substantial time. Of the respondents, 77% said they believe AI will have a “significant or transformative impact” on their work in the next five years, up 10 percentage points from last year’s report.

“I’m a little surprised by the uptick in strategic relevance because last year when ChatGPT and GPT-4 came out, you would have thought the hype cycle would have peaked and people would have been so excited,” Wong said.

However, interest in the strategic implications of AI extends beyond law firms to nearly all of the industries Thomson Reuters serves. The higher number, Wong said, may therefore reflect broader interest across industries rather than growing interest among law firms specifically.

The divide between those who are very cautious about AI and those who are very ambitious about AI

Wong noted an interesting divide between professionals who are cautious about generative AI and those who are ambitious. In the report, Thomson Reuters asked: “In one, three, and five years, approximately what percentage of the work your team currently does do you think will be done by humans versus AI?” The survey offered four possible answers — a range from AI-led to human-led work — to gauge whether professionals were cautious or ambitious about using AI in their jobs. It found that 13% of professionals fell into the “cautious” category, believing that even in five years only a small portion of their work would be done by AI assistants. At the other extreme was the “ambitious” category: 19% of professionals predicted that AI will do most of their work within five years.

“A lot of professionals have realized the practical implications and reality of these technologies,” Wong said. “Based on the trials over the past 12 months or so, we are now starting to see these professionals turn trials into implementation.”

What tasks can’t AI accomplish?

According to Gartner, expectations for generative AI were very high in 2023 but will likely drop again and then level off.

For legal professionals and other jobs covered in the Thomson Reuters report, “AI solutions are very good at any type of task where, frankly, you can provide a pretty good set of instructions,” Wong said.

As one report respondent put it, such tasks include researching, summarizing documents, or “working on high-level concepts that do not require specific legal references.”

What AI cannot yet do is make decisions on its own. AI companies hope it eventually will; in fact, taking autonomous actions on behalf of the user is level 3 of 5 in OpenAI’s new ranking of AI capabilities. But AI has not reached that level, Wong noted, and for the industries Thomson Reuters serves, the issue is as much about the technology’s capabilities as about people’s trust in it.

SEE: A modern enterprise data organization takes the right human team members to thrive.

“I think AI has not yet reached the point where it can make decisions on its own, at least in terms of trust,” Wong said.

Wong said that in many cases, AI “doesn’t perform as well as human reviewers on all but the simplest things.”

The report said 83% of legal professionals, 43% of risk, fraud and compliance professionals, and 35% of tax professionals believed that “using AI to provide advice or strategic recommendations” was “ethically … too much of a stretch.”

The majority of respondents (95% of legal and tax respondents) believe it is “a step too far” to “allow AI to represent clients in court or make final decisions on complex legal, tax, risk, fraud and compliance issues.”

“If you ask ‘What is the likelihood that you think AI will make the right decision or make decisions as good as humans?’ I think the answer might actually be different than ‘Is this ethical?’” Wong said.

Will everyone have an AI assistant in five years?

Despite these misgivings, Thomson Reuters boldly declares in its report: “Within five years, all professionals will have an AI assistant.” The report predicts these assistants will work like human team members and perform complex tasks.

Wong noted that part of the optimism comes from sheer numbers: the number of companies offering AI products has surged over the past two years, including the biggest smartphone makers.

“Almost everyone with an iPhone 15 and above and iOS 18 will have an AI system in their pocket,” Wong said. “I believe in a few years, on every new version and every Apple device, you will have access to this assistant. Microsoft is also actively rolling out Copilot. I think in a few years, it will be hard to have a version of Microsoft 365 without Copilot.”

SEE: Learn all about Microsoft Copilot via TechRepublic Cheat Sheet.

In addition to potentially using AI to create, analyze, or summarize content, organizations are also considering how AI can be used to transform their products or production processes. According to the report, the majority of executive respondents believe AI will have the most significant impact on their operational strategy (59%) and product/service strategy (53%).

“I think this is something that almost every company is looking at right now, which is that there’s a lot of routine, rote work that goes into running a business that you can describe in a manual,” Wong said.

These rote tasks are well suited to AI, and in the legal field, he said, AI could transform the way companies file regulatory or statutory documents.

What Responsible “Professional-Grade” AI Looks Like

What does responsible use of artificial intelligence look like to respondents? Many believe data security is a key part of responsible AI use. Others cited:

  • Data security during the prompt or query steps.
  • Mandatory review of AI output by professionals.
  • Careful consideration of the tasks for which AI can be used.
  • Transparency about where response data comes from.

“If someone says [generative AI] is perfect and doesn’t hallucinate or make mistakes, then they are either fooling themselves or the claim should be subject to strict scrutiny,” Wong said. “But what you want is transparency into performance.”

Responsible AI systems used in professional fields should be based on verified content, be measurable, and be able to cite their references, he said. They should be built with safety, reliability, and confidentiality in mind.

ChatGPT is “the worst example of a generative AI solution for professionals because it doesn’t meet these needs,” Wong said. “But you can actually design a ChatGPT that is privacy-safe, respects confidentiality, and doesn’t train on data. These are design choices of the system. They are not inherent to AI.”

