The financial capability of artificial intelligence platforms is improving to the extent that they will likely be able to replace human financial advisors in the future, according to finance experts.
However, AI has a major drawback relative to human advisors, they said: a lack of fiduciary duty. And a resolution to that legal gray area doesn't seem near at hand.
A fiduciary duty is a legal obligation that many financial advisors — and professionals in other fields, such as lawyers and doctors — owe their clients. It essentially means they will put their clients’ best interest ahead of their own.
“The problem that we have to solve is not whether AI has enough expertise,” said Andrew Lo, a finance professor and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management. “The answer right now is, clearly, AI has the [financial] expertise.”
“What they don’t have is that fiduciary duty,” Lo said. “They don’t have the ability to suffer consequences if they make a mistake to the same degree that a human advisor does.”
An advisor who violates their fiduciary responsibility can be subject to fairly serious consequences, including regulatory penalties, civil liabilities and criminal charges, Lo said.
The notion of putting a client’s interest ahead of yours “has no teeth” without responsibility or legal liability, he said.
An ‘unresolved’ legal question

Many people seem to be turning to large language models — examples of which include OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini — for financial advice.
Two-thirds of Americans, or 66%, who have used generative AI say they have used it for financial advice, according to an Intuit Credit Karma poll published in September. The share swells to 82% for millennials and Generation Z.
About 85% of respondents who have used GenAI for financial advice acted on the recommendations provided, according to the survey, which polled 1,019 adults.
“People are looking to these services for all sorts of advice, and they’re getting it, and it seems to be a big open regulatory question,” said Sebastian Benthall, a senior research fellow at New York University School of Law’s Information Law Institute.
“Who’s really responsible, and can people really be relying on a product to do this if it’s not being backed up by a corporation with a fiduciary duty?” Benthall said. “It’s really unresolved.”
Why you shouldn’t blindly trust AI — or humans
That said, there are some good use cases for AI in financial planning, Lo said.
AI is “really good” at providing online resources that explain financial concepts the typical person doesn’t understand, Lo said. For example, if someone were to seek answers to basic questions about Medicare, AI can generally provide a reliable overview, he said.
While AI’s output is sophisticated in many financial respects, consumers generally shouldn’t blindly trust answers to questions about their own household finances, Lo said.
“When it comes to very, very specific calculations of your own personal situation, that’s where you have to be very, very careful,” he said. “One of the things about LLMs that I find particularly concerning is that no matter what you ask it, it’ll always come back with an answer that sounds authoritative, even if it’s not.”
In this sense, double and triple checking an AI’s answers is “really necessary,” he said.
Perhaps surprisingly, AI isn’t strong at doing financial calculations, Lo said — so any numbers-based financial planning questions involving your taxes, for example, are generally best avoided.
James Burnham, a legal and government affairs official at Elon Musk’s xAI, said in a social media post in March that the company’s AI platform, Grok, “is not tax advice so always confirm yourself too.”
Of course, many human financial advisors provide advice to clients, and it is then up to the client to decide whether to implement it.
“I think that’s the way that I would look at LLMs: They can be very, very useful in providing different options and in describing how those options might work, but you should always remember that the advice that they can give you could be wrong,” Lo said.
“But I would argue that that’s true with human financial advisors as well,” he said.
Not all human advisors are fiduciaries
Not all human financial advisors are fiduciaries, either.
The landscape of financial advice is a minefield of different legal relationships. Those legal duties can differ depending on factors such as whether the person a consumer is talking to is a stockbroker, registered investment advisor, insurance agent or other intermediary.
For example, a U.S. Labor Department rule issued during the Biden administration sought to bestow a fiduciary duty on intermediaries that recommended rolling money from a 401(k) plan over to an individual retirement account, a move that can involve hundreds of thousands of dollars.
However, that rule recently died after the Trump administration stopped defending it in court — meaning many financial intermediaries aren’t beholden to a fiduciary duty regarding rollover advice. As a result, legal experts recommend consumers approach such rollover recommendations with caution, due to the potential for conflicts of interest.

Benthall, of New York University, described a similar legal predicament regarding AI advice: because the major AI companies are currently U.S.-based, an AI recommendation that investors put their retirement savings into U.S. stocks could be viewed as self-dealing, or a financial conflict of interest.
That said, companies that provide AI services don’t appear to receive compensation for their advice to retail investors, and therefore aren’t fiduciaries, said Jiaying Jiang, an associate law professor at the University of Florida Levin College of Law who is researching AI and fiduciary duty.
However, financial advisors who owe a fiduciary duty to clients could violate that duty by using AI, Jiang said.
For example, if an advisor uses AI to give a certain recommendation to a client, but that recommendation isn’t in the client’s best interest, it is the advisor — and not the company backing the AI platform — that would be liable, Jiang said.
Ultimately, Lo said he thinks government policy needs to change to provide fiduciary protections for consumers who get financial advice from AI.
Until then, “we’re not going to get to the point where we can fully delegate these [financial] decisions,” Lo said.
“But I do believe that that will eventually happen,” he said.
