Knowing where an AI’s “point of no automation” lies is a vital step in integration, as it maps out the limitations of a system.
“Point of no automation” refers to tasks and roles AI cannot fill because they require genuine human intervention, such as making subjective decisions, understanding human motivation or weighing personal values.
In a human-centric profession such as financial advice, knowing exactly what tasks cannot, and should not, be automated is a core part of integration.
“There is a point of no automation in advice and it lives in the grey space of human judgement, trust and values,” Complete Wealth financial adviser Dr Ben Neilson told ifa.
As it stands, the majority of AI integration in the advice workplace is centred around automating essential but repetitive tasks that are core to the advice process. This could include document generation, compliance checks and gathering client personal information.
However, many, including Neilson, have suggested that AI should be even more integrated into the process.
“I think one of the biggest areas of untapped potential for AI in financial advice is in delivering truly personalised, real-time planning for everyday clients,” he said.
“There’s a huge opportunity to use AI to provide tailored, goals-based guidance that adjusts dynamically as a client’s life evolves.”
Currently, there are only a few AI platforms specialised for advisers, with most practitioners relying on pre-existing tools such as ChatGPT. But that does not mean there has not been progress, with Neilson highlighting that “AI has definitely pushed the development and progression forward by helping us tailor advice to real-time client behaviour and context, not just static profiles”.
With further development, the profession will come to understand where the point of no automation sits. The current understanding of AI, however, points to one obvious limitation: AI’s inability to feel.
“No matter how advanced AI gets, it can’t fully grasp the emotional weight behind certain decisions – like when a client chooses to support a struggling family member over maximising retirement savings, or when legacy, guilt or fear play a role in financial choices,” Neilson said.
“These aren’t spreadsheet decisions; they’re human ones.”
Neilson said this space “where logic meets life” is the key point of no automation for AI in the advice workplace, with the technology unlikely to ever be capable of quantifying the abstract, human aspects of advising.
AI’s own propensity for mistakes also means a human adviser needs to be present to sign off on any decisions it makes.
“Just as organisations need to mitigate against model risk where adverse outcomes can occur based upon incorrect or misunderstood data, [so too can AI make these mistakes]. AI systems are prone to hallucination. It follows that human oversight [is] required to mitigate against hallucination risk,” Investment Trends CEO Eric Blewitt said.
AI can be a tool for good in the advising workplace, helping to automate processes, enrich advice and free up time for practitioners to spend more face-to-face time with clients. However, knowing what processes are not appropriate for automation, and where human oversight is needed, is core to effective integration.