With rapidly advancing AI technology giving advisers the ability to try new things at the drop of a hat, the question now is, what will you do with this power?
Recent years have seen AI technology evolve at an unprecedented rate, with many facets of life now touched by it in some way. This has been particularly true for advice practices, with technology providers promising, with varying degrees of success, that their new AI tools will revolutionise how advisers work.
However, with so much power at their fingertips, industry academic and consultant Dr Katherine Hunt suggested that arguably one of the biggest challenges when it comes to using AI is how to do so in a way that is ethical and protects clients’ rights and information.
Drawing on Greek mythology, Hunt used the example of King Midas, who wished that everything he touched would turn to gold. His wish was granted, but he came to regret it.
“Imagine, I just want the AI tech stack that’s going to make our lives better. Fantastic. Now, what are you going to do with that power, and how is it going to impact the people around you? Because, for King Midas, when he went to hug his daughter, she turned to gold, which was not what he wanted,” Hunt said.
The lesson of the story is that what seems like a blessing can turn out to be a curse with significant consequences. And much like the golden touch, Hunt warned, believing AI will fix all your problems could end up being detrimental to advisers.
Although there is clearly value in utilising the technology available to make advice processes more efficient, Hunt reiterated the need to consider the potential risks and consequences when introducing new technologies into the practice.
“We really need to make sure that we are including that lens of, this is great, let’s make sure we’re doing this right. We need to make sure that we’re keeping that focus on the stakeholders at all times,” she said.
While research and due diligence should be standard practice for every business before they bring in a new piece of technology, Hunt recommended that advisers should also be conducting a privacy impact assessment any time a new AI system or piece of technology is introduced that will handle client information.
Another crucial element of the AI discussion is consent, with advisers required by law to get clients’ explicit consent when using AI technology within the business.
“The Privacy Act and the APPs – the Australian Privacy Principles – require us to make sure that there’s consent and transparency around our AI use, which is kind of complicated, isn’t it, because we know that as soon as we tell our clients we’re using AI, they are going to know that AI means overseas servers, and that’s worrying, especially right now, with our global situation, which is unlikely to change in the next 50 years, there’s always something going on,” Hunt said.
“And then there is, of course, the reputational challenge, isn’t there? Making sure that it’s very clear how we’re using it, why we’re using it, and how we’re protecting our clients’ privacy first and foremost.”
Another key factor when using AI is ensuring there is human oversight of whatever the technology produces, to make sure its outcomes are equitable, unbiased and true.
As Hunt explained, ethics is a socially constructed concept, so AI systems are not inherently confined to what society considers equitable behaviour.
As such, maintaining human checkpoints within the system is crucial, because if something goes wrong, the adviser or the business will be held accountable for the output, even if it was AI generated.
“The buck stops with you, the financial adviser, on everything. That’s the system right now. So, we really need to make sure we’re rolling this out properly,” Hunt said.
“Making sure that there’s logs that we’re downloading and adding to our own audit trail when we’re using these tools, so that we implement oversight regardless of whether the system has it or not.”
However, Hunt was adamant that all of this shouldn’t mean that AI needs to be avoided altogether, with many advisers experiencing positive outcomes from using the technology.
Referencing Nvidia’s global State of AI in Financial Services report released in February, Hunt noted that around seven in 10 (68 per cent) advice practices globally experienced revenue growth of at least 5 per cent as a result of using AI.
At the top end of the scale, 23 per cent of practices increased their revenue by more than 20 per cent.
While increasing revenue is important, Hunt noted that being able to reduce business costs is also crucial, particularly in Australia, where operating costs are a regular topic of discussion in the advice profession.
According to Nvidia, 12 per cent of global respondents reported a reduction in annual costs of more than 20 per cent, 17 per cent reported a drop of between 10 and 20 per cent, and just over a third (35 per cent) saw a decrease of 5 to 10 per cent.
Hunt added: “Your competitors are using it right now to increase their revenue and to decrease their costs. So in case you ever needed a little fire under you, that was the fire.”