
What are the insurance implications for advisers implementing robo-advice?


Let's start at the beginning. The purpose behind insurance is to protect the consumer from an error in the delivery of the professional service. So what happens if the advice is delivered by an algorithm?

Insurers will always ask three key questions. What can go wrong? Who is responsible? How bad can it get? Let's look at each in turn. For the purposes of this blog, we will assume that the adviser adopts an off-the-shelf solution rather than building their own.

What can go wrong?

1. Due to the nature of robo-advice, where the 'client' inputs their own data, we may see inaccurate inputs: garbage in, garbage out.

2. The algorithm is incorrect and that leads to ‘poor advice’.

3. There is an IT glitch (some random/unforeseen matter) that creates 'bad advice'. Even the largest software providers in the world have these glitches, so it is entirely possible that a robo-advice solution may experience glitches of its own.

On the flip side, humans are prone to error and may inadvertently fail to replicate a compliance process (IFA PI claims will attest to this). Humans have proved time and time again that they just can't stick to the script and often fail to deliver every aspect of the advice. This is where a smart algorithm can actually reduce risk. The algorithm never fails to ask the questions and always delivers a consistent experience. There is no chance of 'he said, she said' in a dispute. Robo-advice programs also come with the assurance that mandatory disclosures are complete and always given at the right time – a distinct advantage over human advisers.


Who is responsible?

We must consider where responsibility lies when a person is dissatisfied with advice from the algorithm that has resulted in an unacceptable outcome. Who do they sue? The IFA or the robo-advice platform? Ultimately, the IFA has earned a fee, 'carries risk' and remains responsible for all decisions, actions and potential harm arising from advice carried out in association with their firm. ASIC has affirmed that, at the end of the day, the same laws and obligations for giving advice apply to digital advice: it views the legislation as technology-neutral in the obligations it imposes.

Even when the client has incorrectly input data, section 961B of the Corporations Act imposes a duty on the adviser: where it is reasonably apparent that information relating to the client's relevant circumstances is incomplete or inaccurate, the adviser must make reasonable inquiries to obtain complete and accurate information. While a human adviser would likely detect a fat-finger input error, an algorithm may not.
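To make the fat-finger point concrete, below is a minimal sketch in Python of the kind of plausibility check a robo-platform could run before generating advice. Everything here is a hypothetical illustration: the field names, thresholds and referral logic are assumptions for the example, not any real platform's design.

```python
# Hypothetical sketch: plausibility checks a robo-advice platform could
# run on client inputs before any advice is generated. Field names and
# thresholds are illustrative assumptions, not a real platform's API.
from dataclasses import dataclass

@dataclass
class ClientProfile:
    age: int
    annual_income: float     # AUD per year
    total_assets: float      # AUD
    monthly_expenses: float  # AUD per month

def sanity_check(profile: ClientProfile) -> list[str]:
    """Return warnings; an empty list means no obvious red flags."""
    warnings = []
    if not 18 <= profile.age <= 110:
        warnings.append("age outside plausible range")
    if profile.annual_income > 5_000_000:
        warnings.append("income unusually high; possible extra-zero typo")
    if profile.monthly_expenses * 12 > max(profile.annual_income, 1) * 3:
        warnings.append("expenses far exceed income; confirm inputs")
    if profile.total_assets < 0:
        warnings.append("negative assets entered")
    return warnings

# A fat-finger entry: the client meant $85,000 but typed an extra zero.
profile = ClientProfile(age=42, annual_income=8_500_000,
                        total_assets=120_000, monthly_expenses=4_000)
if flags := sanity_check(profile):
    # Rather than advising on garbage inputs, escalate to a human.
    print("Refer for manual review:", flags)
```

The specific thresholds matter less than the design choice: where inputs look implausible, the system escalates to a human rather than advising, mirroring the 'reasonable inquiries' the Act expects.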

In the event of a claim, the questions will be:

  • Who designed the algorithm?
  • Who designed the questions?
  • Who assumes the risk?

What is critical for advisers adopting an off-the-shelf solution is the contracts they enter into. We often see hold harmless clauses or limited liability clauses inserted by institutions. If you sign a contract that holds a third party harmless or limits their liability, you prejudice your insurer's rights of recovery, and your insurer may decline to pay a claim as a result. IFAs must be very vigilant and ensure, before they sign anything, that they understand exactly where they stand with their insurer with regard to subrogation of rights. This applies not just to robo-advice but to all agreements they enter into.

How bad can it get?

It could get very bad. The very nature of robo-advice is that it can handle high volumes of lower-value clients. What happens if an error goes undetected for months and multiple clients are affected? Let's assume your robo-platform is a raging success and 500 new clients sign up over a 12-month period; however, unbeknown to you, there is an error producing poor advice that only comes to your attention 12 months after launch. Of the 500 new clients, 100 are affected. A systemic issue with a robo-solution can quickly spread to multiple clients. How does your insurance respond and, more importantly, how is your deductible structured? Most PI policies apply the deductible on an 'each and every claim' basis, meaning each of these 100 claims attracts its own deductible. With a typical $15,000 deductible for each and every claim, you would be liable for the first $1,500,000. This scenario is not only potentially fatal for your business; it could also leave you in breach of RG 126, as ASIC considers whether a licensee has sufficient cash flow to meet the excess on a reasonable estimate of claims a key determinant of whether a PI insurance policy is adequate.
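To set out the deductible arithmetic explicitly, here is a trivial worked example in Python, assuming the figures above ($15,000 each-and-every-claim deductible, 100 affected clients):

```python
# Illustrative arithmetic for the scenario above: a systemic error
# affecting 100 clients under an "each and every claim" deductible.
affected_clients = 100
deductible_per_claim = 15_000  # AUD, as in the example above

total_retained = affected_clients * deductible_per_claim
print(f"Licensee bears the first ${total_retained:,} before cover responds")
# -> Licensee bears the first $1,500,000 before cover responds
```

Some policy wordings aggregate related claims arising from a single originating cause into one claim attracting one deductible; whether yours does is a question of contract wording, and exactly the kind of detail to confirm with your broker before launch.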

Robo-advice clearly has a place in the Australian financial services landscape, but what’s important is that licensees get the right advice around all the implications of adopting this new technology. Every new advancement in business has its challenges and every challenge has its solution.


Oscar Martinis, senior partner at McDougall Kelly & Martinis