According to Holley Nethercote lawyer Tali Borowick, the lessons from 2025 paint a picture of stricter compliance obligations moving forward and more accountability for “algorithmic decision-making”.
“Whether you describe it as the ‘AI revolution’, ‘AI bubble’, or an ‘industry reshape’, 2025 has seen artificial intelligence affect industries across Australia, including financial services,” Borowick said.
“Specifically, we have seen questions of best use, appropriate reliance on AI-driven models, evolving risk management frameworks, data breaches, scams and increased regulatory scrutiny.”
Outlining a range of developments throughout the year – from ASIC floating the use of AI in assessing misconduct reports to the release of the National AI Plan at year's end – she noted that there is no expectation that “sweeping AI-specific legislation” will be introduced in the immediate future.
However, there is still a wide range of obligations under existing laws that “intersect with licensees’ usage of AI”.
“Indeed, the challenges raised by the use of AI are complex and inherently linked with other considerations, such as data governance, cybersecurity, privacy and ethics practices,” Borowick said.
She added: “ASIC’s enforcement actions against licensees offer valuable insights into the regulator’s expectations when it comes to protecting your firm (and your clients) from data breaches.
“For example, ASIC has pursued financial service providers for cybersecurity failures. In March 2025, the regulator brought an action against FIIG, alleging the licensee failed to take appropriate steps to protect itself and clients from cybersecurity risks over a four-year period.
“These failures enabled a hacker to enter the FIIG network, resulting in the theft of confidential information from 18,000 clients. Additionally, ASIC alleges that FIIG did not investigate or respond to the hack until a week after it was notified of its occurrence.”
This was far from the only enforcement action the regulator has taken, including in relation to scams. However, the crux of the issue for financial services professionals is how to balance use of the new technology with their duties to clients.
“Many financial services businesses, and their staff, are adopting AI and taking advantage of the many benefits that AI can produce,” Borowick said.
“However, how can licensees balance the benefits of AI with the potential increased risk of scams or data breaches?”
According to Holley Nethercote, the key is ensuring firms have comprehensive policies and procedures in place, preparing and training staff, and strengthening fraud detection and monitoring.
“As 2025 draws to a close, one trend stands out above all others: the regulatory landscape for artificial intelligence in financial services is changing. In 2026, we can expect a wave of new and strengthened legal frameworks aimed at governing how AI is deployed, monitored, and controlled across the sector,” Borowick said.
“From stricter compliance obligations under emerging global standards to enhanced accountability for algorithmic decision-making, regulators will demand greater transparency, fairness and risk management.”
Firms need to move beyond experimentation, she added, and into a phase of “robust governance, embedding ethical AI principles, implementing rigorous audit trails and ensuring data integrity at every stage”.
“Those who act now to align with these evolving requirements will not only mitigate risk but also gain a competitive edge in an increasingly regulated environment. The message is clear: 2026 will not just be another year of innovation, it will be the year of accountability.”