Introduction
This report is the final output of the Australian government’s review of the legislation and regulation of safe and responsible artificial intelligence (AI) in the healthcare sector. It examines Australia’s existing legislative frameworks, draws on stakeholder views gathered through public consultation to assess how AI applications in medicine interact with current laws and regulations, and explores the potential role of non-regulatory measures. The report’s focus is to identify gaps and deficiencies in existing legislation and to propose recommendations that promote the safer and more responsible application of AI in healthcare.
Overall Strategy of the Australian Government
The report begins by outlining the Australian government’s overall strategy in the field of AI, which aims to capture the benefits of AI while building public trust. The government has taken five action steps: providing regulatory clarity; supporting and promoting best practice governance; facilitating AI capability building; positioning the government as a role model; and initiating international cooperation. The 2024-25 Budget allocated funding to prioritize reviews of consumer, intellectual property, and healthcare laws to understand their applicability to AI. The Department of Health has undertaken public consultations aimed at clarifying and strengthening the legislation and regulation of AI within the Australian healthcare environment.
Key Findings
The report’s core consists of seven major findings:
Finding 1: Current Regulatory Landscape
The regulation of healthcare in Australia is a complex network involving state/territory and national legislation. Existing national frameworks (such as privacy laws, consumer laws, therapeutic goods laws, and regulations concerning healthcare professionals) affect the use of AI in healthcare. The legislation reviewed by the Department of Health is mostly management-oriented, underpinning core healthcare systems such as aged care, Medicare benefits, pharmaceutical benefits, My Health Record, and medical identifiers. Some minor technical amendments may be needed to enhance clarity. For healthcare products not covered by existing regulations, all-encompassing AI safeguards (including pre-market and post-market requirements) can provide clarity to build trust and promote innovation.
Finding 2: National AI Healthcare Policy Leadership
Public consultations identified a need for national and central policy guidance to facilitate equitable benefits from AI within Australia’s healthcare sector. There is widespread support for AI guidelines addressing the complexities of healthcare and extending existing recommendations from the National AI Centre. This may include initiatives regarding the ethical, secure, and responsible use of sensitive medical data in AI technologies; unique governance risks associated with the adoption and implementation of AI in healthcare services; evidential understanding of AI’s impacts on specific demographics; timely guidance on emerging products with wide-ranging impacts to keep pace with rapid AI innovation; and the influence of AI on various medical specialties.
Finding 3: Resources and Support for Safe Implementation
A knowledge gap exists surrounding the safe and responsible implementation of AI. There is a lack of high-quality, up-to-date guidelines that support evidence-based implementation of AI in healthcare, as well as considerations for human-machine interactions in healthcare settings. Guidance on evaluating and validating new AI technologies throughout the product life cycle could include: assessments of the suitability of the AI usage environment; quality and relevance evaluations of datasets used for training and validation; monitoring output accuracy; support for product selection that meets the needs of specific populations and healthcare services; support for AI implementation in human-machine teams; and trial support for the implementation of AI in clinical environments.
Finding 4: Need for High-Quality, Trusted Information for Healthcare Professionals and Consumers
A centralized source of high-quality and trusted information would support consumers and clinicians in making informed decisions regarding AI products. The presence of low-quality and misleading information about AI in healthcare can adversely affect decision-making. Furthermore, using AI to generate information about healthcare may result in poor-quality outputs. Access to trustworthy, accurate, reliable, and timely informational sources about AI in healthcare is essential to support its safe and responsible use.
Finding 5: Realizing Benefits
Currently, there is insufficient evidence demonstrating the benefits of AI in healthcare. Establishing foundational elements, such as a benefits framework with qualitative and quantitative indicators, would offer insight into the benefits AI can bring to healthcare. This would enable stakeholders to identify the most effective applications of AI within healthcare systems and would support investment decisions and the equitable distribution of AI’s benefits.
Finding 6: Strengthening Data and Informed Consent Management
Data and informed consent risks need to be well managed throughout the AI lifecycle. Regulations should clearly outline when and how AI accesses and uses data, and who is accountable for the responsible use of patient data. It is essential to clarify who owns patient data and to strengthen patient informed consent practices surrounding AI data use. Governance frameworks need to address risks associated with data bias, data accuracy, data embedding, and data access. Some responses indicated that synthetic data resources could address certain challenges related to data access and representation in healthcare research. All-encompassing mandatory safeguards, including data governance measures and supply chain transparency, can help mitigate risks associated with the use of AI in high-risk environments.
Finding 7: Incentives to Support Best Practices
A framework of incentives that encourages industry to provide high-quality AI technologies tailored to the Australian market can promote best practices. Incentivizing the development of best-practice AI products that are high-quality, accurate, safe, and fit for purpose can help ensure benefits for all Australians, while also reducing the risk of harm from low-quality products.
Key Themes from Public Consultation
The report also discusses several key themes that emerged from public consultations, including levels of understanding of AI, bias, consent and transparency, data, evidence base, and human-machine interactions. The consultation results reveal varying levels of understanding of AI among stakeholders, with many expressing a lack of understanding or reliance on media information. Bias, consent, and data security are primary concerns, with many respondents emphasizing the need for better representation of Australian demographics in AI product development and ensuring informed consent from patients regarding the use of AI. Nearly all respondents believe that human involvement in decision-making processes (“human in the loop”) is essential in high-risk medical scenarios that involve patient safety.
Conclusion
The report concludes that Australia’s existing legislative framework is largely adaptable to the application of AI; however, some technical and definitional amendments are needed to enhance clarity. Additionally, comprehensive AI safeguards as well as non-regulatory measures can further bolster protections to ensure a safer and more responsible application of AI within the healthcare sector, while equitably realizing the benefits of AI. The report emphasizes the importance of establishing a strong evidence base, providing high-quality trustworthy information, and incentivizing best practices in the industry.