An Australian business reporter recounts engaging with what appeared to be an AI-operated company spokesperson that denied being a bot, spotlighting the growing risks of undisclosed automated communications. The episode, alongside cases such as a retailer's chatbot dispensing illegal electrical advice and a Canadian airline's failed attempt to disown its bot's misstatements, underscores the liability and trust challenges posed by "AI hallucinations." With Canberra leaning on existing laws rather than a new AI act in the near term, academics warn that the window to mandate disclosure and accountability is closing. Public confidence remains fragile, and skepticism in Australia runs especially high; without clear rules on when and how AI is used in decisions and customer interactions, retrofitting transparency later could prove costly, and restoring confidence harder still.
Related articles:
NIST AI Risk Management Framework
The European Approach to Artificial Intelligence
ICO Guidance on AI and Data Protection
OECD Principles on Artificial Intelligence