As artificial intelligence systems become more prevalent, researchers are increasingly uncovering harmful outputs such as hate speech, copyright violations, and inappropriate content. Experts argue that a lack of robust regulations and comprehensive testing is compounding these problems. Techniques like red teaming and open, third-party evaluation are recommended to more effectively identify and mitigate the risks of AI models. Initiatives such as Singapore’s Project Moonshot are working to standardize AI evaluation, but industry adoption and regulation still lag behind sectors like pharmaceuticals and aviation. Researchers propose that AI models must meet strict approval criteria and be more narrowly tailored to specific tasks to minimize misuse and ensure greater oversight and safety.































