How can you build trust with AI?
Building trust with AI involves having transparent industry standards
Paper: Building Trust: Foundations of Security, Safety and Transparency in AI (27 pages)
Researchers from Red Hat are interested in introducing trust, safety, and transparency into AI systems. The rapid advancement of AI, particularly generative AI, has driven significant market growth and adoption across many sectors. That growth has made AI security and safety harder to manage, because AI systems differ fundamentally from traditional software applications. Inadequate governance, weak self-regulation, and the lack of processes for handling flaw and hazard reports remain significant obstacles to ensuring AI security and safety.
Hmm... What’s the background?
The current industry trend often prioritizes speed to market over thorough safety testing and ethical considerations. Successful self-governance examples in the tech industry, such as the adoption of HTTPS and signing technologies, provide valuable insights for the AI space.
Efforts are underway to standardize processes for AI security and safety, including initiatives by organizations like the AI Alliance, MLCommons, and the Coalition for Secure AI (CoSAI).
So what is proposed in the research paper?
In the research paper, the Red Hat researchers propose the following:
Model providers need to establish mechanisms for reporting security issues and assign CVE IDs to confirmed vulnerabilities
A central body is needed to track and manage safety hazards similar to how CVEs track security vulnerabilities
This central body would assign CFE numbers and manage the tracking and reporting of safety hazards
A standardized reporting format would convey information about the exposure and impact of safety hazards in a consistent, machine-readable way
There is a need for collaboration between model producers, consumers, legislative bodies, and law enforcement agencies to effectively implement these proposals and create a more responsible, ethical, and trustworthy AI ecosystem.
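To make the proposals above concrete, here is a minimal sketch of what a machine-readable safety-hazard record submitted to such a central tracker might look like. The field names (`cfe_id`, `exposure`, `impact`, `affected_models`) and the `CFE-2025-0001` identifier are illustrative assumptions, not a schema defined in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class HazardReport:
    """Hypothetical safety-hazard record, modeled loosely on CVE entries."""
    cfe_id: str                  # illustrative "CFE" ID assigned by a central body
    summary: str                 # short description of the safety hazard
    exposure: str                # how users can encounter the hazard
    impact: str                  # severity of the potential harm
    affected_models: list[str] = field(default_factory=list)

    def as_record(self) -> dict:
        """Serialize for submission to a hypothetical central tracker."""
        return {
            "cfe_id": self.cfe_id,
            "summary": self.summary,
            "exposure": self.exposure,
            "impact": self.impact,
            "affected_models": self.affected_models,
        }

report = HazardReport(
    cfe_id="CFE-2025-0001",
    summary="Model returns unsafe instructions under prompt injection",
    exposure="Reachable through any chat interface exposing the model",
    impact="High: harmful content delivered to end users",
    affected_models=["example-model-7b"],
)
```

The point of a shared record shape like this is that producers, consumers, and a central tracking body could all exchange the same structured information, much as CVE records do for software vulnerabilities.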
What’s next?
To ensure AI security and safety, the paper recommends implementing clear reporting mechanisms, standardized model cards, and collaborative safety taxonomies; promoting standardized evaluations, centralized hazard tracking, and HEX formats; distinguishing AI security from AI safety; and advocating ethical, responsible AI practices. Together, these measures build trust and reliability in the AI ecosystem for societal benefit.
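As an illustration of how a standardized model card and a reporting mechanism fit together, here is a small sketch. The keys below are common model-card fields plus a security contact; they are assumptions for illustration, not a format specified by the paper.

```python
# Hypothetical standardized model card as plain data. Field names
# (security_contact, known_hazards, evaluations) are illustrative.
model_card = {
    "model_name": "example-model-7b",
    "provider": "Example Org",
    "intended_use": "General-purpose text generation",
    "out_of_scope_use": ["Medical or legal advice"],
    "security_contact": "security@example.org",   # where to report security issues
    "known_hazards": ["CFE-2025-0001"],           # hypothetical central-tracker IDs
    "evaluations": {"safety_benchmark": "pass"},  # standardized evaluation results
}

def has_reporting_mechanism(card: dict) -> bool:
    """Check that a model card exposes a security-issue reporting contact."""
    return bool(card.get("security_contact"))
```

A consumer, auditor, or registry could run simple checks like `has_reporting_mechanism` across many cards, which is only possible once the card format is standardized.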
Learned something new? Consider sharing it!