U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

30-Apr-24

The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats.


“These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems,” the Department of Homeland Security (DHS) said Monday.


In addition, the agency said it’s working to facilitate safe, responsible, and trustworthy use of the technology in a manner that does not infringe on individuals’ privacy, civil rights, and civil liberties.


The new guidance covers the use of AI to augment and scale attacks on critical infrastructure, adversarial manipulation of AI systems, and shortcomings in such tools that could result in unintended consequences, necessitating transparency and secure-by-design practices to evaluate and mitigate AI risks.


“Critical infrastructure owners and operators should account for their own sector-specific and context-specific use of AI when assessing AI risks and selecting appropriate mitigations,” the agency said.


“Critical infrastructure owners and operators should understand where these dependencies on AI vendors exist and work to share and delineate mitigation responsibilities accordingly.”


The development arrives weeks after the Five Eyes (FVEY) intelligence alliance comprising Australia, Canada, New Zealand, the U.K., and the U.S. released a cybersecurity information sheet noting the careful setup and configuration required for deploying AI systems.
