TECHNOLOGY

A new chapter in cybersecurity & AI regulation

Consistent with its broader approach to policymaking, the Trump administration favors deregulation in the realm of cybersecurity and AI. Businesses must nevertheless contend with a variety of cyber risks and comply with a range of state-level and international regulations.

KEY CYBER RISKS INCLUDE:

01. Privacy, including wrongful collection and failure to obtain consent for data collection practices.

02. Business email compromise schemes and denial-of-service attacks.

03. Copyright and intellectual property infringement, which is emerging as a more significant risk as AI tools continue to develop.

04. Ransomware and other threats, which are similarly becoming more complicated as threat actors increasingly leverage AI to advance attack methodologies.

Cyber and technology exposures can morph into management liability claims depending on how companies handle and/or disclose incidents. Under a rule finalized by the SEC in 2023, public companies listed on U.S. securities exchanges are required to disclose — within four business days — any material cybersecurity incidents, such as data breaches and ransomware attacks. Failure to disclose such events — along with stock drops and other losses that may result from these events — could lead to costly securities litigation.

AI, meanwhile, is becoming a growing area of focus in securities litigation. From 2021 through 2023, shareholders filed an annual average of seven lawsuits with allegations related to AI — for example, alleging that companies and senior leaders misrepresented their AI capabilities or failed to disclose AI-related risks — according to Cornerstone Research. Shareholders filed 15 such suits in 2024 alone, and 16 in 2025.

Outside of the U.S., the European Union and other jurisdictions are beginning to issue strict regulations on AI in the context of data privacy, presenting new compliance obligations for both public and private companies. Conversely, the U.S. federal government has taken a lighter-touch approach to AI regulation and is encouraging the development of large data centers to accommodate AI demand and the export of U.S. AI technology.

Many states have enacted their own laws regarding the use of AI. A few states have passed comprehensive cross-sector AI regulations, while many others have passed more targeted legislation focused on avoiding bias or promoting transparency and accountability.

In December 2025, however, President Trump issued an executive order aimed at limiting the ability of states to enforce these laws. Beyond AI, a patchwork of state privacy and cybersecurity laws — along with the EU’s General Data Protection Regulation — remains in place.

It is imperative that senior executives and compliance professionals at both public and private companies stay up to date with AI developments and data privacy regulations. Organizations should put protocols in place to prevent cybersecurity incidents and respond to AI-related risks. Boards and senior management teams should monitor AI issues with the same rigor they apply to cybersecurity exposures and respond accordingly.

© 2026 Lockton Companies. All rights reserved.