AI Regulation Gains Focus: Expert Insights on Navigating Safe and Stable AI Use Amid Potential Disruptions
By Leah Sirama - 05/18/2025
In an evolving landscape of artificial intelligence (AI), tech experts are advocating for comprehensive yet balanced regulations to ensure the safe and stable implementation of AI. A combination of federal initiatives, pending legislation, and state laws is shaping the current policy debates in the United States.
Federal Developments
Recent legislative proposals include a ten-year moratorium on state-level AI regulation, intended to prevent a fragmented patchwork of state laws and to foster a single national standard, an approach proponents say would bolster American competitiveness and innovation. Additionally, $500 million has been earmarked to modernize federal IT systems with AI and automation technologies, with efficiency and cybersecurity gains in view.
Executive orders and non-binding frameworks, such as the "Blueprint for an AI Bill of Rights" and the Executive Order on Safe, Secure, and Trustworthy AI, were issued to promote responsible AI development and safety testing. However, a subsequent Executive Order under President Trump rescinded many of these policies, leaving safety, transparency, and audit practices largely voluntary.
Recent Federal Laws
The TAKE IT DOWN Act, passed by Congress in April 2025, criminalizes the nonconsensual dissemination of intimate imagery, including AI-generated deepfakes, and requires platforms to remove such content. The law addresses deepfake concerns, though critics have raised First Amendment objections over potential overreach.
State-Level Activity
Nearly 700 AI-related bills were considered in state legislatures over the past year, with 113 enacted into law. Colorado and Utah have passed comprehensive AI legislation, and California's AB 3030 mandates that AI-generated healthcare communications carry a disclaimer and instructions for reaching a human provider. The California Privacy Protection Agency (CPPA) is also revising its regulations to address AI's implications for data protection.
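To make the AB 3030 requirement concrete, a provider's messaging pipeline could attach the required notice with a thin wrapper. The sketch below is illustrative only: the function name, disclaimer wording, and contact instructions are assumptions, not statutory language.

```python
# Hypothetical sketch: prepending an AI-disclosure notice to generated
# patient messages, in the spirit of California AB 3030. The wording and
# names below are illustrative, not statutory text.

AI_DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "To speak with a human member of your care team, call the clinic's "
    "main line or reply 'HUMAN' to this message."
)

def wrap_ai_communication(generated_text: str) -> str:
    """Attach the AI disclaimer and human-contact instructions to an
    AI-generated clinical communication before it is sent."""
    return f"{AI_DISCLAIMER}\n\n{generated_text}"

if __name__ == "__main__":
    draft = "Your lab results are within normal ranges."
    print(wrap_ai_communication(draft))
```

In a real system the disclaimer text would be drafted with legal review; the point of the sketch is simply that the disclosure travels with every AI-generated message rather than being bolted on later.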
Expert Perspectives and Industry Considerations
Balancing innovation and safety is a central concern: experts fear excessive regulation could stifle progress, while insufficient regulation could enable misuse or destabilizing failures. Targeted regulation in high-risk sectors such as healthcare and financial services is suggested as a way to keep rules both impactful and minimally burdensome. Enterprise clients, insurers, and institutional investors may also expect companies to follow best practices such as model cards, bias audits, and incident reporting.
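As an illustration of what these practices can look like in engineering terms, the sketch below records model card fields as structured data. The schema and field names are assumptions loosely modeled on common model card templates; no current U.S. regulation prescribes this format.

```python
# Hypothetical sketch: a minimal model card as structured data, loosely
# modeled on common model-card templates. Field names are illustrative.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    known_limitations: list[str] = field(default_factory=list)
    bias_audit_date: str = ""   # date of the last fairness/bias review
    incident_contact: str = ""  # where to report failures or harms

card = ModelCard(
    model_name="claims-triage-model",
    version="1.2.0",
    intended_use="Rank incoming insurance claims for human review.",
    out_of_scope_uses=["Automated claim denial without human review"],
    training_data_summary="De-identified claims data, 2019-2023.",
    known_limitations=["Lower accuracy on rare claim categories"],
    bias_audit_date="2025-03-01",
    incident_contact="ml-incidents@example.com",
)

print(json.dumps(asdict(card), indent=2))
```

Keeping such metadata in version control alongside the model makes bias-audit dates and incident contacts auditable by the clients and insurers mentioned above.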
Against this backdrop, a coherent national strategy, or targeted sector-specific rules where a national framework is lacking, is championed as the most promising path to balancing innovation with risk management in the current AI policy environment. As AI continues to spread through business sectors, striking that balance will be essential to keeping the United States market both stable and competitive.