Texas has recently found itself at the center of a contentious debate over artificial intelligence regulation. House Bill 1709—also known as the Texas Responsible AI Governance Act (TRAIGA)—aims to impose the strictest state-level restrictions on AI use in the United States. Filed by Texas State Representative Giovanni Capriglione on December 23, 2024, the legislation adopts a risk-based framework for AI regulation that mirrors the European Union’s AI Act. That approach classifies AI systems by perceived risk level, imposing heavier obligations on systems deemed higher risk. Critics contend, however, that this approach is overly broad and could stifle technological innovation and economic growth within the state.
Critics argue that TRAIGA, like many risk-based frameworks, fails to focus on actual societal harms, instead regulating speculative future uses of AI. The bill, for instance, prohibits the use of AI for social scoring but does not address the same activity when conducted through non-AI methods. This inconsistency suggests that the legislation penalizes the use of AI technology itself rather than targeting the underlying harmful activity.
A significant concern is the sweeping range of industries and applications that would come under stringent regulation through TRAIGA. By broadly defining what constitutes a “high-risk” AI system, the bill leaves substantial room for interpretation, potentially classifying any AI system fundamental to a “consequential decision” as high-risk. Without independent definitions for critical terms like “substantial factor” and “consequential decision,” the legislation risks being mired in ambiguity. This lack of clarity could result in excessive bureaucratic discretion and significant compliance costs for businesses operating in Texas.
The implications for developers, deployers, and distributors of high-risk AI systems are significant. The bill mandates exhaustive risk assessments, documentation of training data, and withdrawal of non-compliant AI systems, which could place a considerable burden on companies. These requirements would be especially challenging for startups and smaller businesses, which may lack the resources to navigate such regulatory complexity.
Currently, only Colorado has enacted