New York Lawmakers Tackle AI Accountability with Legislative Package Aimed at Big Tech Regulation

Lawmakers in New York are preparing for the end of the legislative session in the first half of June, with close scrutiny of the potential implications and risks posed by artificial intelligence (AI). A legislative package of four bills has been earmarked for discussion, with much of the groundwork aimed at keeping Big Tech in check. Lawmakers and legal advocates met at the Capitol recently to debate the necessity and significance of the proposed bills.

Given the accelerated pace of technological advancement, there is growing concern over invasive surveillance capabilities. Biometric surveillance technology, for instance, could allow companies to monitor how often employees use the bathroom, or even eavesdrop on break room conversations for signs of unionization. This has prompted Assemblymember Steven Otis, D-Rye, to stress that legislators need to catch up to the technology and prevent harmful uses while still encouraging beneficial applications.

The Empire State AI Accountability campaign was launched in response, led by Nina Loshkajian, a staff attorney for the Surveillance Technology Oversight Project, or STOP. The campaign plans to engage labor, union, and civil rights groups to help bolster AI legislation.

The bills backed by the campaign largely mirror guidelines put out by Gov. Kathy Hochul earlier this year. Hochul's recommendations share some similarities with federal rules set out in President Joe Biden's executive order last year. The administration is also backing the University at Buffalo with a decade-long, $275 million investment to establish an AI computing center.

The proposed legislative package includes:

  1. The Bossware and Oppressive Technologies Act, to regulate the use of tech surveillance and automated decision-making tools in the workplace.
  2. A bill creating the position of a chief artificial intelligence officer to oversee AI policy, along with an advisory committee to maintain an inventory of existing AI uses and anticipate regulatory needs.
  3. A ban on law enforcement agencies' use of biometric surveillance technology, with a task force to develop guidelines for potentially permitted uses.
  4. A bill requiring advance, public disclosure of the use of automated decision-making systems, along with strict sanctions for unauthorized use by state agencies.

The rampant deployment of AI across businesses, particularly within Fortune 500 companies, underscores the urgent need for comprehensive regulation, especially since many AI tools have yet to be tested for inherent biases. State Sen. Brad Hoylman-Sigal has expressed concern that these tools discriminate against marginalized communities, including people of color, women, and the LGBTQ+ community. Otis, meanwhile, likens AI to a "tidal wave" that demands serious protections against AI-driven bias and potentially faulty decision-making.