While ChatGPT garnered attention in the realm of artificial intelligence, the technology has silently infiltrated various aspects of daily life, including the screening of job resumes, rental applications for apartments, and even influencing medical decisions in some instances.
Despite the prevalence of AI systems exhibiting discriminatory tendencies, favoring specific races, genders, or income brackets, there is a notable absence of substantial government oversight in this domain.
In response to this regulatory gap at the federal level, lawmakers in seven states are mounting significant legislative efforts to address bias in artificial intelligence. These initiatives mark the opening of what is likely to be a decades-long debate over how to balance the benefits of this opaque technology against its well-documented perils.
Suresh Venkatasubramanian, a professor involved in crafting the White House’s Blueprint for an AI Bill of Rights, emphasized the pervasive impact of AI on individuals’ lives, underscoring that despite its ubiquity, many AI systems are far from flawless.
The success or failure of these regulatory efforts hinges on lawmakers navigating intricate challenges while engaging with an industry valued at hundreds of billions of dollars and evolving at breakneck speed.
Notably, out of nearly 200 AI-related bills presented in state legislatures last year, only a fraction were enacted into law, as reported by BSA The Software Alliance. This year, over 400 AI-related bills are under consideration, primarily focusing on regulating specific facets of AI technology. For instance, there is a significant emphasis on addressing deepfakes, with proposals targeting issues like pornographic deepfakes proliferating on social media platforms. Additionally, efforts are underway to constrain chatbots such as ChatGPT to prevent the dissemination of harmful instructions, like bomb-making guidelines.
Distinct from these efforts are the legislative actions in seven states aimed at combating AI discrimination across various sectors, acknowledging it as one of the technology’s most intricate challenges. These bills are now moving through statehouses across the country, including in Connecticut.
Experts studying AI’s propensity for bias assert that states are lagging in establishing necessary safeguards. The widespread use of AI-driven “automated decision tools” in crucial determinations, such as hiring processes, remains largely concealed from public scrutiny.
Research indicates that a large share of employers, including the vast majority of Fortune 500 companies, use algorithms in hiring decisions. The general public, however, is largely unaware of how widespread these tools are, let alone the biases that may be embedded in them.
The inherent bias in AI systems stems from the historical data they are trained on, often reflecting past discriminatory practices. For instance, an AI-driven hiring algorithm, developed almost a decade ago, favored male applicants due to the gender imbalance in the historical data it learned from, inadvertently disadvantaging female candidates.
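The mechanism described above can be illustrated with a toy sketch. The data and code below are entirely hypothetical (they are not the actual hiring system referenced in this article); they simply show how a naive screening model trained on historically imbalanced outcomes reproduces that imbalance, penalizing equally qualified candidates from the under-hired group.

```python
# Hypothetical illustration: a naive screening model trained on skewed
# historical hiring outcomes learns and reproduces the skew.
from collections import Counter

# Synthetic "historical" decisions: all candidates are equally qualified,
# but past hiring favored group A over group B.
history = (
    [("A", "hired")] * 80 + [("A", "rejected")] * 20 +
    [("B", "hired")] * 30 + [("B", "rejected")] * 70
)

def train(rows):
    """Learn P(hired | group) from past decisions."""
    hired, total = Counter(), Counter()
    for group, outcome in rows:
        total[group] += 1
        if outcome == "hired":
            hired[group] += 1
    return {g: hired[g] / total[g] for g in total}

def screen(model, group, threshold=0.5):
    """Advance a candidate only if the learned hire rate clears the bar."""
    return model[group] >= threshold

model = train(history)
print(model)                # {'A': 0.8, 'B': 0.3}
print(screen(model, "A"))   # True
print(screen(model, "B"))   # False: equally qualified, but filtered out
```

Because the model's only signal is the historical outcome, the past imbalance becomes a screening rule; this is the dynamic the impact assessments discussed below are meant to surface.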
The lack of transparency and accountability in AI-driven decision-making processes is a focal point of the current legislative efforts, following the lead of California’s previous unsuccessful attempt to regulate AI bias in the private sector.
Under these proposed bills, companies utilizing automated decision tools would be mandated to conduct “impact assessments,” detailing the AI’s role in decisions, data collection methods, discrimination risk analysis, and safeguards in place. Depending on the specific bill, these assessments would be submitted to state authorities or made available upon request.
Moreover, some bills would require companies to notify customers about the use of AI in decision-making processes, allowing them to opt out under certain conditions.
While the legislative landscape is evolving, challenges persist in enacting robust regulations. The industry advocates for measures like impact assessments to enhance transparency and consumer trust in AI technologies. However, the path to legislation is fraught with obstacles, as evidenced by the stalled bills in Washington and California.
Despite the hurdles, states like California, Colorado, Rhode Island, Illinois, Connecticut, Virginia, and Vermont are forging ahead with new or revised proposals to address AI bias. These legislative endeavors mark a crucial step towards grappling with the complexities of AI technology and its enduring presence in society.