Are We Ready for AI Legislation in Georgia?

By Bryan Cox
Research Associate, Constellations Center for Equity in Computing
Senior Research Fellow, Kapor Center

Feb. 26, 2024


AI legislation has made its way to Georgia. As legislators craft policies for how citizens, consumers, employers, students, and teachers use and are impacted by AI tools, do the policies being created incorporate the ethical issues related to AI adoption? Are they mindful of the citizens most likely to be affected negatively by AI tools and agents? Do they take into account the underlying and historical issues in the existing data sets used to feed AI agents? Do we, as citizens, understand enough about AI to make informed decisions, or are we guessing based on science fiction? The first two proposed AI policies are making their way through the Georgia House of Representatives, with many more sure to come. Are we ready? Are our legislators ready?

Representative Mandisha Thomas of South Fulton has authored and proposed two bills for the 2024 Georgia legislative session related to decisions that AI agents have been tasked with in our local community. The first bill, HB 887, would prohibit AI agents from making certain decisions related to health care and public assistance. In its current version as of this writing, the bill states that “no decision shall be made concerning any coverage determination based solely on results derived from the use or application of artificial intelligence…”. AI results must be reviewed by a human with the authority to override the AI agent’s decision. The bill also states that “no decision shall be made concerning the award, denial, reduction, or termination of any public assistance based solely on results derived from the use or application of artificial intelligence…”. The bill would amend existing Georgia law on the topic and is being reviewed by the House Committee on Technology and Infrastructure Innovation.

HB 887 is designed to guard against unintended consequences, such as malpractice related to denial of service. This is a threat for anyone engaging with the health care system or a public assistance agency, but it would have a greater impact on populations already at risk when engaging those systems, including but not limited to people of color, people experiencing poverty, and people with mental health issues. This legislation represents a step in the right direction, a reaction to historical and potential discrimination. The human charged with incorporating the results from AI tools into these important decisions, however, remains vulnerable to relying heavily on those results and forgoing their own agency, essentially leaving the AI agent with the final say yet again. It would be useful to provide AI literacy and ethics training for anyone who engages with AI agents. Such training would explain the dangers of uncritical reliance on AI agents and would reinforce the need for human agency in the process.

These proposed bills are a reaction to real-world, personal experiences of impacted citizens. They are intended to prevent, or provide recourse for, future discrimination or unethical use of AI agents.

The second bill proposed by Representative Thomas, HB 890, would update language throughout Georgia law that refers to discrimination of any sort, amending it to include AI agents as potential discriminatory agents. The Act is short in verbiage but extremely broad in scope. This bill is also under review by the House Committee on Technology and Infrastructure Innovation as of this writing.

HB 890 places the potential of AI agents to discriminate alongside that of humans and provides for legal recourse in such situations. This position recognizes both the agency of AI and the shortcomings of that agency, a tremendous first conceptual step in the legal perception of AI agents. What the bill does not address is liability for the discrimination or malpractice, should it occur. Does liability fall to the manufacturer of the agent or to the entity that used the AI agent in a manner that resulted in the unethical decisions? This is similar to the question of liability for damage or death attributed to a self-driving car. These are questions we must work through as we incorporate an increasing number of AI agents across all sectors of society.

These bills draw on an important source of information, the experiences of impacted citizens, but we also need to include historical context, technical understanding, and societal perspective as we make decisions about the AI tools that our society employs.

The spirit of the proposed legislation in Georgia in many ways mirrors that of Executive Order 14110, released by the Biden-Harris administration in October 2023. The executive order calls for safety and security standards in the creation and deployment of AI agents. It also calls for investigative research to ensure AI agents do not violate the privacy, dignity, and civil rights of human citizens. There is particular emphasis on the ethical use of AI agents in government and a call for research on, and consideration of, the impact of AI on education and the workforce.

There are many threats to consider related to AI agents and their implementations: privacy issues, workforce impact, discrimination, data security, and data accuracy, to name a few. Many of these issues are addressed in the current presidential administration’s executive order on AI, but some are not. In particular, little is said about the importance of developing an AI-literate populace, an omission that leaves decisions about AI implementation for the many in the hands of the few. This is a recurring theme that historically has opened the door to the oppression of various populations; an ignorant citizenry is far easier to oppress or manipulate than an informed one. There is also no direct consideration of the inequities or discriminatory practices embedded in the systems that AI agents will rely on for their initial data sets. Without addressing these, AI agents will inherit similarly harmful practices and perpetuate historically oppressive scenarios.

As this executive order is actualized in research, AI agent development, and policy, it is up to the citizenry to evaluate whether our best interests are being upheld. If we wait until the agents are deployed, and many already are, correcting their impacts will be a difficult task. The proposed policies in Georgia are our first opportunity to reflect on how we want AI use to be governed in our community. We are all responsible for increasing our awareness and understanding of AI agents, and for having candid, informed conversations with our friends, family, co-workers, employers, employees, teachers, and students about the great opportunities and potential threats that come with an AI revolution.