Artificial Intelligence (A.I.) is being heralded by some as the future of technological advancement. But, as is often the case, innovation is followed by misuse and controversy, making the field ripe for government regulation.
What exactly is artificial intelligence? According to the Federal Trade Commission (FTC), A.I. “often refers to a variety of technological tools and techniques that use computations to perform tasks such as predictions, decisions, or recommendations.” A familiar example is the self-driving car, now seen on roadways across the country. Self-driving cars use A.I. to process their surroundings and make decisions, such as recognizing a pedestrian in a crosswalk and determining whether to stop or continue driving, all without the driver moving a muscle.
While A.I. has the potential to revolutionize many businesses and industries, Uncle Ben said it best when he warned Peter Parker, the future Spider-Man, that “with great power comes great responsibility.” The Biden Administration and the FTC have recently declared that they are prepared to act as watchdogs of the A.I. industry, keeping in line those who fail to responsibly handle the great power that comes with A.I. technology. Companies that fail to stay vigilant in their use of A.I. are likely to find themselves the subject of investigation by the FTC, the Consumer Financial Protection Bureau, and the White House, as well as vulnerable to litigation.
Of particular interest to the FTC are unfair or deceptive practices involving A.I. technology. The FTC has stated that Section 5 of the Federal Trade Commission Act (15 U.S.C. § 45), which prohibits unfair or deceptive acts or practices affecting commerce, gives it the authority to regulate A.I. technology and companies. In fact, the FTC has already launched an investigation into one giant in the A.I. community, OpenAI, the maker of the well-known ChatGPT software, over whether it has violated consumer protection laws. ChatGPT was released on November 30, 2022, and quickly became the talk of the town for its ability to take bits of data and text and turn out anything from an in-depth research paper to a medical diagnosis. Of course, as time would tell, ChatGPT’s output proved far from reliable.
Another way the government is prepared to investigate and regulate A.I. is through civil rights protection, with FTC Chair Lina Khan stating in April that “there is no A.I. exemption to the laws on the books” when it comes to existing civil rights legislation. The Biden Administration has publicly announced that it is prepared to take action against companies using A.I. technology found to be discriminatory. In February 2023, President Biden issued an executive order requiring agencies to “prevent and remedy discrimination, including by protecting the public from algorithmic discrimination.” Sectors such as lending, housing, and employment are likely to see the most government oversight with respect to civil rights violations and A.I. technology. The DOJ has already reached a settlement with Meta, the company formerly known as Facebook, Inc., requiring it to change its ad delivery practices, which utilize A.I. technology, to prevent discriminatory housing ads. And in January of this year, the FTC and the Consumer Financial Protection Bureau issued a joint request for public comment on how tenant screening algorithms may have an adverse and discriminatory impact on underserved communities.
There will also inevitably be an increase in A.I. litigation hitting the courts, with legal professionals predicting a class-action boom. Several intellectual property (IP) class actions are already ongoing, in which courts will have to determine whether A.I.-created work infringes existing copyrights. See, e.g., Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. Jan. 13, 2023). A.I. litigation over data privacy, copyright infringement, negligence, and fraud is likely to play out in courts around the country in the years ahead.
While there may be many unknowns in the future of A.I. technology, one thing is certain: those who attempt to traverse the landscape of A.I. technology, federal regulation, and the law without a knowledgeable A.I. attorney as a guide do so at their own peril.
Author: Erin Przybylinski is an attorney in Jennings Haug Keleher McLeod’s Phoenix office. Her practice focuses on representing clients in litigation matters, including civil litigation and claims involving civil rights and constitutional law violations. She can be contacted at ecp@jhkmlaw.com.