Understanding Oregon's New AI Guidelines and What Businesses Need to Know
Oregon is taking a proactive approach to regulating artificial intelligence (“AI”) with the recent release of guidelines from Attorney General Ellen Rosenblum. These guidelines are designed to help businesses navigate the growing legal landscape of AI technology. While the state's new regulations primarily target political campaigns, their reach is likely to impact a wide range of industries that are leveraging AI, particularly in areas like consumer privacy, marketing, and data protection.
The guidance issued by the Oregon Attorney General highlights the serious risks AI poses, particularly in terms of privacy violations, discrimination, and accountability. For businesses operating in Oregon, these risks must be managed carefully to avoid legal exposure. Here’s a breakdown of how Oregon's existing laws intersect with AI use and what steps companies can take to stay compliant.
The Oregon Unlawful Trade Practices Act (“UTPA”) is designed to protect consumers from deceptive business practices, including misleading information and unfair transactions. The Attorney General’s guidance makes clear that businesses using AI are subject to this law, particularly in cases involving false advertising or deceptive claims. For example, if AI-generated content is used to misrepresent product characteristics or to create false endorsements, such as deepfakes featuring celebrities, companies could face legal consequences. Additionally, using AI to create artificial urgency (like false flash sales), to manipulate prices during emergencies, to take advantage of consumer ignorance, or to push individuals into transactions without clear benefits could also violate the UTPA.
The Oregon Consumer Privacy Act (“OCPA”) governs how businesses collect, store, and use consumer data. Companies that use AI to analyze consumer data or train machine learning models must be transparent about their data practices. This means providing clear privacy notices explaining how personal data will be used, including for AI training purposes.
There are specific requirements under the OCPA that businesses must follow, including obtaining consumer consent to use their data, allowing individuals to access and correct their data, and giving them the right to delete it or opt out of certain uses, like profiling. For businesses handling sensitive data, such as health or financial information, explicit consent is required before that data can be used in AI models. Companies must also obtain fresh consent for any secondary use of previously collected data.
The Oregon Consumer Information Protection Act (“OCIPA”) adds another layer of responsibility for businesses deploying AI. In the event of a data breach, whether it involves AI systems or not, companies are required to notify affected consumers and the state attorney general. This underscores the importance of strong data security practices, especially for AI-driven systems that rely heavily on personal information.
AI systems are built by humans, and unfortunately, they can inherit biases that lead to discriminatory outcomes. For example, an AI system used for credit or loan approvals could inadvertently disadvantage certain racial or gender groups if its training data contains biases. The Oregon Equality Act (“OEA”) prohibits discrimination based on identity characteristics such as race, religion, gender, and more, which means businesses must be vigilant when developing and deploying AI systems. The Attorney General's guidance encourages companies to identify and address potential biases early in the development process to prevent discriminatory results.
Compliance Recommendations for Businesses
Oregon's guidance on AI use serves as a reminder for businesses to stay ahead of emerging legal requirements. Companies deploying AI in any capacity should take proactive steps to ensure compliance with Oregon’s consumer protection and anti-discrimination laws. Key actions include:
- Reviewing and updating privacy policies to ensure they align with OCPA requirements, including obtaining explicit consent for the use of personal data in AI systems.
- Conducting regular data protection assessments to ensure AI systems do not violate consumer privacy or produce discriminatory outcomes.
- Being transparent with consumers about AI-driven decisions, especially in marketing, product recommendations, or financial services.
- Strengthening security measures to prevent breaches, and ensuring that businesses have clear processes for breach notification when personal data is involved.
Adhering to these guidelines helps businesses mitigate legal risk while capturing the benefits of AI technology. Staying informed and compliant will not only help companies avoid legal challenges but also build consumer confidence in their use of AI.
© 2025 Cliclaw.com
This article is for information purposes only. It is not intended to be and should not be relied on as legal advice for any particular matter.