Recently, the California Attorney General released a pair of advisories regarding the applicability of California laws to artificial intelligence (AI). In short, the California Attorney General is reminding us that, while California has recently passed legislation specifically addressing AI, many laws already on the books can be applied to AI use cases. Further, one of the two advisories focuses specifically on the use of AI in the healthcare sector, signaling that California will likely start its AI-specific enforcement efforts there – a move we have already seen in Texas.[1]
AI in Healthcare
Turning to the content of the advisories, the first advisory discusses AI in the healthcare industry, recognizing the unique sensitivity of healthcare applications such as medical diagnostics and treatment decisions, appointment scheduling and resource allocation, risk assessment and claims processing, and patient data management.
The advisory explicitly states that it is unlawful to use AI to deny insurance claims in a manner that overrides professional medical judgment. This aligns with existing California laws that prioritize physicians' determinations over automated systems. Healthcare entities must ensure that AI systems serve as decision-support tools rather than final arbiters, that physicians maintain meaningful oversight of AI-assisted diagnoses, and that insurance determinations preserve the doctor-patient relationship.
Further, California's anti-discrimination laws extend fully to AI applications and, as such, AI systems must not perpetuate bias against protected classes, create disparate impacts in resource allocation, use training data that reflects historical inequities, or result in denial of care based on protected characteristics. As an additional note, the advisory reminds us that using a third-party system is not an excuse for an AI system that fails to comply with these rules.
In addition to complying with medical privacy laws, organizations should implement AI systems using comprehensive testing protocols that include pre-deployment validation, ongoing monitoring for drift or emerging biases, regular audits of outcomes across demographic groups, and documentation of testing methodology and results.
AI Across All Sectors
The second advisory extends many of these principles across all sectors while addressing specific concerns relating to unfair competition, false advertising, anticompetitive activity, civil rights, and election misinformation. Moreover, many of the provided examples focus on using AI systems to aid fraudulent behavior, which is viewed as a clear violation of existing laws. The general advisory further highlights the risk that users of AI may unintentionally violate one or more of these laws and thus additional diligence must be undertaken to review outputs of AI systems and the effects of the outputs of AI systems. Additionally, the advisory emphasizes that all existing consumer protection frameworks apply fully to AI-enabled services and products, ensuring that technological innovation does not undermine established safeguards.
The general advisory also reinforces comprehensive data privacy compliance obligations for AI systems operating in California. All such systems must comply with the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA). This compliance includes honoring consumer requests for data access and deletion, providing clear notice of data collection practices, respecting consumers' opt-out rights for certain types of data uses, and implementing reasonable security measures to protect sensitive information from unauthorized access or breaches. These rights are generally more difficult to implement in AI systems, so additional care must be taken when building and deploying them.
Recent legislative developments in California have introduced a bevy of additional AI-specific obligations that organizations must navigate. Generally, these rules apply to the following activities: AI development; marketing; use of name, image, and likeness; political campaigns; development of explicit content; and the provision of healthcare.
Conclusion
Entities operating in California should view these advisories as an early warning: enforcement actions against non-compliant AI systems are likely to increase. While the two advisories are intended to address AI, it is important to keep in mind that the risks and enforcement avenues they discuss can apply to data usage more broadly. By implementing comprehensive compliance programs that address the specific concerns highlighted in these advisories, organizations can mitigate legal risk while contributing to the responsible development of AI technologies.
[1] https://www.chamberlainlaw.com/Data-Privacy-Tracker/texas-privacy-crackdown-what-businesses-need-to-know-about-ag-enforcement-trends