AB 1018: California’s Upcoming AI Regulation and What it Means for Companies
Published 09/05/2025
Introduction
As artificial intelligence (AI) becomes entrenched in every part of the modern business process, it increasingly has the power to shape companies and the people it touches. AI may even decide who gains access to economic opportunities and who does not.
Today, nearly 70% of employers use Generative AI (GenAI) in the hiring process before any human intervention, with 66% using it to write job descriptions, 61% to screen resumes, and 52% to find candidates. The American Medical Association found that two in three doctors are using health AI. Meanwhile, in sectors such as finance, education, and housing, similar AI technologies are being used to make decisions about eligibility and prioritization for critical resources. While GenAI makes these processes far more efficient for professionals, these technologies also frequently reproduce existing social biases, embedding discrimination deep within these systems.
Due to the critical nature of AI and its powerful effect on people, companies and governments are now working to regulate AI and its use. These regulations aim to address the potential harms of GenAI and ensure the responsible deployment of AI technologies. One prominent example, California’s AB 1018, is currently making its way through the California State Legislature. The bill aims to create comprehensive safeguards around the use of automated decision systems (ADS) in sectors ranging from employment and housing to healthcare, credit, and education. This blog focuses on the implications of AB 1018 for companies, outlining how it could reshape operations, responsibilities, and the use of AI tools across multiple departments.
What is AB 1018?
First introduced in February 2025, California’s Assembly Bill, AB 1018, also known as the Automated Decision Systems (ADS) Accountability Act, is legislation that would govern how companies develop and use an ADS to influence decisions about people. AI systems, machine learning models, and scoring algorithms are all considered ADS. The bill is still moving through the legislature and is preparing to be heard in the second chamber and its committees for review. Automated decision systems are being deployed in vastly different contexts, ranging across housing, healthcare, education, finance, and more. AB 1018 seeks to maintain transparency and fairness in these systems to ensure that they do not automate discrimination.
Under AB 1018, any company using or developing automated AI systems to make decisions would be required to comply with a series of requirements, including:
- Conducting regular assessments to evaluate the accuracy and bias of the AI system
- Maintaining proper documentation of the system and its design
- Providing notice to people when an ADS is being used to make decisions and offering human review as an appeal to the ADS’s decision
- Undergoing third-party audits if the system is used to make decisions that affect at least 6,000 Californians over a three-year period
- Being transparent about the system’s purpose and risks to the public and regulators alike
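The bill text does not prescribe a particular statistical test for the bias assessments it requires, but one widely used metric in U.S. employment-selection analysis, the "four-fifths rule" for adverse impact, illustrates what a basic check on ADS outcomes might look like. The sketch below is purely illustrative, with hypothetical data; it is not a method AB 1018 itself specifies.

```python
# Illustrative sketch of a common bias metric: the "four-fifths rule"
# (adverse impact ratio). Data and groups are hypothetical; AB 1018
# does not prescribe a specific statistical test.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group that the ADS selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate."""
    return group_rate / reference_rate

# Hypothetical resume-screening outcomes by demographic group
outcomes = {
    "group_a": {"selected": 48, "applicants": 100},
    "group_b": {"selected": 30, "applicants": 100},
}

rates = {g: selection_rate(o["selected"], o["applicants"])
         for g, o in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    # By convention, a ratio below 0.8 flags potential adverse impact
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

In this hypothetical run, group_b's selection rate (0.30) is only 62.5% of the highest group's rate (0.48), falling below the 0.8 threshold and flagging the system for further review. Real assessments under the bill would of course involve legal counsel and more rigorous statistical analysis.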
The bill applies to any company that operates within California, whether it is headquartered there or maintains a branch in the state. Crucially, AB 1018 isn’t limited to public agencies: it also applies to private companies, tech vendors, and any California organization that uses automated decision systems.
How does AB 1018 differ from previous laws?
While AB 1018 is a landmark piece of legislation in the modern world of artificial intelligence, it is not California’s first attempt to regulate AI. Prior to AB 1018, lawmakers introduced California’s AB 2930 in 2024, a bill aiming to address algorithmic discrimination through transparency and regular bias assessments; however, the bill failed to pass the Legislature’s second chamber. AB 2930’s failure can be attributed to a few flaws that particularly affected business leaders and privacy advocates. To start, AB 2930 was criticized for being overly vague, lacking a clear definition of what constituted an ADS and raising concerns that basic, low-risk tools would fall into the same category as high-risk AI systems. The bill also failed to distinguish between the developers and deployers of these AI systems, which could have punished companies for the performance of ADS tools they had no control over. Ultimately, this created a compliance burden and heavy costs, especially for smaller businesses that used AI solutions developed by external vendors. Additionally, a narrower worker-focused AI bill, California SB 7 (“Employment: Automated Decision Systems”), is also moving through the legislature. This bill complements AB 1018 and focuses on employment decisions beyond hiring.
Responding to these earlier setbacks, the new AB 1018 bill has gained broader support as it advances through the California Assembly in 2025. One of the most significant improvements is that the bill narrows its focus to “consequential decisions,” defined as decisions that affect individuals’ basic opportunities and necessities and change their access to things like employment, utilities, and legal rights. This focus means the bill applies only to situations where automated decisions could seriously affect someone’s life, and does not apply to low-risk AI tools with minimal impact on people.
Another prominent difference is AB 1018’s introduction of a dual-responsibility model, which explicitly separates the obligations of developers from those of deployers. This distinction is particularly important for businesses that rely on third-party vendors for their ADS tools, especially since developers are the ones required to conduct bias-risk assessments under AB 1018. Deployers, in turn, must ensure that they use these tools responsibly and must provide notice and human-review alternatives for decisions made by an ADS.
What teams are affected by AB 1018?
AB 1018 will have far-reaching implications across multiple sectors and departments in any company that develops or deploys automated decision systems. The legislation touches not only HR and hiring but also engineering, data science, product design, and more. Under AB 1018, each of these teams will need to rethink how it engages with AI tools and ensure that those tools meet the new law’s standards for fairness and accountability.
Among the most directly affected will be hiring and HR teams. Many companies already use AI-driven tools to screen resumes and evaluate candidates; under AB 1018, HR departments would be required to notify applicants and document when an ADS is being used, as well as offer them the option to appeal the decision to a human reviewer. Moreover, the third-party platforms often used for hiring must also be reviewed for compliance with the law, which ultimately puts more pressure on HR teams to understand the technology’s processes and outcomes.
Engineering and AI development teams will also carry the responsibility of ensuring that systems are safe and accurate. Beyond conducting formal impact assessments and monitoring for biased outcomes, these teams must work closely with legal and compliance staff to ensure that models can be audited and their outcomes explained when issues are found.
Legal departments will be responsible for interpreting the law in their organization’s context and defining a “consequential decision” within its protocols. Legal teams will ensure that contracts with AI developers cover auditing, notification when AI is in use, and adequate data sharing, and will also need to be prepared for external audits and investigations if ADS decisions are challenged.
Product managers and data science teams are also vital to maintaining ADS transparency, especially in companies that build AI tools. Data science teams will be required to address potential disparate outcomes and monitor deployed models to observe ADS results. Alongside them, IT teams must make sure that logs and audit data are accessible and maintained.
All of these teams play a supporting, yet essential role in enabling other departments to meet AB 1018 compliance obligations, from user controls to data governance.
What sectors are affected by AB 1018?
In addition to impacting various departments within companies, California’s AB 1018 also reaches across multiple sectors that rely on automated systems to make vital decisions. Aside from employment, in the housing sector, tenant screening algorithms that evaluate rental eligibility are affected. Meanwhile, financial institutions that use AI for credit scoring and loan approvals will also need to comply with the bill. In healthcare, ADS can be used for treatment recommendations or insurance decisions. Educational institutions may use these algorithms for things like admissions and student placement. Across every sector, organizations will need to guarantee that their AI tools are fair, explainable, and subject to human oversight with frequent audits.
Collectively, AB 1018 represents a shift in how companies will approach AI. Rather than treating it as a neutral, technical tool, companies must now view it as a system carrying real-life legal, social, and ethical responsibilities. Proactive companies that begin aligning with these regulations now will not only reduce their legal risk but also build a strong foundation of trust with their users and stakeholders.
A Holistic Approach for AB 1018
AB 1018 presents both opportunities and challenges for companies operating within California. On the positive side, the bill encourages more transparent use of AI and ADS, which can help strengthen business and public confidence and avoid reputational damage. As public awareness of algorithmic discrimination grows, companies whose AI systems are aligned with AB 1018 will be better positioned to gain consumer trust and differentiate themselves from the competition. By complying early, companies can also prepare for future state and federal AI laws, giving them a head start on risk management. By requiring impact assessments, human appeals, and third-party audits, AB 1018 incentivizes better monitoring of AI tools, eventually leading to more accurate and reliable systems and increasing transparency with the Californian public.
On the other hand, the bill introduces new responsibilities that may be hard to meet, while adding legal and monetary risks as well. Smaller companies and startups may lack the resources to conduct thorough impact assessments or afford regular audits, especially if they rely on external third-party AI developers. Small companies will now need to document and monitor the use of each AI system, which could add operational and compliance costs and slow innovation in fast-paced environments. Companies that deploy these automated tools may also face a higher risk of litigation over potentially discriminatory outcomes, and must build a human appeal process, which could be time- and cost-intensive.
For companies that act as AI vendors and developers, there may be legal concerns about liability and the need to provide documentation and bias testing for the ADS systems. Moreover, business relationships may need to evolve to accommodate the data-sharing and cooperation that the law demands between developers and deployers. Contracts will also likely require new clauses to cover the system’s liability and compliance obligations.
Despite these costs and disadvantages, it is important to note that California’s AB 1018 offers long-term benefits by pushing companies to build more ethical, transparent, and robust AI systems. By adopting these requirements early, Californian companies can reduce future legal and monetary risk.
Conclusion
As artificial intelligence becomes more embedded in everyday decision-making, the need for responsible oversight grows. Though it may be mid-October before we know the fate of California’s AB 1018, the bill reflects a new era for automated systems: we are now holding them to the same standards of fairness and transparency as humans. For companies, this legislation signals a vital shift in how AI can be used, emphasizing not only a digital age for technology but also ethics, civil rights, and human safeguards. While AB 1018 introduces new compliance responsibilities, it also offers a powerful framework that can help businesses build more trustworthy technologies. Companies that begin aligning now will not only reduce legal risk but lead the way in a rapidly evolving landscape of AI governance.
Acknowledgements
Thank you to Neil Cohen, CMO at Portal26; Krystal Jackson, AI Standards Development Researcher at CLTC; Gauri Manglik, VP of Privacy and AI Governance at GoFundMe; and Frances Mosley, Partner at DLA Piper for inspiring me to write this and their feedback.
About the Author
Ishani is a high school junior at Leland High School in California, and a product marketing intern at Portal26 supporting AI governance initiatives. She is also the Co-Founder of Justice For All, a youth-led nonprofit that empowers students nationwide to engage in the justice system by observing courtroom proceedings and advocating for legal transparency. She has extensive research experience, having served as a Research Fellow at Ballotpedia furthering nonpartisan election analysis and having interned with the California State Senate, where she conducted policy research and community engagement on various bills. Blending her interests at the intersection of business, technology, and civic engagement, Ishani is passionate about building strategic solutions that address real-world challenges. She aspires to shape the future of responsible innovation where AI and civic systems can work together to create more transparent institutions, and is excited about AI’s impact on innovation in business.