Cloud 101CircleEventsBlog
Get 50% off the Cloud Infrastructure Security training bundle with code 'unlock50advantage'

5 Key Data Privacy and Compliance Trends in 2024

Published 09/13/2024

5 Key Data Privacy and Compliance Trends in 2024

Originally published by Scrut Automation.


SMBs Juggle Compliance, Competition, and Chaos

This year has already seen some monumental changes in the works pertaining to data privacy and compliance. SMBs constantly need to make tradeoff and prioritization decisions when it comes to not only these things but also competitive, technological, and other business challenges.

With this stark reality in mind, we wanted to share the top 5 issues that have come across our radar in 2024. Below, we’ll dive into them and provide some actionable recommendations for companies seeking to deliver value while staying compliant.


1. Federal Trade Commission (FTC) enforcement action regarding data anonymization

The FTC has been quite active of late. In February, the regulatory agency ordered the company Avast to pay $16.5 million in redress to its customers. At the same time, it forbade the company from selling browsing data for advertising purposes and ordered the destruction of AI models trained on improperly collected data.

The reason for this punishment? According to the FTC, Avast:

  • Collected information about consumers’ internet activity through browser extensions and antivirus software;
  • Retained it indefinitely and
  • Sold it without notice or consent to more than 100 third parties.

A vital piece of the complaint was that Avast claimed to use a special algorithm to remove identifying information before sale. However, according to the FTC, the company provided those buying its data with a single unique identifier for each web browser it monitored. Combined with location information, and timestamps, and when combined with the buyers’ own data sets, the FTC alleged that re-identifying the original users was possible.

So, what can SMBs do to avoid a similar fate?

  • Avoid collecting data that doesn’t have a business purpose. If you never have it on your servers, it cannot become a liability later.
  • Understand the difference between anonymization and pseudonymisation. The first approach breaks the link between data and the associated person permanently and irrevocably. In the latter, data teams substitute unique identifiers for personal information but can undo the transformation later.
  • Be clear about exactly which technique you are using. Avast allegedly claimed consumer data would only be transferred in anonymous form when, in fact, it was only done pseudonymously.


2. FTC warning about changing terms and conditions for AI training

The same month it fined Avast, the FTC separately warned AI-powered companies about how they were training on customer data, especially how they were communicating about these practices. Specifically, the agency cautioned against “surreptitiously” changing terms and conditions to allow more permissive information handling.

How to handle data compliance pertaining to AI training?

To show it wasn’t bluffing, the FTC cited enforcement actions against a genetics company and e-learning provider for these infractions.

How to handle data compliance pertaining to AI training?

  • Be clear about what sorts of data you are collecting, retaining, and training on.
  • Notify customers and request consent if you materially change any policy.
  • Proactively message how you plan to deploy AI to avoid public blowback.

And it’s not just consumer-facing companies that can draw lessons here. Last summer, Zoom faced a communications disaster after it did exactly what the FTC warned against. The video chat company asserted a broad right to train AI models on customer data by stealthily amending their terms and conditions. After significant backlash, however, the company retreated and made a far less bold assertion about what it was authorized to use in AI training processes.


3. Washington state “My Health, My Data” Act

At the state level, legislators have also been busy with data privacy. In Washington, the My Health My Data Act (MHMDA) came into force at the end of March. While the law regulates “consumer health data,” this encompasses far more than what is covered by the federal Health Insurance Portability and Accountability Act (HIPAA).

The act defines consumer health data as any “personal information that is linked or reasonably linkable to a consumer and that identifies a consumer’s past, present, or future physical or mental health status.”

The bill provides a non-comprehensive list of things that might fit this definition, including:

  • Location information suggests an attempt to receive health services or supplies.
  • Social, psychological, or behavioral interventions.
  • Reproductive or sexual health information.
  • Use or purchase of prescribed medication.
  • Health conditions, treatment, or diseases.
  • Gender-affirming care information.
  • Bodily functions and vital signs.
  • Biometric and genetic data.

In addition to these broad categories of regulated information, the act applies to any legal entity that “conducts business” in Washington or serves consumers there. Considering that many of the world’s largest cloud service providers operate from the state, it is conceivable that the MHMDA could have a global reach.

Entities regulated by the MHDMA have a detailed set of requirements that include


4. European Union AI Act finalization

After nearly five years of discussion, debate, and negotiation, the final text of the EU AI Act was released in April. With the approval of the EU Council the following month, the Act will go into force over the next two years.

Some key things to keep in mind will be:

  • The AI Act will have a series of risk categorizations with required controls that organizations must implement.
  • It applies quite broadly, and any organization with a nexus to the EU should pay close attention to the law’s requirements.
  • AI-powered companies operating in EU will have to address a slew of regulatory demands enforced on them.


5. Colorado mile-high AI Act

Colorado’s state legislature is also on the move. With the governor’s signature on SB 205, the Colorado Artificial Intelligence Act (nicknamed the “Mile High Act Act” by one law firm) will go into force on February 1, 2026.

SB 205 is primarily an anti-discrimination law modeled on the EU AI Act. Additionally, it has a set of interesting provisions related to two emerging standards.

  • The NIST AI Risk Management Framework
  • ISO/IEC 42001

We’ve discussed both of these in depth previously, but it’s worth noting something unique about SB 205. An acceptable defense to alleged infractions of the law is if:

  • an AI developer or deployer discovers (and fixes) a violation resulting from:
    • User feedback if the developer/deployer “encourages” it
    • Adversarial testing or red-teaming
    • Internal review
  • the organization is “otherwise in compliance with”:
    • The NIST AI RMF
    • ISO 42001
    • Other recognized risk management frameworks

The Act’s requirements clarify some best practices for SMBs looking to deploy AI-powered products:

  • Encourage user feedback for deployed AI systems
  • Have an AI red-teaming program in place
  • Consider ISO 42001 certification


Conclusion

It’s been over seven months since the start of the year, and the pace of regulatory developments has been incredible. For SMBs attempting to stay afloat regarding data privacy and/or compliance obligations, the sheer volume can be a huge challenge.

The good news is that there are tools that can facilitate compliance with the EU’s General Data Protection Regulation (GDPR) including automating evidence collection, crafting compliant policies, and optimizing workflows.

Share this content on your favorite social network today!