
Proposed Principles for Artificial Intelligence Published by the White House

Blog Article Published: 02/19/2020

By Francoise Gilbert, Data & Privacy Expert, DataMinding.com

This blog originally appeared on Francoise Gilbert's blog. For more updates on privacy, visit her website, DataMinding.com.

On January 7, 2020, the White House published a draft memorandum outlining a proposed Guidance for Regulation of Artificial Intelligence Applications (“Memorandum”) for agencies to follow when regulating, and taking non-regulatory actions affecting, artificial intelligence. The proposed document addresses the objectives identified in Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence (“Executive Order 13859”), published by the White House in February 2019.

The memorandum sets out policy considerations that should guide oversight of artificial intelligence (AI) applications developed and deployed outside the Federal government. It is intended to inform the development of regulatory and non-regulatory approaches to technologies and industrial sectors that are empowered or enabled by AI, and to identify ways to reduce barriers to the development and adoption of AI technologies.

Principles for the Stewardship of AI Applications

The memorandum sets forth ten proposed principles:

  • Public trust in AI
  • Public participation in all stages of the rulemaking process
  • Scientific integrity and information quality
  • Consistent application of risk assessment and management
  • Maximizing benefits and evaluating the risks and costs of not implementing AI
  • Flexibility to adapt to rapid changes
  • Fairness and non-discrimination in outcomes
  • Disclosure and transparency to ensure public trust
  • Safety and security
  • Interagency cooperation

Details on each of these principles are provided below.

  • Public Trust in AI.

Government regulatory and non-regulatory approaches to AI should promote reliable, robust and trustworthy AI applications that contribute to public trust in AI.

  • Public Participation.

Agencies should provide opportunities for the public to provide information and participate in all stages of the rulemaking process. To the extent practicable, agencies should inform the public and promote awareness and widespread availability of standards, as well as the creation of other informative documents.

  • Scientific Integrity and Information Quality.

Agencies should hold information that is likely to have a substantial influence on important public policy or private sector decisions governing the use of AI to a high standard of quality, transparency, and compliance. They should develop regulatory approaches to AI in a manner that informs policy decisions and fosters public trust in AI. Suggested best practices include: (a) transparently articulating strengths, weaknesses, and intended optimizations or outcomes; (b) mitigating bias; and (c) ensuring appropriate use of the results of the AI application.

  • Risk Assessment and Management.

The fourth principle cautions against an unduly conservative approach to risk management. It recommends the use of a risk-based approach to determine which risks are acceptable, and which risks present the possibility of unacceptable harm, or harm whose expected costs are greater than expected benefits. It also recommends that agencies be transparent about their evaluation of risks.

  • Benefits and Costs.

The fifth principle provides that agencies should consider the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of an AI application. Agencies should also consider critical dependencies when evaluating AI costs and benefits because data quality, changes in human processes, and other technological factors associated with AI implementation may alter the nature and magnitude of risks.

  • Flexibility.

When developing regulatory and non-regulatory approaches, agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications. Agencies should also keep in mind international uses of AI.

  • Fairness and Non-Discrimination.

Agencies should consider whether AI applications produce discriminatory outcomes as compared to existing processes, recognizing that AI has the potential to reduce present-day discrimination caused by human subjectivity.

  • Disclosure and Transparency.

The eighth principle notes that transparency and disclosure may increase public trust and confidence. Such disclosures may include identifying when AI is in use, for instance when appropriate to address questions about how an application affects human end users. Further, agencies should carefully consider the sufficiency of existing or evolving legal, policy, and regulatory environments before contemplating additional measures for disclosure and transparency.

  • Safety and Security.

Agencies are encouraged to promote the development of AI systems that are safe, secure, and operate as intended, and to encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Particular attention should be paid to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems. Further, agencies should give additional consideration to methods for guaranteeing systemic resilience and for preventing bad actors from exploiting AI system weaknesses, including cybersecurity risks posed by AI operation and adversarial use of AI against a regulated entity’s AI technology.

  • Interagency Cooperation.

Agencies should coordinate with each other to ensure consistency and predictability of AI-related policies that advance innovation and growth in AI, while appropriately protecting privacy and civil liberties and allowing for sector- and application-specific approaches when appropriate.

Non-Regulatory Approaches to AI

The memorandum recommends that an agency take no action, or consider non-regulatory approaches, when it determines, after evaluating a particular AI application, that existing regulations are sufficient or that the benefits of a new regulation do not justify its costs. Examples of such non-regulatory approaches include: (a) sector-specific policy guidance or frameworks; (b) pilot programs and experiments; and (c) the development of voluntary consensus standards.

Reducing Barriers to the Development and Use of AI

The memorandum points out that Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence instructs OMB to identify means to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security. The memorandum provides examples of actions that agencies can take, outside the rulemaking process, to create an environment that facilitates the use and acceptance of AI. One such example is agency participation in the development and use of voluntary consensus standards and conformity assessment activities.

Next Steps

The memorandum points out that Executive Order 13859 requires that implementing agencies review their authorities relevant to AI applications and submit plans to OMB on achieving the goals outlined in the memorandum within 180 days of the issuance of the final version of the memorandum. In this respect, such an agency plan will have to:

  • Identify any statutory authorities specifically governing agency regulation of AI applications;
  • Identify collections of AI-related information from regulated entities;
  • Describe any statutory restrictions on the collection or sharing of information, such as confidential business information, personally identifiable information, protected health information, law enforcement information, and classified or other national security information;
  • Report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications; and
  • List and describe any planned or considered regulatory actions on AI.

Conclusion

This draft guidance defines a concrete structure for outlining regulatory and non-regulatory approaches to AI. Businesses should evaluate the extent to which their own AI strategies address the ten principles.

In addition, since the development of AI strategies is likely to have global consequences, they should also take into account similar initiatives that have been developed elsewhere around the world, such as by the OECD (with the “OECD Recommendation on Artificial Intelligence”), the European Commission (through its “Ethics Guidelines for Trustworthy Artificial Intelligence”) or at the country level, for example in France (with the “Algorithm and Artificial Intelligence: CNIL Report on Ethics Issues”).


About the Author

Françoise Gilbert has extensive, in-depth experience with data privacy and security issues and Internet, eBusiness, and information technology law. Her clients include numerous Fortune 500 and other global corporations, as well as selected emerging technology start-ups. She advises companies on how to strategically manage their privacy, security, electronic workplace, and e-business risks; develop and implement information privacy and security strategies and compliance programs; and integrate privacy and security into mergers & acquisitions, outsourcing, cloud computing, marketing, and other relations.

Françoise regularly addresses a wide range of privacy and security issues, such as those faced by regulated entities, Internet businesses, and mobile applications, as well as those related to cross-border personal data transfers, security breach disclosure laws, implementation of GLBA or HIPAA security safeguards, foreign data protection laws (Western Europe, North America, or Asia Pacific), and cross-border data flow issues. You can follow her blog or learn more on her website https://www.dataminding.com/.

