The Future of AI Regulation

If artificial intelligence (AI) is to serve humans in ways that improve our lives, and to avert the existential threat many people believe it may pose, AI needs to operate under a regulatory framework with a defined set of guardrails. Indeed, business leaders are looking to Washington to take the lead in creating such a framework.

Challenges of AI Regulation

There are many challenges to regulating AI. The most basic is that the technology is moving much faster than legislators or regulators can, which could leave us in a constant state of outdated and irrelevant laws and regulations.

An additional challenge is whether the expertise exists within our legislative bodies or regulatory agencies to craft effective legislation and enforce it properly.

Finally, there is the challenge of what to regulate. AI is so multifaceted that one-size-fits-all regulation won’t work. For example, the AI embedded in video games very likely shouldn’t be regulated to the same degree as the AI used in a hospital to treat patients.

The Current Landscape of AI Regulation

AI regulation is not solely a U.S. concern. The European Union (EU) and China are also considering AI regulations whose effects may be global in nature. Here’s a quick overview:

The U.S.

There is no comprehensive federal policy to regulate AI, though Congress has begun taking steps to address that gap. The Algorithmic Accountability Act would require companies to assess their AI systems for bias, while the Advancing American AI Act would set federal standards for the use of AI. Some U.S. states are also developing their own laws to govern AI.

The EU

The EU’s AI Act is in the final stages of likely passage. It will be an extensive, top-down set of prescriptive rules, prohibiting AI uses that pose an unacceptable risk and defining four tiers of risk that will be subject to different levels of regulation.

China

China recently declared that AI algorithms must be reviewed in advance by the state and must adhere to socialist values.

Components of a Regulatory Framework

Self-regulation is not an option, given our disastrous experience with self-regulated online social media platforms. As lawmakers develop future AI regulation, expect it to revolve around a number of key elements:

1. Transparency, fairness, explainability, security and trust as overarching goals
2. A risk-based approach, focused on regulating the higher-risk uses of AI
3. Mitigating risk and eliminating bad actors
4. Pre-approval of innovation
5. Data regulation, privacy and security
6. A mix of “carrots” and “sticks” to effect desired behavior
7. Whether to apply regulation to AI software and/or its infrastructure (e.g., licensing AI data centers)

Attention to AI has grown rapidly as its impact at the economic, political, moral and personal levels continues to expand. No sector or individual seems able to escape the technological wave of AI, which makes education, safety and a diligent approach all the more necessary.

Please reference disclosures at: https://blog.americanportfolios.com/disclosures/
