What the EU Artificial Intelligence Act Means for Your AI Strategy

The European Union Artificial Intelligence Act (the Act) took effect on August 1, 2024, starting a 24-month countdown until most of its provisions become enforceable. The Act is one of the first binding, comprehensive AI laws, and its impact will be felt far beyond Europe.

Multinational businesses running any AI workloads in the EU will need to comply. The Act is also likely to inspire future AI regulation in other jurisdictions, just as the GDPR did for data privacy. Even if the global regulatory landscape fragments as countries race to regulate AI and lean toward more insular, protectionist approaches, the Act could still become the de facto global standard for managing AI risk.

Now, model providers and consumers alike are thinking about how to adjust their strategies to succeed with AI while staying compliant. Few organizations are fully prepared to comply yet, and closing that gap will take deliberate work. Let’s look at a few of the complexities introduced by the Act, and how you can address them.

Federated AI helps balance data needs against governance requirements

The Act sets new data governance requirements, stating that datasets used to train models should be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose.[1] To meet this requirement, businesses must ensure better control and visibility over AI data throughout its lifecycle and build an inventory of their AI datasets. This includes being able to track the full lineage of any new data they consume from partners or service providers.
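
To make that concrete, here’s a minimal sketch of what an entry in such a dataset inventory might capture, including a content hash and parent links for lineage. The schema, field names and file names are illustrative, not drawn from the Act or any particular tool:

```python
from dataclasses import dataclass, field
from datetime import date
import hashlib

@dataclass
class DatasetRecord:
    """One entry in an AI dataset inventory (illustrative schema)."""
    dataset_id: str
    source: str                    # partner, vendor, or internal system of origin
    license: str                   # usage terms attached to the data
    acquired: date
    content_sha256: str            # fingerprint for lineage and tamper checks
    parents: list[str] = field(default_factory=list)          # upstream dataset IDs
    transformations: list[str] = field(default_factory=list)  # e.g., "dedupe", "pii-scrub"

def fingerprint(path: str) -> str:
    """Hash a dataset file so derived records can prove exactly what they came from."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Registering a cleaned dataset derived from a partner feed (file names are hypothetical)
# keeps the full lineage chain queryable when an auditor asks where training data came from.
raw = DatasetRecord("partner-feed-v1", "partner-data-exchange", "CC-BY-4.0",
                    date(2024, 8, 1), fingerprint("partner_feed.parquet"))
clean = DatasetRecord("partner-feed-v1-clean", "internal-etl", raw.license,
                      date.today(), fingerprint("partner_feed_clean.parquet"),
                      parents=[raw.dataset_id],
                      transformations=["dedupe", "pii-scrub"])
```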

The challenge is that AI models are only as good as the data they’re trained on. General-purpose AI models must be trained on expansive datasets to ensure they can perform effectively across different settings. At the very time when businesses need to expand their AI datasets, regulations are making it more difficult to do so.

AI data and model marketplaces could be key to overcoming this expansive dataset challenge while remaining compliant. In a federated learning model, collaborators train a shared model through a neutral exchange location: each participant trains on its own data locally and shares only the resulting model updates. This helps marketplace participants benefit from one another’s data without sacrificing control over their own.
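
For illustration, here’s a minimal sketch of the federated averaging idea, assuming a simple shared linear model; it is not any specific marketplace’s protocol. Each participant computes an update on data that never leaves its site, and only the weights are combined:

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One participant refines the shared linear model on data that never leaves its site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_round(weights: np.ndarray, participants: list) -> np.ndarray:
    """The neutral exchange averages weight updates; raw datasets are never transmitted."""
    updates = [local_update(weights, X, y) for X, y in participants]
    return np.mean(updates, axis=0)

# Three collaborators with private datasets jointly fit one model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
parties = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    parties.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, parties)
print(w)  # approaches [2.0, -1.0] without any party revealing its raw data
```

Production systems typically layer additional protections on top of this basic scheme, such as secure aggregation or differential privacy, since model updates themselves can leak information.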

As part of a private AI strategy, federated learning can help businesses track data lineage and protect against sensitive data leaks like the one Samsung experienced in 2023.[2] Beyond the Act, private AI can also help businesses maintain the data inventory and security controls needed to comply with data privacy and sovereignty regulations like the GDPR.

AI models depend on robust digital infrastructure

Another aspect of managing risk in AI use cases is ensuring that the underlying models are available when they’re needed. The Act recognizes this need, stating that “The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.”[3]

Geo-redundancy will play an important role in ensuring AI resiliency, and organizations can choose a mix of cloud and on-premises backup solutions to help them achieve it. Working with a digital infrastructure provider that offers a global footprint of AI-ready data centers and a dense ecosystem of cloud providers can be helpful in this regard.
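
As one concrete (and simplified) illustration of the “backup or fail-safe plans” the Act mentions, an application can fail over across geo-redundant inference endpoints. The endpoint URLs below are placeholders:

```python
import requests

# Hypothetical geo-redundant inference endpoints, ordered by preference (e.g., closest first).
ENDPOINTS = [
    "https://inference.eu-central.example.com/v1/generate",
    "https://inference.eu-west.example.com/v1/generate",
    "https://inference.eu-north.example.com/v1/generate",
]

def resilient_infer(payload: dict, timeout: float = 2.0) -> dict:
    """Try each regional endpoint in turn so a single-site outage doesn't take the AI system down."""
    last_error = None
    for url in ENDPOINTS:
        try:
            resp = requests.post(url, json=payload, timeout=timeout)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            last_error = err  # in production: log the failure, then fall through to the next region
    raise RuntimeError("All regional endpoints are unavailable") from last_error
```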

A vendor-neutral partner like Equinix can also give businesses the flexibility to switch AI platforms quickly, whether to avoid downtime or simply to keep pace with their evolving AI needs. As organizations mature in their AI strategies, they’ll likely refresh their tech stacks every few years to take advantage of the latest hardware and to gain performance and resiliency with each round of updates.

Finally, to ensure robust AI workloads, you must account for growing density requirements. The growth of generative AI is forcing businesses to change the way they think about data centers. To do AI right, you need your inference workloads hosted in the right locations—ones that offer the best mix of power density and low latency. You also need to look for data center providers that are implementing innovative technologies, such as liquid cooling, to support high-density workloads. And you’ll want to work with a provider that’s invested heavily in scaling renewable energy coverage, such as by signing power purchase agreements (PPAs).

Maintaining control and visibility over your AI data and models

To trust the outcomes you get from your AI models, you need to know that neither the models nor the data fed into them have been impacted by unauthorized access or tampering. The Act calls out the importance of cybersecurity when it comes to high-risk AI use cases, saying: “The technical solutions to address AI specific vulnerabilities shall include…measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning).”[4]
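
One common building block for such controls is integrity verification: pinning cryptographic hashes of training datasets and pre-trained components, and halting the pipeline if anything has changed. A minimal sketch, with an illustrative manifest format:

```python
import hashlib
import json
import sys

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large artifacts don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: str) -> None:
    """Compare current hashes of datasets and pre-trained weights against pinned values.

    A mismatch signals possible data poisoning, model poisoning, or silent corruption,
    so the pipeline halts before the artifact is used for training or serving.
    """
    with open(manifest_path) as f:
        # Illustrative format: {"train.parquet": "ab12...", "base_model.safetensors": "cd34..."}
        manifest = json.load(f)
    for path, pinned in manifest.items():
        actual = sha256_of(path)
        if actual != pinned:
            sys.exit(f"Integrity check failed for {path}: expected {pinned}, got {actual}")

verify_manifest("artifacts.manifest.json")  # hypothetical manifest committed alongside the pipeline
```

Hashing alone won’t catch poisoned data that was malicious before it was ever pinned, so it complements, rather than replaces, upstream data validation.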

Federated AI can help organizations get the data they need to fuel their models without introducing unacceptable security risks. This is because businesses can collaborate with their ecosystem partners while still keeping their raw data, proprietary algorithms and code private.

In addition, businesses can move their models to the data source, rather than moving their datasets to their models. By hosting proprietary models locally on their own private infrastructure, they can maintain complete control over those models and apply the appropriate security measures to protect against model poisoning. Since the models stay local, the data also stays local. Organizations never have to let their raw data leave their own security domain, which protects against data poisoning.
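
A simple way to enforce that boundary in code is an egress guard that refuses to send raw records anywhere outside your own security domain. The hostnames here are hypothetical:

```python
from urllib.parse import urlparse
import requests

# Hosts inside our own security domain (hypothetical); raw data must never leave these.
TRUSTED_HOSTS = {"models.internal.example.com", "localhost"}

def infer_locally(endpoint: str, record: dict) -> dict:
    """Send sensitive records only to a model hosted inside our security domain."""
    host = urlparse(endpoint).hostname
    if host not in TRUSTED_HOSTS:
        raise PermissionError(f"Refusing to send raw data to untrusted host: {host}")
    resp = requests.post(endpoint, json=record, timeout=5.0)
    resp.raise_for_status()
    return resp.json()
```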

Private AI can help businesses throughout their AI lifecycles

For businesses looking to capitalize on the possibilities of AI, the trend toward greater regulatory complexity may be daunting. Fortunately, working with Equinix to implement a private AI strategy can help customers address complexity at every stage of their AI maturity:

  1. Exploring AI: Many organizations see the public cloud as a logical entry point to AI, as it removes the need to deploy their own physical hardware. They can use private data and digital infrastructure services to capitalize on the benefits of public cloud while mitigating the drawbacks. For instance, NVIDIA LaunchPad on Equinix Metal® can help scale AI workloads quickly. Also, cloud-adjacent storage in strategic global metros can help customers maintain the control to move data and workloads between public and private infrastructure as needed. For example, some workloads can remain on private infrastructure to protect sensitive data, while others can move to public cloud to capture the performance benefits.
  2. Do-it-yourself private AI: As their AI strategies mature, organizations may want to rely less on the public cloud. They can use their existing CPUs as they gradually implement GPUs for more power-hungry AI workloads, and evolve their data architectures accordingly. As GPU utilization increases, organizations will reach a tipping point where private AI begins to offer significantly better price-performance. Going forward, reference architectures to support AI will become more commonplace. Equinix’s recent announcements with Dell Technologies and HPE can help customers looking at this option.
  3. AI ecosystems: Accessing AI ecosystem partners at Equinix can help ensure efficiency, resilience and performance. Customers can access a range of AI as a Service offerings for rapid deployment of GPUs, models or datasets from different providers, all while maintaining control and privacy over their own data. Additionally, as AI ecosystems grow and evolve, we will see more platform providers come to market—specifically, more domain-specific language model (DSLM) providers.
  4. Fully managed AI: Customers that reach a certain level of AI maturity may wish to deploy a dedicated technology stack and begin consuming it immediately. We will see more providers offering dedicated and fully managed solutions. For example, Equinix offers a rapid deployment architecture with NVIDIA DGX in three sizes. Customers can use this solution to optimize time to value for their AI workloads, scale their global presence, and take advantage of Equinix’s sustainability leadership.

Organizations at any stage of AI maturity can use Equinix solutions to help ensure data compliance, because they can maintain control over data that needs to be protected or stored in a particular jurisdiction. They can store sensitive data on private infrastructure and bring their models to the data. This removes the need to transfer data into the public cloud, which would mean sacrificing control and potentially allowing cloud providers to store data in non-compliant locations.
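
As a sketch of how such a control might look in practice, a residency policy can fail closed before any write to a non-compliant region. The classifications and region names below are illustrative:

```python
# Illustrative residency policy: map data classifications to regions where storage is allowed.
RESIDENCY_POLICY = {
    "eu-personal-data": {"eu-fr", "eu-de", "eu-nl"},
    "public": {"eu-fr", "eu-de", "eu-nl", "us-east", "ap-sg"},
}

def assert_compliant_location(classification: str, region: str) -> None:
    """Fail closed before writing data to a region its classification doesn't permit."""
    allowed = RESIDENCY_POLICY.get(classification, set())
    if region not in allowed:
        raise ValueError(
            f"{classification!r} data may not be stored in {region!r}; allowed: {sorted(allowed)}"
        )

assert_compliant_location("eu-personal-data", "eu-de")    # passes
assert_compliant_location("eu-personal-data", "us-east")  # raises ValueError
```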

To learn more about the Equinix approach to private AI, read our joint e-book with NVIDIA: Unleash new possibilities with private AI.

[1] Article 10: Data and Data Governance, EU Artificial Intelligence Act.

[2] Jai Vijayan, Samsung Engineers Feed Sensitive Data to ChatGPT, Sparking Workplace AI Warnings, Dark Reading, April 11, 2023.

[3] Article 15: Accuracy, Robustness and Cybersecurity, EU Artificial Intelligence Act.

[4] Ibid.