The reality of implementing AI
AI adoption continues to grow rapidly, driving increasing investment and demand for AI skills – for most, the question of adoption is not “if” but “when” and “how”. According to a recent McKinsey survey, GenAI is starting to deliver value across business operations, particularly in marketing, sales, and product development. Businesses are adopting AI to address specific challenges and to enhance productivity, customer service, and revenue streams.
Businesses are learning that they must address the legal, reputational, and practical challenges from the outset and on an ongoing basis. This requires a comprehensive strategy that includes clear goals, performance metrics, and a robust governance framework. Key areas of focus include managing data infrastructure, navigating regulatory landscapes, ensuring data protection, addressing IP issues, enhancing cybersecurity, and managing supply chain and talent risks.
A human-centred approach is vital to ensuring ethical use and positive employee engagement. Cultural change may be required to deliver the strategic holistic approach necessary to leverage AI effectively and responsibly.
Read more: Scaling AI to drive value: Addressing the challenges
"AI is impacting organisations in diverse and profound ways and at a speed and complexity that creates a range of risks. To drive value from AI, organisations need to address the risks in a holistic way."
Sonia Cissé
TMT Partner, Paris
AI specific regulation emerging across the world
China has created the most highly regulated landscape for AI, having passed laws addressing algorithms, deep fakes and GenAI. This year it issued guidance on training data and model security, data protection, and the government's security assessment process. In Asia more generally, new voluntary guidance was published for the ASEAN region and, specifically, for the development and procurement of AI in Hong Kong and for GenAI in Singapore.
The EU’s newly adopted AI Act has tiered obligations which start to bite from January 2025. The UK has held back on legislating specifically on AI, relying on key regulators to use existing powers and frameworks. This could change with a new government.
Meanwhile, AI litigation and regulatory enforcement are shaping the legal landscape in the US. State-led legislation is emerging, with Colorado’s new AI law potentially setting a benchmark.
Competition regulators are focused on the anticompetitive effects of AI and on how to flex existing regulatory powers. Antitrust investigations have opened into some large tech players, and agencies are scrutinising partnerships between tech incumbents and nascent startups.
Read more: Developments in the global regulatory approach to AI
"Building on the momentum of the AI Executive Order, the AI Bill of Rights, and various state proposals, Colorado passed the most comprehensive AI law in the US this May. With a classification system akin to the EU’s AI Act, it seeks to protect against algorithmic discrimination and is expected to significantly impact several sectors."
Ieuan Jolly
TMT Partner, New York
Navigating IP challenges
As companies embrace AI, they must consider the risks of IP infringement. High profile cases in the US, the UK and China highlight the risk of litigation, with many lawsuits alleging unauthorised use by AI developers of copyrighted content to train Large Language Models.
And while there is an increasing trend of GenAI providers offering indemnities covering customers for unexpected, non-foreseeable infringements, these will not protect customers who knowingly prompted the infringing response or should have known the response was likely to infringe. Customers should therefore reinforce their internal policies and implement effective risk management.
In the meantime, AI developers are increasingly negotiating with publishers to license their copyrighted content to train LLMs, allowing rightsholders to monetise their content. But the future of licensing will depend on how the courts determine that copyright law applies to LLMs. Companies will need to track developments closely, as the legal landscape is likely to differ across jurisdictions as it evolves to keep pace with tech advances.
View more: AI Webinar Series: The Latest IP Developments
"With the advent of GenAI, content creation is faster than ever and companies need to upskill employees to avoid the pitfalls of IP infringement in a shifting legal landscape."
Paul Joseph
IP Partner, London
AI and the energy transition
Generative AI has brought new insights into the role of AI in supporting transition to net zero but it has also prompted questions about how the growing energy demands of AI can be met sustainably.
Energy companies are already leveraging AI, such as tools for energy use analysis, forecasting and optimisation, thus reducing carbon emissions. However, AI, and particularly GenAI, requires vast amounts of data and computing power and this is driving ever greater demand for data centres and for electricity to power them.
US Big Tech companies have made significant net zero commitments and invested heavily in climate tech. Long-term power purchase agreements for renewable energy from major tech companies can provide strong investment security and enable renewable project financing on attractive terms, providing the basis for scaling renewable investment.
The growth in AI is therefore likely to continue to sustain strong growth in investment in the digital infrastructure needed to support it, and to shape the energy sector in 2024 and beyond.
"While AI’s demand for compute capacity is driving a significant amount of the data centre investment that our clients are undertaking, customer expectations and regulations regarding the use of renewable energy and reuse of heat byproducts are now key factors in site selection."
Alaister Johnson
TMT Partner, Singapore
AI in payments and digital identity
AI will continue to revolutionise how consumers shop and how businesses track and influence consumer spending. In the evolving world of payments, AI has been hailed as a critical technology to improve the customer experience and enable faster, more frictionless payments.
Unfortunately, the flip side of AI-enabled faster payments is AI-enabled “faster fraud”, and the industry is beset with a persistent increase in fraudulent activity. Yet when it comes to combating fraud, AI can also be part of the solution: for example, AI can be used to verify identity and to scan transactions to predict whether they are legitimate. At the same time, regulatory initiatives are under way at a European level to increase the adoption of digital identity products, thereby enhancing access to services and mitigating identity fraud risks.
Financial services regulators are laser-focused on the consumer and are seeking to evolve the regulatory framework to address increasing risks of customer harm. For payments providers, ensuring compliance when deploying AI in an evolving regulatory landscape is essential to responsible innovation and to maintaining customer trust in their products and services.
Read more: Unlocking the future of payments: The role of AI in transforming transactions
"As they deepen their reliance on AI systems, payments firms must carefully consider the regulatory risks. They should aim for an AI compliance strategy which integrates financial regulatory concerns with data protection and other areas of legal risk."
Julian Cunningham-Day
Global Tech Sector Leader and Global Co-Head of Fintech, London
Using AI to support online safety
There has been rapid growth in regulation of how internet users – particularly children – are treated online. And compliance with the varying requirements across the world has become ever more challenging.
We have seen regulations targeting illegal and harmful online content, most recently the EU’s Digital Services Act and the UK’s Online Safety Act. Children’s personal data has been afforded a higher standard of protection in various jurisdictions for several years (e.g. the Children’s Online Privacy Protection Act in the US, and the UK ICO’s Children’s Code). More recently, rules have been introduced to protect minors in specific areas, such as gaming regulation in China, a proposed law to introduce a minimum age for social media use in France, and various proposals at a federal and state level in the US. We are also seeing claimants pursuing class actions against platforms for online harms in the US and, increasingly, in Europe.
And while there are fears that AI can exacerbate online harms by creating and disseminating harmful content such as deep fakes, AI solutions are being used for verifying age and identity, detecting illegal and harmful content, and automating mitigation. As tech advances and new regulations come into force, companies will need to continue to build and refine their compliance solutions to meet the requirements of the new regimes.
View more: Games and Interactive Entertainment Webinar: The new regulatory frontier – online safety and privacy
"With increasing regulation and litigation in online safety, companies must embed effective compliance and governance practices to protect users from illegal and harmful content. Though AI can raise risks in this area, it also will continue to play a key role in mitigating harm by helping companies identify and address harmful content at-scale."
Ben Packer
Litigation, Arbitration and Investigations Partner, London
AI ambitions in the Middle East
The United Arab Emirates, the Kingdom of Saudi Arabia and other regional leaders in the Middle East have set themselves an ambitious goal: to become a principal global hub for AI technology development and deployment as they diversify their economies beyond oil and gas. The region has recently seen increased interest in GenAI investments across many key sectors of the economy, driven by growing adoption among companies and a characteristically young, mobile-first consumer base.
Governments in the region are actively investing in AI R&D and deployment, a push driven by government initiatives and joint ventures, corporate partnerships, a surge in VC funding for AI startups, and the development of local data centre infrastructure to support local AI uptake.
The Kingdom of Saudi Arabia and the UAE in particular are looking to support innovation with business-friendly regulation of AI, as well as comprehensive data protection regimes aligned with international best practices. This year Saudi Arabia established a $100 billion fund to invest in AI, while the UAE, with its reputation as an appealing destination for skilled workers and entrepreneurs, is focused on attracting AI talent and businesses – for example, through Microsoft’s recently announced $1.5 billion investment in UAE-based G42, supporting the expansion of the region’s AI workforce.
Read more: The Middle East emerging as an oasis for tech and fintech
"The UAE, Saudi Arabia, and other Middle Eastern nations are embracing AI. While the regulatory approach to AI and data protection varies between countries, regulation is generally light touch and businesses are pushing for greater regulatory harmonization. This evolving landscape offers both opportunities and challenges for AI adoption in the region."
Nick Roudev
TMT Counsel, Dubai