Technology

Devika Kornbacher, co-chair of Clifford Chance’s Global Tech Group, speaks with Global Finance about technology regulation, litigation surrounding generative AI, and the global state of the technology sector.
Global Finance: What is the state of case law regarding generative artificial intelligence (AI) and its derivative products? Authors are suing the company behind the largest generative AI engine for using their content for training without permission. Where do things stand?
Devika Kornbacher: The New York Times case is a groundbreaking one, and it has everyone’s attention. The claims and defenses have already shifted. Any foundation model needs to be trained on something to be useful; that is just a fact. If you train on a small universe of data, that is all you get. If your goal is accurate, rich, smart generative AI, you need to train it on everything, but that raises intellectual property (IP), copyright, and compensation issues. These fights are how we will find the rules.
These cases raise several important questions. For example, one side claims it is not infringing copyright because the model reads a website, learns from it, and moves on; the discussion then turns to fair use rather than copying. The claims and defenses have shifted before the fight has even begun, and there is not enough legal precedent to predict with certainty how this will play out.
GF: How does the lack of case law affect innovation in generative AI?
Kornbacher: That is our challenge. Many regulations have yet to take effect, such as the EU AI Act. Technological innovators are moving ahead and playing by their own rules. While some parts of the world have decided to rein in innovation through subpoenas, other parts fear being left behind. China already has regulations, innovation, and a plan. In other countries, a dissonance exists between case law, regulation, and innovation.
GF: Is there a country that resembles the Wild West, with little regulation when it comes to innovation?
Kornbacher: Some argue that America is the Wild West when it comes to innovation, with everyone doing everything, but we do have some enforcement. At the other end of the legislative spectrum is China. While the rest of the world is still figuring out how to run the race, China has been more decisive about how it runs its race: it put regulations and rules in place that directed people in how to develop the technology. Meanwhile, Europe, the US, and the rest of Asia are still finding their balance.
GF: As companies develop technology, they seek to understand how business and regulation differ across borders. What are the key issues for companies in this sector?
Kornbacher: There is a fear that a company will look at what others are doing and miss out on opportunities as it races to stay competitive. That fear sits at the top of the list, especially with generative AI, since companies have been using AI for machine learning and analytics for some time. But in reality, most of them are in the same place. Many companies that talk publicly about their technology don’t yet have a production version.
Once companies realize that their competitors are not far ahead, they need to adjust their risk management programs. In financial services, companies have global risk management teams that need to understand what risks these regulations attach to the use of AI, and who is responsible for managing the risks that AI poses to their organizations. Regulations do not always clearly define who is responsible if something goes wrong: the user of the AI tool or its provider. Companies are also wondering how to structure their corporate governance frameworks, and whether to take a principles-based approach or a detailed, step-by-step path toward compliance.
GF: Is global AI regulation achievable? Why is there no unified regulation?
Kornbacher: From a regulatory perspective, we cannot develop a global solution that dictates what the whole world must do, because each government does things differently.
The EU has always approached problems by drafting very comprehensive legislation that involves all member states. That is how its regulation works: rules come from the top, are promulgated, and people begin to design compliance programs around them.
There will not be anything fast and comprehensive in the United States; there will be no single, comprehensive regulation of AI. There is, however, an executive order that gives instructions to many government agencies, even though it does not regulate every element. Many of those agencies are applying existing regulations, and others are bringing enforcement actions against uses or developments of AI they consider out of bounds. That works for the US.
Now consider jurisdictions where governments have been in transition for years but may be able to pass legislation quickly. You need to weigh both how quickly regulation can be passed and whether there are resources to actually enforce it. Brazil quickly passed privacy legislation, but its effect was delayed by a lack of enforcement. Is it more urgent in a given region to get regulation on the books, to find a way to actually enforce something, or simply to leverage existing privacy laws to get more people paying attention to AI? You need to think carefully about what you actually need. I don’t think racing to regulate is the right answer for anyone, anywhere.
GF: What global regulations can we expect in the future? What technology risks do they pose?
Kornbacher: The idea of global regulation, or some kind of treaty, doesn’t seem likely anytime soon. But global principles are beginning to take shape around accountability, safety, and transparency, and they are being incorporated into local regulations and even local enforcement.
Technology is developed locally and, in some respects, used and consumed locally, which makes global regulation much more difficult. When it comes to technology risk, technology knows no borders, but risks vary by region. Global companies that build products using AI-enabled hardware, and that rely on developers and suppliers from countries on restricted lists, face increased embargo risk. Yet if a company doesn’t use those suppliers, it falls behind, because the technology is already mature. The economic risk is the potential loss of billions of dollars of market share over the next decade. All of these risks are bundled with the technical ones.
GF: Over the past few years, merger activity declined significantly due to economics rather than regulation. Now the tables have turned, with technology regulation appearing to get in the way of some deals. How do you balance the need for regulation with the need for more deal activity?
Kornbacher: During the dot-com bubble, every little startup was worth $1 billion and was going to change the world; then the bubble burst. From that world we moved to technology consolidation, giant corporations, Big Tech.
Big Tech grows through merger activity. Instead of developing technology in-house, these companies just buy it. We have lived through a world of technology consolidation and innovation, and now we have regulators saying companies are getting too big. Regulation can absolutely affect how consolidation and merger activity play out. I do think we’re at a stage where Big Tech remains Big Tech, but it’s no longer easy to buy everything, and trying to combine two Big Techs into one will be almost impossible. You could be forgiven for seeing some large-scale breaches and misuse of information and blaming it on companies being too big.
Regulators can’t watch every corner, so regulations dictate what companies need to do. I think technology regulation will affect merger activity and how people think about it; companies will be more cautious about acquisitions than in previous iterations of the tech market.