To what extent should robots and softbots be considered independent ‘persons’ with rights? And who owns big data? Klaus Heine, Professor of Law and Economics at the Erasmus School of Law, is researching these questions. The coronavirus crisis is bringing privacy and big data issues to the fore, and his conclusions appear more relevant than ever: “I think it’s interesting to consider the idea of a non-human decider: an algorithm, a robot that has technical and legal independence and is responsible for data collection or interpretation of these data.”
Recently, the historian Yuval Noah Harari published an essay in the Financial Times about the world after the coronavirus and its implications for, among other things, how governments monitor citizens, for instance with a corona app. Which privacy issues are under threat right now?
“I don’t think Harari’s starting point is very original, and it’s also somewhat black and white. From my background, I’d say mankind has an obligation to use available technology to save lives where possible, even if big datasets need to be used for this. An abstract concept such as privacy or freedom cannot override the life of a human being. The challenge is not to safeguard privacy in big data at the expense of everything else; the challenge is to respect the one value while retaining the other.”
What is a robot’s role in this?
“I think it’s interesting to consider the idea of a non-human decider: an algorithm, a robot that has technical and legal independence and is responsible for data collection or interpretation of these data. I know from my research at the Jean Monnet Centre of Excellence on Digital Governance that we have several scientists and engineers in the Netherlands who can encrypt data and introduce algorithms that make it impossible to trace these data back to individuals. For instance, you could program these in such a way that the data only become visible for people who are in danger. Abandoning privacy as a value is not the solution; ensuring that we safeguard privacy by using even smarter technologies could be a solution. People should, however, remain responsible for selecting the technology, and this should be legitimised democratically via governance.”
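To make the idea of conditional visibility concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration (the PrivacyGate name, the escrow mapping, the 0.9 risk threshold), not a description of any system Heine or the Centre has built; a production system would rest on proper cryptographic escrow or secure hardware rather than an in-memory dictionary.

```python
import hashlib
import os
from typing import Optional


class PrivacyGate:
    """Hypothetical gatekeeper: data live under pseudonyms, and a real
    identity is disclosed only when a danger condition is met."""

    def __init__(self) -> None:
        self._salt = os.urandom(16)   # secret held only by the gatekeeper
        self._escrow: dict = {}       # pseudonym -> real identity

    def pseudonymise(self, user_id: str) -> str:
        """Return a salted-hash pseudonym and keep the mapping in escrow."""
        pseudonym = hashlib.sha256(self._salt + user_id.encode()).hexdigest()
        self._escrow[pseudonym] = user_id
        return pseudonym

    def reveal_if_at_risk(self, pseudonym: str, risk_score: float,
                          threshold: float = 0.9) -> Optional[str]:
        """Re-identify a person only if their risk score crosses the threshold."""
        if risk_score >= threshold:
            return self._escrow.get(pseudonym)
        return None                   # below the threshold, the identity stays hidden


gate = PrivacyGate()
alias = gate.pseudonymise("citizen-42")
print(gate.reveal_if_at_risk(alias, risk_score=0.30))   # None: privacy preserved
print(gate.reveal_if_at_risk(alias, risk_score=0.95))   # citizen-42: the danger case
```

The design point matches the one Heine makes: data stay pseudonymous by default, and an identity is released only under a danger condition that people select and that is legitimised via governance.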
Why is it important to confer legal status on a robot?
“That is the question: is this in society’s interests? The same question came up two hundred years ago in relation to companies, with the introduction of company legal status. A company, a private limited company, is distinct from a person. It has a human crew, but it also has its own legal status. Britain needed to confer a certain legal status on its colonial companies operating in America and India. That, too, was a technological disruption at the time. A British parliamentarian made the famous statement: ‘Did you ever expect a corporation to have a conscience, when it has no soul to be damned, and no body to be kicked?’ (Baron Thurlow, 1731-1806). We face similar issues today relating to AI.”
So the answer is ‘yes, it is in the interests of society’?
“The question is whether, as a society, we want to confer this status on AI. Yes, it could help us resolve liability problems better, or facilitate intellectual property rights. But it does perhaps sound a little like a science-fiction story: ‘the robots are taking over’. How to shape such legal status is still a topic of scientific discussion. It’s also not something you can arrange in a week. It took two hundred years to establish all the differentiated company forms that we know today.”
Can you give a concrete example of why this would be handy now?
“You have softbots, for instance, that can arrange a contract with you by telephone. What happens if there’s an error in such a contract? Who is liable for that error? Is it the company that uses the softbot, or its programmer? Perhaps the programmer would say: ‘This is a deep-learning softbot; it has learned to act like this from its interactions.’ Sometimes you can no longer assign responsibility to a person.
And what incentive does such an artificial intelligence have to comply with the law? It won’t be worried about a fine. These are major issues, and they matter for legal theory too. Some colleagues say the law doesn’t need to change; we just need to apply it correctly. Others say this is a major technological disruption and that we need to consider new legal frameworks, otherwise things will go wrong.”
And you belong to the second group?
“I believe that the new technology demands new legal frameworks, yes.”
Your other research topic involves who manages big data.
“Artificial intelligence always goes hand in hand with big data. How should we organise our relationship with big data? Who owns the data? There are essentially two governance models in the world: in the United States the tech giants are in control, while in China there’s a kind of state monopoly capitalism. Ultimately it’s the sheer scale of big data that demands new regulation under both models. So we need to consider new forms of governance as well.”
How can these kinds of issues ever be resolved?
“We were dealing with these same issues during the emergence of nuclear energy in the 1950s. It offered great advantages, but it was also high-risk. We could actually say the same things about big data and artificial intelligence as we say about nuclear energy; it has huge potential, but it’s not without major risks.
The decision we made about nuclear energy was that the government would be the owner. In Europe there’s EURATOM: an independent institution, comparable to a central bank and home to highly specialised staff, that organises nuclear energy independently of individual governments. Something similar could be developed for big data, so that the data remain public while companies are allowed to use them. If things go wrong, you can pull that permission back.
Such new models and new regulations are also needed to protect vulnerable infrastructures. It is not enough to say: ‘Private companies, please comply with the law!’ You need an authority to act as a watchdog. However, civil servants have insufficient technological knowledge; at this level, it exists only within companies. So we need to develop a partnership: a system in which public and private converge, decided democratically and arranged fairly.”
Isn’t the Netherlands too small to be part of this development?
“I think that Europe can play a significant role. How technology is embedded in a social and legal framework is just as important as the technology itself. Although Europe is perhaps not a leader in technology, we can help create a legal framework that is attractive to people, and ultimately that will also attract financial capital. Within Europe, the Netherlands can act as a kind of laboratory for finding the best legal frameworks to bring mankind and technology together.”
Klaus Heine received a Jean Monnet Chair for Economic Analysis of European Law in 2012. He has been Director of the Jean Monnet Centre of Excellence on Digital Governance since 2019. The centre is a joint initiative of Erasmus University Rotterdam, Bar-Ilan University and the University of Leeds and searches for new institutional solutions for the disruptive challenges of digitisation.