Widespread interconnectivity and the proliferation of data have created an attractive target for criminals and other malicious actors, who exploit it in a variety of ways. This may take the form of social engineering, in which people are manipulated into voluntarily handing over secure or confidential information, or of hacking, in which outsiders infiltrate a network of devices for various illegitimate purposes. Furthermore, in professional environments digital (sexual) harassment can have devastating consequences for its often vulnerable victims.
Companies and public authorities are trying to address these risks by stepping up digital surveillance. This comprises a wide spectrum of initiatives – including the use of so-called smart applications – that are based on ‘big data’ and ‘artificial intelligence’. Enhanced surveillance may produce (social) benefits, but it also creates specific new risks of its own, and not only in the field of privacy. Thorough research is needed to understand the core elements of this shifting ‘playing field’.
Furthermore, the fact that digital platforms collect large amounts of data on markets and consumers to improve products and services poses serious challenges for data protection legislation, especially within critical infrastructure systems. Law enforcement agencies will be interested in the collected information, which may include evidence of criminal behaviour by citizens. Since digital platform economies are often transnational, it is difficult for law enforcement to gain access to this information, and traditional instruments for interstate judicial cooperation are not effective in this regard. New legal instruments must therefore be created at the national and international level.
Finally, digital platform economies are dominated by highly competitive entities that focus on technological innovation and (artificial) intelligence but typically still need some low-cost human labour. This offers opportunities for undocumented and other labour migrants, who often find themselves in an environment with poor labour relations, leaving room for inequalities deriving from their migration status, ethnicity, and gender. They may also be compelled to engage in criminal online activities. Research in this field should focus on where these conditions amount to discrimination and exploitation of vulnerable workers.
We therefore pay attention to the many ways in which employers, employees and job candidates are downstream from the securitisation of digital infrastructures. Theme 6 investigates the criminal and wider societal risks deriving from increased digitalisation: how do these risks develop as digital technologies become ever more ubiquitous, and what is the magnitude of the threat they constitute? What vulnerabilities are built into specific algorithmic approaches, but also into regular social media communication? How do these and other uses of digital media facilitate crime, and how should this be addressed? How could legislation help to mitigate cybercrime? What can be done to help victims of cybercrime, and how can we help people to recognise the threats? What changes in professional behaviour would be needed to better address cyber risks, for example by becoming more sensitive to virtual deception and to the sharing of data, and in general by improving cyber hygiene? How could digital support help to detect suspect online transactions, for instance in the financial sector? We work with a multidimensional understanding of work and the workplace that includes public-facing forms of strategic communication.
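As a purely illustrative sketch of what such digital support for detecting suspect transactions might look like, the hypothetical example below flags amounts that deviate strongly from a customer's historical pattern using a simple z-score rule. The function name, fields and threshold are assumptions for illustration only, standing in for the far more sophisticated models a financial institution would actually deploy.

```python
from statistics import mean, stdev

def flag_suspect_transactions(history, new_amounts, z_threshold=3.0):
    """Flag incoming amounts that deviate strongly from past behaviour.

    `history` is a list of a customer's past transaction amounts;
    `new_amounts` are incoming amounts to screen. An amount is flagged
    when its z-score against the history exceeds `z_threshold`.
    This is a toy heuristic, not a production fraud model.
    """
    mu = mean(history)
    sigma = stdev(history)
    flagged = []
    for amount in new_amounts:
        # Guard against a zero standard deviation (constant history).
        z = abs(amount - mu) / sigma if sigma > 0 else 0.0
        if z > z_threshold:
            flagged.append(amount)
    return flagged

# Example: routine grocery-sized payments, then one unusually large transfer.
past = [20.0, 35.5, 18.0, 42.0, 27.5, 31.0, 24.0, 38.0]
print(flag_suspect_transactions(past, [29.0, 950.0]))  # → [950.0]
```

Even this toy example shows why behavioural context matters: the same amount can be routine for one customer and anomalous for another, which is precisely where AI-supported screening intersects with the human and organisational questions raised above.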
Addressing this topic – both understanding and effectively responding to cyber risks – requires input from various disciplines. Both cyber-criminal activity and its mitigation have economic effects on an organisation and on its victims, costing time, productivity and resources. Given the vulnerabilities of systems, appropriate regulatory measures must be devised and enforced across legacy, current and future technology. This requires effective communication and an understanding of the mediated practices of users. These differing approaches also provide an opportunity to develop more holistic approaches to cyber-secure practices that move beyond binary divisions between technology and the human factor, towards the recognition that cybersecurity is always an integration of human practices with technological use. Combining AI-enabled identification of cybersecurity risks with behavioural changes at the personal and organisational level is key to achieving more stable and less vulnerable contexts in multiple domains.
Leads
- Alberto Quintavalla, Erasmus School of Law
- Daniel Trottier, Erasmus School of History, Culture and Communication
Team members
- Elise Alkemade, Erasmus School of History, Culture and Communication
- Aviv Barnoy, Erasmus School of History, Culture and Communication
- Clara Boggini, Erasmus School of Law
- Julie Hoppenbrouwers, Erasmus School of Law
- Melinee Kositwatanarerk, Erasmus School of Law
- Julia Krämer, Erasmus School of Law
- Shu Li, Erasmus School of Law
- M (Mthuthukisi) Malahleka, Erasmus School of Law
- Larisa Munteanu, Erasmus School of Law
- Enrique Santamaria Echeverria, Erasmus School of Law
- Sascha van Schendel, Erasmus School of Law
- Wouter Scherpenisse, Erasmus School of Law
- Sophie van der Zee, Erasmus School of Economics
- Cees Zweistra, Erasmus School of Law