WHAT DOES ‘DIGITAL SECURITY’ MEAN TO YOU?
Context and background are important when discussing issues of security, and our perceptions of security differ. We illustrated this with an introductory video of cases collected from around the world, entitled What does digital security mean to you? Like the other issues discussed during the week – peace, food, environment, economics, health – digital security must be examined through an intersectional lens. Past generations sought solutions to immediate problems by applying the same method and logic to all of them; we know this cannot work.

For some people, digital security means the right to privacy and the assurance that private information is not made available without consent. This is a largely western-centric framing, since it assumes rights that can be enforced by judicial institutions designed for this kind of challenge. For others, digital security is the most basic right to express oneself online without fearing negative repercussions such as revenge porn, surveillance, harassment, violence, or arrest. These concerns predominate in countries lacking independent judicial institutions that will protect citizens and their right to free speech when they express themselves online. For some, digital security means a safe, free, and accessible platform; for others, it is the idea that everyone should have the same kind of access, regardless of ethnicity, gender, sexuality, or economic background.
The workshop was divided into three breakout rooms, each considering a different aspect of digital security.
HUMAN RIGHTS IN THE DIGITAL AGE

This breakout room focused on abuse and human rights violations online, much like those we find offline. Abuse, racism, sexism, and hatred are perhaps more prominent online, especially as certain public figures have appeared to legitimise this discourse. The physical separation between the abuser and the abused can take humanity out of the equation: the damage being done is not measured in the moment and might never even come to light. Yasmine Ouirhrane – an activist working on women’s rights and religious rights – spoke about the hatred and abuse she faces online for being a Muslim woman and for speaking out against racism and bigotry. This leads to the major question of who is responsible for monitoring this kind of behaviour. Bishakha Datta – also an activist – worries about being compromised by personal information being leaked or falling into the wrong hands.

Artificial Intelligence (AI) is fast being integrated into large parts of our daily lives. Used to predict outcomes, AI relies on machine learning, whereby the system learns and adjusts to different scenarios; how it learns depends on the data and variables it is given. Joy Buolamwini has explained how the lack of diversity in design teams leads to human bias being built into these systems.
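To make this point concrete, the following is a minimal, illustrative sketch (invented for this summary, not drawn from the report or from Buolamwini’s research) of how a model trained on data dominated by one group can perform far worse for an under-represented group; the groups, features, and numbers are all toy assumptions.

# Illustrative sketch: skewed training data produces skewed model performance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Generate toy samples where the label depends on group-specific weights."""
    X = rng.normal(size=(n, 5))
    y = (X @ weights + rng.normal(0, 0.3, n) > 0).astype(int)
    return X, y

w_majority = np.array([1.0, -0.5, 0.3, 0.0, 0.2])   # feature-label relation, group A
w_minority = np.array([-1.0, 0.5, 0.3, 0.0, 0.2])   # a different relation, group B

# Training data: 95% group A, only 5% group B.
Xa, ya = make_group(1900, w_majority)
Xb, yb = make_group(100, w_minority)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Equal-sized test sets reveal the gap created by the skewed training data.
Xa_test, ya_test = make_group(1000, w_majority)
Xb_test, yb_test = make_group(1000, w_minority)
print("accuracy on group A:", round(model.score(Xa_test, ya_test), 2))  # high
print("accuracy on group B:", round(model.score(Xb_test, yb_test), 2))  # much lower

The sketch simply shows that a system which learns almost entirely from one group reproduces that group’s patterns and fails the other; it is not a description of any real product or dataset.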
TRUST AND TRANSPARENCY

This breakout session focused on the relationship between users and technology. Transparency requires information to be freely available and understandable to users. Most countries do not give users information about how their personal data is used by the government or by private companies, which means there is no regulation and no protection. In Egypt or Venezuela, for example, the regimes routinely use technology to spy on and detain activists. Even the EU’s General Data Protection Regulation (GDPR), the only regulatory legislation of its kind in the world, is not written in a manner comprehensible to most people.
The use of the internet and online platforms overwhelmingly comes without any kind of subscription price. The most important social media platforms – Facebook, Twitter, Instagram, etc. – are free. Nevertheless, some of the wealthiest companies in the world are now technology platforms. They are businesses, so we must ask ourselves how they make their money. The business plan is quite simple: the currency many platforms rely on today is data, which can only be acquired from users, and through their accepting that their data be used for various monetisation purposes such as advertising.
Users are in fact not the ‘customers’ of these businesses; they are the ‘commodity’ being sold. Most of the money these companies make comes from third-party businesses, which pay for the analytics and data produced by technology platforms and which allow personalised, targeted content to be presented to users. Although some states are beginning to take a close look at these issues (the EU through the GDPR), by and large there is no consensus among states that international law can apply to cyberspace, rendering some companies immune to international regulation and, in some cases, to national legislation. Is it possible to design technology with integrated protection against exploitation? Could there be consensus among states on how to regulate tech companies?
Online platforms become echo chambers of false information – such as that spread by political parties to delegitimise their opponents or by foreign entities seeking to interfere in politics – so verification becomes difficult.
CYBER-SECURITY

This breakout session focused on the criminal activities that may take place online and affect users directly. Many of us fear that our technology might be hacked or infiltrated by third parties, and regulation to limit this kind of activity is rarely adequate. Furthermore, domestic laws in some states can themselves be used to exploit our technology, in the form of surveillance and invasions of privacy.
Most companies do not want to take responsibility for the criminal or harmful activities that take place on their platforms, as that would expose them to intermediary liability, making them vulnerable to lawsuits over what is said or done through their platforms. Nor can one rely on governments to implement meaningful regulation of tech companies, as these companies often contribute massively to a country’s wealth or wield power greater than that of the state itself. This is where the need for an international consensus becomes paramount.
There is also a responsibility on users to educate themselves and raise awareness of the dangers of using online platforms, teaching people how to behave responsibly online. Young people in particular need to be educated in digital hygiene, learning how to navigate the online space safely.
ACTION POINTS
Individuals should:
Raise people’s awareness through education regarding digital hygiene, thus promoting ‘digital literacy’;
Be firm in what they demand, and hold their local representatives to account for this;
Push for the implementation of a multi-stakeholder framework for digital regulation in their states.
Communities should:
Lobby governments to engage in the implementation of frameworks (such as GDPR in the EU) that will protect users and create better enforcement mechanisms;
Lobby for the implementation of education on the dangers of AI and the use of their data;
Address abusive online behaviour (racism, sexism, hate speech, misinformation), implementing reporting mechanisms for abuse in digital spaces.
Nations should:
Share best practices through capacity building amongst nations and empower international organisations through funding to build a safe and secure global digital architecture while bridging the digital divide;
Hold companies based on their territory to account, implementing regulations to prevent Big Tech from becoming monopolies more powerful than states;
Work towards a universally accepted normative framework concerning digital regulations and commit to providing binding enforcement mechanisms.
The International Community/UN should:
Continue to provide expertise and funding through programs pertaining to current digital trends & threats and dedicate more resources and means to Special Rapporteurs on digital issues (such as Privacy, Human Rights, Promotion of the Right to Freedom of Opinion and Expression);
Build the pathway/structure for a multi-stakeholder approach to addressing digital issues (convening forums for discussion and bringing together various actors, including companies, civil society, NGOs and human rights watchdogs);
Update our human rights framework to include proper and necessary digital human rights: a UDHR 2.0.