definition cyberhate
the use of electronic technology to spread bigoted, discriminatory, terrorist and extremist information, manifested on websites and blogs.
- It creates a hostile environment and reduces equal access to technology's benefits for those targeted by hatred and intimidation.
the 5 parts of dark participation
Actors of dark participation
individuals
groups
influencers
media
reasons for dark participation
authentic evil: attacks driven by personal hate or pleasure (aimed at individuals)
tactical: more controlled and planned
strategic: large scale manipulation campaigns
physical appearance
political view
gender
Objects/targets of dark participation
religious groups
minority ethnic groups
gender groups
political groups
mostly 18-39 years old
processes of dark participation
unstructured
structured (situational)
systematic (long-term)
social approval theory (not from Walther, but from the slides)
4 types of harm
typology of severity
dimensions of severity
mnemonic: ppaesuvms
how to cope with online hate
health vs professional consequences of online harm
Health consequences: fear, spiral of silence, anxiety, depression, stress…
Professional consequences: loss of productivity, reputational damage, loss of confidence, stopping coverage, self-censorship
Walther on social approval theory
People generate hate messages online primarily to accrue signals of admiration and praise from sympathetic online peers and to make friends.
hypotheses and results of the article by Frischlich et al.: how personality, media use and online experience shape uncivil participation
RQ: do dark personality traits, political attitudes and emotions, media use, and users' experiences (civil or uncivil) lead to one's own uncivil behavior?
Results: 46% who witnessed incivility also engaged in uncivil participation
- exposure to uncivil comments and hate speech strongly predicted uncivil participation
- strongest predictor: personal experience with online victimization
- RQ confirmed (overall)
definition dark participation
It is characterized by negative, selfish or even deeply sinister contributions to the news-making processes (such as “trolling”)
what solutions do people have for online hate
risks of removing anonymity
mnemonic: vcrcipsph
differences between automated content moderation and human moderation
automated content moderation:
- removes problematic content without direct, consistent human intervention (runs automatically)
- example: ChatGPT
human moderation
- moderation of tweets by humans
- leads to dissatisfaction and emotional exhaustion among the moderators
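The contrast above can be illustrated with a minimal sketch of the automated side (a hypothetical example, not any real platform's system): a simple blocklist filter that removes posts without any human in the loop. Real systems use trained ML classifiers rather than keyword lists; the `BLOCKLIST` terms here are placeholders.

```python
# Hypothetical sketch of automated content moderation: a keyword-based
# filter that decides on each post with no human intervention.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not real data

def moderate(post: str) -> str:
    """Return 'removed' if the post contains a blocked term, else 'allowed'."""
    # Normalize: lowercase and strip trailing punctuation from each word.
    words = {w.strip(".,!?").lower() for w in post.split()}
    return "removed" if words & BLOCKLIST else "allowed"

print(moderate("a normal comment"))      # allowed
print(moderate("this contains slur1!"))  # removed
```

The trade-off the notes describe follows directly: the automated rule scales cheaply and consistently but misses context (irony, reclaimed terms), while human moderators handle nuance at the cost of the emotional toll mentioned above.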
2 types of automated content moderation
definition content moderation
the detection of, assessment of, and interventions taken in content or behavior deemed unacceptable by platforms or other info intermediaries
- including the rules they impose, the human labour and technologies required, and the institutional mechanisms of adjudication, enforcement and appeal that support it
3 types of approaches to content moderation and their explanation
safe space vs. freedom of speech
I don't fully understand this ;) 2 examples with experiments
certain type of power-shift of media over the last decades
mainstream media and political institutions lost most of their power –> unmediated populist nationalism for the internet age
explanation “knowledge, info and data triangle”