Youth Perceptions of Risk, Law and Criminality on Social Media (press briefing)
FOR PRESS ENQUIRIES CONTACT:
Email: Holly.Powell-Jones@city.ac.uk
Call: (0)20 7040 8788 (City, University of London Press Office).
Download a PDF of the full thesis here.
___
A study with 11- to 18-year-olds reveals the extent to which children are confused about the law when it comes to sharing indecent images, hate speech, or threats and abuse online…
Research from City, University of London analysed children’s responses to a variety of examples of online posts, including material that could constitute hate speech, sexual imagery, harassment, and allegations of child abuse.
Young people were asked to consider what they thought the legal risks might be for the person who shared the material, including whether they thought it was a criminal matter for the police.
The example with the least consensus about risk was a post representing the sharing of indecent images, potentially of a child (Post G - see appendix 1). This is despite a huge amount of work in schools and discussion within the media about the criminality of so-called ‘revenge pornography’ and ‘underage sexting’.
The next most debated post related to allegations of a sexually abusive teacher-pupil relationship (Post I - see appendix 1), suggesting children are unclear on either the procedure or imperative for safeguarding disclosures.
Overall, this research reveals a lack of consensus on what is risky to post, common misconceptions about the law online, and the need for clearer education for children in this area.
The research was conducted by Dr. Holly Powell-Jones: her social media law and ethics training for schools has been delivered to tens of thousands of children since 2013, as part of a Police and Crime Commissioner funded project in collaboration with Eagle Radio Ltd. More info here.
Other main findings include…
· Disclosure of child sex abuse seen as risky: Common responses to an allegation of sexual abuse of a pupil by a teacher included: A) assertion that the claims were probably false, B) concern over the potential damage to the reputation of the teacher or school, and C) concern that the victim would get into trouble. None of the young participants suggested this was a safeguarding issue that ought to be raised; in fact, disclosure was generally viewed as a bad idea.
· Perceptions of risk decreased with age: Older children (16+) were more likely to rate content as being ‘lower risk’ (e.g. purely social), while younger children were more likely to rate content as ‘higher’ risk (e.g. criminal category). This suggests techniques for justifying or excusing offensive content may be learnt as we age.
· Homophobic posts viewed as less risky than racist posts; sexism/misogyny not recognised at all: Racist posts were commonly (but not unanimously) labelled as racist by young participants, and given higher risk categories. Homophobic sentiment was less clear-cut/more debated among children, and more likely to be given a lower risk category. The labels ‘misogynistic’ or ‘sexist’ were not used by any of the young participants, suggesting a lack of awareness of the meaning, a lack of vocabulary to describe the sentiment, or both.
· Abusive content viewed as ‘normal’/widespread online: Common justifications for low risk ratings were ‘this happens all the time’, ‘loads of people do/say this’, etc., suggesting that the more young people encounter or witness abuse, the more likely they may be to assume there is less risk involved.
· No policing of digital environment/ No consequences for abuse: Similarly, justifications for low risk ratings were frequently due to the perception of no consequences or sanctions for digital abuses. The perceived tolerance of offensive or hateful speech lowered children’s perceptions of risk.
· Young people were not ‘naïve’ about sexual offending: Many participants were comfortable – even confident – initiating discussions on rape, paedophilia, consent, etc., suggesting a high level of awareness of these issues. Children used the focus group space to discuss broader criminal and sexual issues, beyond those intended by the researcher. Adult assumptions that knowledge of sexual crime will be ‘damaging or harmful’ to children should be challenged.
· Victim blaming/ victims as responsible: The targets of online comments were responsibilised for their victimisation, either because of something they could have done before (e.g. a girl avoiding participating in a sexual video) or after (e.g. someone choosing not to be offended by comments about their sexuality) a message was posted. Equally, victims or targets were thought to be the ones likely to take action against a perpetrator, rather than an authority such as parents, school, social media companies or the police acting on their behalf.
· Free speech, ‘just words’ and ‘opinion’: It was common for posts to be justified as ‘just words’/’just saying’ (versus ‘doing’ anything), suggesting that publishing material online may not be viewed as an act by young people, and that they may think of ‘harm’ or ‘criminality’ as having to be physical (not psychological, emotional, etc.). Jokes, banter, humour, free speech, and opinion were also frequently used as justifications for a lower risk rating for content. Further education is needed on the right to freedom of expression (Article 10 ECHR) and the limitations/restrictions on this right in an online context.
· ‘No consensus? - No problem!’: Young people did not seem concerned about failure to reach a consensus among their discussion groups. Varying perceptions were largely viewed as ‘equally valid’, with little argument over who was ‘correct’ about the legal risks. While respect for the expression of different viewpoints is vital for healthy debate, motivation to reach a collective agreement is also a necessary social skill we need young people to develop.
Lead researcher Dr. Holly Powell-Jones says:
“We need to improve public education about law, crime and human rights in the digital realm and it’s high time this formed a compulsory part of a national curriculum. Young people are not just ‘consumers’ of online content, but are active creators and distributors, too. They need and deserve education on this.
“The fact that many teens often dismissed or downplayed harmful or criminal material is a real concern. This not only means young people might be more likely to misjudge the risks when sharing harmful content themselves, it also means bystanders may be less likely to report, or victims may be less likely to seek adult support or police intervention - even with serious criminal offences.
“We also have to ensure those who report illegal content or behaviour online see action from social media companies, schools, parents and/or the police, as responses suggest that trust in authorities seems weak.
“There is also a moral debate that needs to happen alongside legal education, in relation to rights and responsibilities - including freedom of expression - online. Children need to be more informed about where current criminal boundaries lie, but also be free to question and challenge laws if they disagree with them.
“Above all, we need to keep conversations open with young people. They represent the largest stakeholders in our digital future and their perspectives ought to be included as part of any strategies for change going forward. Policy-making adults have as much to learn from young people as they do from us, and we need to start listening to what they have to contribute.”
Note for editors:
· Findings are based on 184 participants, aged 11-18 years (see appendix 4: sampling breakdown).
· Pupils were divided into Year Groups (7 – 13) for comparative age analysis.
· Participant observation and verbatim quotations were taken during nine 1-hour focus group workshops, within two large state secondary schools in England.
· 12 example online posts were mocked up to represent a range of different behaviours, as stimuli for discussion (See Appendix 1). For more information about the law relating to criminal social media content or conduct, see: CPS guidelines for prosecuting cases involving communications sent via social media (updated August 2018).
· Pupils were asked to consider four categories of risk for the sender in each case, represented by a traffic light colour coding system: Criminal risk (e.g. arrest) = [RED] - Civil risk (being sued) = [ORANGE] - Social risk (informal sanctions) = [YELLOW] - No risk = [GREEN] – these were noted and quantified for each example post (see Appendix 2).
· The research was funded by the Centre for Law, Justice and Journalism at City, University of London as part of a doctoral scholarship award. The lead researcher was Holly Powell-Jones, and the supervisors were Prof. Chris Greer and Dr. Carrie Ann Myers - based within the Sociology department (SASS - School of Arts and Social Sciences).
· Inspiration for the study came from ‘Online and Social Media Law and Ethics’, a four-year county-wide pilot schools project, funded by the Office of the Police and Crime Commissioner for Surrey, and run in collaboration with Eagle Radio Ltd. In 2016-17, more than 15,500 pupils received educational workshops and assemblies from trained professionals, across more than 50 schools. (Reports with findings from this project are available here.)
APPENDIX 1 - Shows the 12 example mock-up posts given as stimuli for focus group discussions.
A total of nine 1-hour focus group style workshops took place within two large state secondary schools. The number of pupils within each session, and within each group, was noted, as were the gender makeup* and age of the pupils.
The researcher took verbatim quote notes of pupil discussions during each session, and noted the number of times each of the four risk categories was mentioned (R = Criminal risk; O = Civil risk; Y = Social risk; G = No risk). Individual responses were amalgamated into one section so that no individual pupils could be identified by their comments.
Although sexist posts (e.g. C, I and G) had the highest number of criminal risk ratings, they also had a higher number of lower (social) risk ratings, suggesting more debate and disagreement as to the level of risk involved for a sender, in comparison with homophobic or racist content. Equally, the ‘threatening violence’ element in posts A and E likely contributes to the lack of ‘Green’ risk ratings. There is some indication from qualitative responses that ‘hate speech’ is unlikely to be viewed as such without this ‘threatening violence’ element, which in part reflects the legislation (Part III of the Public Order Act 1986). This is the case for sexual orientation and for religion, but not for racist hate speech. Sexism/misogyny is not currently categorised under hate speech or hate crime nationally, but is being considered as part of the Law Commission’s review into hate crimes.
Notably, the gender makeup of the group appears to affect the risk categories suggested by pupils for post G (suggested sharing of indecent images/a sexual video - potentially of a child), with mixed groups opting for lower risk ratings than all-girls groups*. This reflects findings from other research projects, which suggest that men and boys are more likely to be vocal about their perspectives than women and girls when in a mixed-gender group.
Contact Holly.Powell-Jones@city.ac.uk for any enquiries, comments, or for further information about the study. Verbatim quotes from participants and a full PDF of the thesis can be made available to journalists on request.
___
*A note on gender observation: The question of whether or not to include/observe participant gender was difficult to address. On the one hand, it is generally acknowledged that ‘observing’ gender is hugely problematic, given participants’ right to self-identify as trans, intersex, or gender non-binary or neutral, and/or participants who may present as gender non-conforming (Bradford, 2007; Macdonald, 2013). However, failing to include gender also presented a research problem. Single-gender focus groups are considered useful when exploring sexual practices and activities within school (Allen, 2004). Also, there is some evidence that girls tend to hold back and let boys take the lead during mixed-group discussions (Shucksmith and Henry, 1998). There have been calls for more research to explore how topics such as sexting could be discussed and framed differently within mixed-gender groups versus single-sex groups (Ringrose et al., 2012). In addition, issues such as ‘sexting’ and ‘revenge pornography’ (see: example post G) are inescapably linked to gender inequalities, making a ‘gender blind’ researcher lens illogical (Salter, 2017). For these reasons, the researcher took note of what was ‘observed’ to be the gender presented (boys or girls), while acknowledging that this, like much of the observational field notes, is highly interpretive and subjective, and open to challenge (p82).