I’d like to think there was a movement afoot to hold to account social media companies, and the large organisations that use them unquestioningly, but this isn’t going to happen any time soon. Those in leadership positions in large organisations could be calling for a ‘cultural swing’, as Bex Lewis comments in my previous blog post on a similar topic. In all the furore surrounding the misuse of social media to abuse, scare and taunt women MPs during the EU referendum and beyond, and the news that female members of the Labour party have been subjected to much more online abuse than their male counterparts, there has been very little focused attention on the responsibilities of the companies that host these comments, or on corporate responsibility alongside individual responsibility.

In the months since I published The Web We Want Part 1, I have learned the following:

1. Social media companies are not regulated like more traditional providers of public services. Should they be? To give an example, during the Rio Olympics Twitter swiftly removed content videoed and shared by individuals that violated copyright laws, protecting big media companies. The journalist and thought leader Zeynep Tufekci writes eloquently about how embedded social media is within society and how it has become a necessity, yet ‘our social commons on the Internet are now mostly corporate controlled’. How can Twitter respond slowly to trolls, yet take down content that competes with commercial interests within hours? They have demonstrated it is technically possible. If social media is playing such a large part in the democratic process, isn’t it time there was some regulation around how social media companies act?
2. The UK Digital Economy Bill is in its final stages. To me the planned bill appears slow to catch up with the speed of change and doesn’t respond adequately to the fact that many social media companies tend not to be UK based, don’t pay appropriate sums of corporation tax, and aren’t regulated. I have read through the Bill and can’t find any mention of responsibilities regarding problems with unwanted communication, except for the need for Ofcom to act on nuisance telephone calls. Unlike New Zealand, which is making cyberbullying a specific crime through its Harmful Digital Communications Bill, the UK prosecutes individual abusers through three existing laws: the Malicious Communications Act, the Communications Act and the Protection from Harassment Act. Laurie Penny points out in her book Cybersexism: Sex, Gender and Power on the Internet: ‘Technically, threats of rape and violence are already criminal, and many social media companies, including Twitter, already have rules against abuse and harassment. Just like in the offline world, however, there is a chasm of difference between what is technically illegal and what is tacitly accepted when it comes to violence against women, and the fight back is less about demanding new laws than ensuring existing ones are taken seriously.’ Although I agree with Penny on the whole, I do think we are missing a chance to tighten up the law if we ignore the debate around whether social media companies should be regulated just as the telecommunications industry is, and whether they should fund a regulatory body like Ofcom (as telecommunications companies have to).
All of this is important to how we use and approach social media in higher education. Recently I facilitated a technology enhanced learning workshop at a UK university. During one of the breaks a participant showed me how he had set up a Facebook page with over 7,000 members: a community keen to solve problems in a professional STEM context. This wasn’t linked to his job at the university; it was an additional interest. It was a thriving global community he was rightly proud of, and he was hoping to explore ways in which he could mirror this success with his students. However, he admitted that on the negative side, trolls did join the group and were abusive. More worryingly in many ways, these trolls posted dangerous solutions to some of the problems members of the community posed. The response from the community had been to ensure moderators were operating in different time zones, ready to take down malicious content.

How would you have advised him to proceed with using Facebook to enhance student learning? I felt very disappointed that the energy and work of someone wanting to share willingly was being so poisoned. I mentioned in an earlier post a sneering attitude among some proponents of technology in education towards the ‘walled garden’ approach to safeguarding students. The only adequate suggestion I could make to my colleague was to create a closed Facebook group – but isn’t this just a ‘walled garden’ approach by another name? Why should someone who is so willing to share without financial remuneration be left on his own to devise ways to avoid malicious posts? Why are social media companies so willing to keep their distance and avoid offering real, practical support for those who want to use social media for good? Is it time to think of the internet as a municipal park? If so, we will want to protect it against becoming what Central Park was in the 1970s and 80s.