Thursday, April 06, 2017

The Rise of Bot-based Collective Blocklists on Twitter...

Yes, it's a mouthful, but there is an emerging trend known as Bot-based Collective Blocklists that might already be influencing your social media feeds.

Here's the idea.  In networked publics such as Twitter, harassment campaigns are often launched by groups to intimidate, insult, and threaten specific users.  From the perspective of democratic discourse, such campaigns can have a severe chilling effect on speech in these spaces, and typically it is up to the targeted individual to block their harassers one account at a time - an overwhelming, if not completely impractical, task.

As described by R. Stuart Geiger, one solution that has emerged to counter these harassment campaigns is known as collective blocklists.  Basically, instead of each individual having to block their many harassers, a community of such individuals can add their harassers' information to a blocklist that is then shared with, and can be subscribed to by, the entire community.  It's a crowdsourced mechanism for curating the blocklist and, using the Twitter API, free services like BlockTogether.org will incorporate that blocklist into the filters on each community member's individual Twitter account.
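To make the mechanics concrete, here is a minimal sketch of what subscribing to a shared blocklist boils down to under the hood: iterate over a list of account IDs and block each one through Twitter's documented blocks/create endpoint.  The file name, CSV layout, and credentials below are placeholders of my own, not BlockTogether's actual implementation, which handles this subscription step automatically for its users.

```python
# Minimal sketch: apply a shared blocklist to one subscriber's account
# via Twitter's (v1.1) blocks/create endpoint. The CSV filename, column
# layout, and credentials are hypothetical placeholders.
import csv
import requests
from requests_oauthlib import OAuth1

BLOCK_ENDPOINT = "https://api.twitter.com/1.1/blocks/create.json"

# OAuth 1.0a credentials for the subscribing user's account (placeholders).
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

def apply_blocklist(path):
    """Block every numeric Twitter user ID listed in a shared blocklist CSV."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            user_id = row[0].strip()
            resp = requests.post(BLOCK_ENDPOINT,
                                 data={"user_id": user_id},
                                 auth=auth)
            resp.raise_for_status()  # surface rate-limit or auth errors

if __name__ == "__main__":
    apply_blocklist("community_blocklist.csv")
```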

On the surface, this may seem like a great idea; however, it's a bit more complicated.  Some blockbots have their lists of harassers curated by a single person, others rely on community-curated blocklists, and still others use algorithmically-generated blocklists.  And here's where it gets problematic.  The algorithms used to block people are predictive and situated within a particular socio-technical context.
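To illustrate what "algorithmically-generated" can mean in practice, here is a toy sketch of a predictive rule in the spirit of early auto-blockers: flag any account that follows at least two accounts on a seed list associated with a harassment campaign.  The seed handles, threshold, and follow data below are entirely hypothetical; the point is that the rule predicts affiliation from network position rather than reacting to observed behavior.

```python
# Toy illustration of an algorithmically-generated blocklist: block any
# account that follows at least THRESHOLD accounts from a seed list.
# The seed handles, threshold, and follow graph are all hypothetical.

SEED_ACCOUNTS = {"ringleader_a", "ringleader_b", "ringleader_c"}
THRESHOLD = 2

def generate_blocklist(follow_graph):
    """follow_graph maps a user's handle to the set of handles they follow."""
    blocklist = set()
    for user, follows in follow_graph.items():
        if len(follows & SEED_ACCOUNTS) >= THRESHOLD:
            blocklist.add(user)
    return blocklist

# 'carol' has never harassed anyone, but the rule blocks her anyway
# because it infers affiliation purely from who she follows.
follow_graph = {
    "alice": {"ringleader_a", "ringleader_b", "news_outlet"},
    "bob":   {"news_outlet", "friend_1"},
    "carol": {"ringleader_a", "ringleader_c"},
}
print(sorted(generate_blocklist(follow_graph)))  # -> ['alice', 'carol']
```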

When filtering and gatekeeping are automated by software agents running predictive models in this bottom-up manner, numerous pitfalls open up.  Any community is made capable of blocking any individual or type of user for any reason - and that capability is structurally embedded in the source code itself.  As a consequence, voices may be silenced in networked publics solely because a community's algorithm predicted that they would not be well-received.  This is what Geiger refers to as "automated discrimination".

Yes, these bot-based collective blocklists can provide an immensely valuable public service by blocking harassment campaigns, but they could just as easily be used (and misused) to silence contrarian viewpoints and, in turn, further transform our social media feeds into mere echo chambers that reinforce social and political biases.

It's quite the double-edged sword - which is fine, but it needs to be recognized as such.


  
