Social media and its intersections with free speech, freedom of information and privacy. An analysis
Francisco Segado-Boj; Jesús Díaz-Campo
Las redes sociales y sus intersecciones con la libertad de expresión, la libertad de información y la privacidad. Un análisis
Mídias sociais e suas interseções com liberdade de expressão, liberdade de informação e privacidade. Uma análise
ICONO 14, Revista de comunicación y tecnologías emergentes, vol. 18, no. 1, pp. 231-255, 2020
Asociación científica ICONO 14

Abstract: Nowadays, there is a growing debate on the impact of social media on society, particularly on its potential negative effects. Therefore, this research focuses on three intersections of social media and fundamental freedoms: free speech, freedom of information and privacy. We begin by analyzing the role and evolution of social networking sites and social media since their birth at the beginning of the 21st century, noting their positive aspects. Our objective is to identify malpractices related to social media and fundamental freedoms. A review of the literature is presented which outlines those malpractices. This review highlights issues such as arbitrary censorship, the boundaries of free speech, misinformation, the diversity of sources, visions and views, user consent and privacy settings, and data profiling. Finally, we propose some solutions for each of those issues.

Keywords: Social media, Fundamental freedoms, Free speech, Freedom of information.

Resumen: Actualmente existe un debate creciente sobre el impacto de las redes sociales en la sociedad. Los potenciales efectos negativos de estos medios han despertado el interés y la cautela de los académicos. Esta investigación se centra en tres intersecciones de las redes sociales y las libertades fundamentales: libertad de expresión, libertad de información y privacidad. Se analizan en primer lugar las redes sociales y su evolución desde su nacimiento a principios del siglo XXI, destacando sus aspectos positivos. Se propone así identificar las malas prácticas relacionadas con redes sociales y libertades fundamentales. Se presenta una revisión de la literatura que subraya esas malas prácticas. Esta revisión destaca asuntos como la censura arbitraria, los límites de la libertad de expresión, la desinformación, la diversidad de fuentes, visiones y perspectivas, el consentimiento del usuario y los ajustes de privacidad, y la creación de perfiles de datos. Por último se proponen algunas soluciones para cada uno de estos asuntos.

Palabras clave: Redes sociales, Libertades fundamentales, Libertad de expresión, Libertad de información, Malas prácticas.

Resumo: Atualmente, há um crescente debate sobre o impacto das redes sociais na sociedade. Os potenciais efeitos negativos desses meios de comunicação despertaram o interesse e a cautela dos acadêmicos. Esta pesquisa enfoca três interseções de redes sociais e liberdades fundamentais: liberdade de expressão, liberdade de informação e privacidade. Primeiramente, são analisadas as redes sociais e sua evolução desde o nascimento, no início do século XXI, destacando seus aspectos positivos. Assim, propõe-se identificar más práticas relacionadas às redes sociais e liberdades fundamentais. Uma revisão da literatura que destaca essas más práticas é apresentada. Esta revisão destaca questões como censura arbitrária, limites à liberdade de expressão, desinformação, diversidade de fontes, visões e perspectivas, conteúdo do usuário e configurações de privacidade e criação de perfis de dados. Por fim, são propostas algumas soluções para cada um desses assuntos.

Palavras-chave: Meios de comunicação social, Liberdades fundamentais, Liberdade de expressão, Liberdade de informação, Más práticas.


Innovación teórica


Francisco Segado-Boj
(Complutense University of Madrid), España
Jesús Díaz-Campo
(International University of La Rioja), España

Received: 04 March 2019

Revised: 22 April 2019

Accepted: 29 August 2019

Published: 01 January 2020

A first version of this paper was presented to the Sub-Committee on Media and Information Society of the Council of Europe in October 2018, as an expert report for the committee report on “Social media: social threads or threats to fundamental freedoms?”.

To cite this article:

Segado-Boj, F. & Díaz-Campo, J. (2020). Social media and its intersections with free speech, freedom of information and privacy. An analysis. Icono 14, 18(1), 231-255. doi: 10.7195/ri14.v18i1.1379

1. Introduction

The growth and diffusion of new technologies, such as Social Media, raises new ethical issues (Díaz-Campo & Segado-Boj, 2014). This essay focuses on three intersections of Social Media and fundamental freedoms:

  1. Free Speech, understood as the freedom of citizens and organizations to express themselves and spread accurate facts, information and opinions through social media venues.
  2. Freedom of Information, understood as the freedom of citizens to access accurate, fact-based and relevant information through these venues, avoiding manipulative and deceptive content which could lead to social fractures.
  3. Privacy, understood as the right of citizens to regulate the access of third parties to their personal data.

1.1. Positive aspects of Social Media

Since their birth, social media have evolved from leisure-oriented venues into platforms for social interaction, information and civic debate (Guallar, Suau, Ruiz-Caballero, Sáez, & Masip, 2016). They have turned into massive spaces where a huge portion of citizens develop at least a part of their social lives. As just one example, Facebook grew from almost 1 million users in 2004 to more than 2,000 million in 2019 (“Number of monthly active Facebook users worldwide as of 1st quarter 2019 (in millions),” 2019). In such spaces citizens are exposed to information on political issues and public affairs even if they are not actively seeking such information (Fletcher & Nielsen, 2018).

Facebook and its predecessors, such as Myspace, started as platforms where people could socialize with their friends and make new contacts (Boyd & Ellison, 2010). Thus, social networking sites can be used as tools to cultivate and grow social connections with other citizens (Lin, 2019). That means that social media can be useful not only to maintain and strengthen existing links among users but also to create new links and open up new connections (Phua, Jin, & Kim, 2017). Put plainly, social media are useful platforms not only for bringing closer people who already know each other but also for putting people in touch with new contacts and growing the number of people they know (Ellison, Steinfield & Lampe, 2007).

As they evolved, social media also became relevant venues for the distribution and circulation of information and news about political and civic events (Anspach, Jennings, & Arceneaux, 2019). Citizens are no longer the final link in the news distribution chain, but a node interlinked in a wider network (Carlson, 2016; Noguera-Vivo, 2018). Social media have thus turned into an extension of the old public sphere, a new ‘digital public sphere’ where civic and political affairs are shared and debated (Masip, Ruiz-Caballero, & Suau, 2019).

These new public spheres have strengthened the mobilization and organization of social protests and movements (Agur & Frisch, 2019; Mundt, Ross, & Burnett, 2018). Social media are a useful channel for alternative parties, minorities or outsider groups frequently silenced in major legacy media to circulate their ideas and attitudes (Bekafigo & McBride, 2013), and to channel political participation (Said-Hung & Segado-Boj, 2018). Thus, social media can provide rich, alternative sources of information about events which are silenced or marginalized in traditional legacy media and other conventional media venues (Tufekci & Wilson, 2012). In other words, they can, at least theoretically, foster and guarantee the needed plurality of voices in any given society.

In spite of the fear of an echo chamber where users are mainly exposed to conforming opinions and less able to find cross-cutting perspectives (Sunstein, 2009), social media sites have the potential to expose citizens and users to more diverse sources; as a consequence, citizens may gain a wider, more diverse picture of events and versions of stories relevant to civic participation and social events (Dubois & Blank, 2018a; Masip, Suau-Martínez, & Ruiz-Caballero, 2017). Because people are incidentally exposed on social media to news and sources they would not actively choose in other environments, social media have the potential to expose individuals to ideologically cross-cutting sources and versions of events (Bakshy, Messing & Adamic, 2015). This has a further relevant consequence: users who spend more time on social media are exposed to more news and also show a higher degree of political participation and civic engagement, not only online but also offline (Gil de Zúñiga, Jung & Valenzuela, 2012).

The main objective of this research is to identify some malpractices related to social media and fundamental freedoms and to propose some solutions for those issues.

2. Methodology

A review of the literature is presented which outlines malpractices related to social media and fundamental freedoms, such as free speech, freedom of information and privacy. This review covers articles from 2008 to 2018 that deal with those issues.

Scientific publications from the last 10 years (2008-2018) (n=148) were retrieved from the Scopus database (2018) with the search terms “Social media” AND “Rights” AND “Freedoms”.

Following Segado-Boj, Grandío & Fernández (2015), a thematic analysis was developed to identify the main debates in the literature and the most relevant topics covered in the field. Once those issues were identified, practical proposals were suggested to address the main intersections of social media with fundamental rights and freedoms.

3. Results
3.1. Free speech

Free speech has been a basic principle of democracy, especially since the very rise of the printing press (Sunstein, 1995; Martin, 2001). Every time a new medium is developed, pressures to control the creation and distribution of content through it also appear. This general principle of freedom of expression extends and applies to social media.

The main issues regarding free speech and social media are the following:

3.1.1. Arbitrary censorship

Social media have turned into a public sphere where information of political and social interest is circulated and discussed, but they also remain centrally designed and controlled systems (Plantin, Lagoze, Edwards, & Sandvig, 2018). Among other consequences, this means that companies have acquired the potential power to control the information flow as well as to hide or silence issues. Social media companies control all the information which circulates publicly through these outlets (Plantin et al., 2018).

The upside of this situation is that social media can turn into allies in order to detect, prosecute and stop content and propaganda related to criminal activities, hate speech or terrorist organizations.

The downside to the power which social media sites exert upon the content circulating through them is that, in fact, they can unilaterally remove posts and information on their sites at their will, which poses a threat to freedom of speech.

As mentioned, in some cases they exercise this power to safeguard society from unlawful or harmful material, as happens with pornography, explicit violence or other kinds of content. But social media companies are frequently accused of arbitrarily censoring content, as happened with the feminist movement FEMEN, which Facebook accused of ‘promoting pornography’ given the use of nudity in its protests (Leite & Cardoso, 2015).

3.1.2. Boundaries of free speech

There is a common understanding that free speech is not absolute but in fact it is limited by other fundamental rights. Nowadays the most controversial issues drawing attention towards these boundaries are criminal organizations and behaviors, hate speech, fake news and defamation.

Even in the United States, where free speech is guaranteed by the First Amendment to the Constitution, a social networking site might be prosecuted if it is proved to host messages and material responsible for advocating and supporting terrorist actions or terrorist organizations (Tsesis, 2017a).

However, the very concept of terrorism might be used to justify censorship and retaliation against journalists or even individual users. Following the case of the United States, legal actions against social media platforms and Internet providers can only take place in very specific scenarios where messages clearly instigate terrorist actions, recruit for criminal organizations, or promote indoctrination (Tsesis, 2017b).

3.2. Freedom of information

Social media have turned into a mainstream news provider for a significant proportion of the worldwide population. Thus, in the United States, 68% of people follow news on social media at least occasionally (Matsa & Shearer, 2018). In Europe, almost half of Spanish Internet users (48%) usually get news through Facebook (Newman, Fletcher, Kalogeropoulos, Levy, & Nielsen, 2018). Even though users do not commonly create informative content themselves -the so-called citizen journalism (Wall, 2015)-, commenting on and sharing news have turned into frequent behaviors (Masip, Guallar, Suau, Ruiz-Caballero, & Peralta, 2015; Noguera-Vivo, 2018) and an essential component of social media venues, as discussed above.

In this sense, initiatives should be taken in order to guarantee that social media are a reliable channel for distributing and getting in touch with accurate, balanced and factual information, which is also a fundamental axis of democratic societies.

3.2.1. Information quality: detecting and flagging fake news

Especially after the last US presidential election, social media (mostly Facebook) have been accused of influencing voters and results through the information they allowed to circulate. Of all the issues regarding this topic, the one which has gained the most attention is so-called ‘fake news’ (Allcott & Gentzkow, 2017). These can be defined as ‘fabricated information that mimics news media content in form but not in organizational process or intent’ (Lazer et al., 2018). This broad concept gathers pieces of content related to news satire, news parody, fabrication, manipulation, advertising, and political propaganda (Tandoc Jr, Lim & Ling, 2018).

One side effect of fake news and other information disorders, such as partisan information or low-quality journalism stained by sensationalism or editorial bias, is a generalized distrust of journalism and the media sphere in general (Ardèvol-Abreu & Gil de Zúñiga, 2017; Chadwick, Vaccari, & O’Loughlin, 2018).

In addition, social media have generalized a new kind of news consumption, more superficial and isolated (Müller, Schneiders, & Schäfer, 2016). In the traditional model, news was presented and received in a structured package, ordered in a hierarchical structure and delivered within a wide frame which allowed users to interpret and make sense of the message. In this new model, news and civic issues appear alongside non-news content and other types of messages (Groot Kormelink & Costera Meijer, 2019). The current social distribution model makes it difficult for users to attribute the messages they consume to the medium where they were published (Kalogeropoulos, Fletcher, & Nielsen, 2018).

3.2.2. Diversity of sources, topics and views

Information and news reach audiences and social media users mostly through an automatized and personalized process of selection driven by carefully designed algorithms (Bucher, 2012). Algorithms can be defined as ‘embedded programs that analyze past user data and search history in combination with other users’ searches and history to calculate digital outcomes, (...) and present consumers with feeds that represent their own unique immersive media environments’ (Cohen, 2018). Those algorithms are key parts of the technological development of social media and other Internet-based platforms, even though only 29% of people know that algorithms are responsible for the information which appears in their timelines and social media news feeds (Digital News Report, 2018). Even so, algorithmic selection is starting to be more appreciated by users than traditional editorial curation, as a response to the perception of excessive media bias, among other factors (Thurman, Moeller, Helberger, & Trilling, 2018).

Yet algorithms themselves do not guarantee balanced or unbiased information. Evidence indicates that algorithmic selection in no way guarantees a balanced purveyance of information. In fact, algorithmic filtering and priming of news can be biased by human and technological features which determine the nature, orientation or origin of news (Bozdag, 2013). In this sense it has been warned that one of the greatest perils of Artificial Intelligence might be the proliferation of biased algorithms (Knight, 2017).

The output of those algorithms is a selection of news fitted to the personal interests and preferences of each particular user. Each person’s Facebook news feed is particular and unique. In this sense, algorithmic content selection on social media has been accused of encouraging increasing individualization and inequalities (Just & Latzer, 2017). This is radically different from mass exposure to the same common media agenda and selection of topics, as happened with legacy media. This new trend in news consumption has led to a lack of exposure to diverse sources of information. The phenomenon is known as the ‘filter bubble’ or ‘echo chamber’, a metaphor which tries to illustrate the situation where users are only shown information that reinforces their prejudices and existing views (Pariser, 2012). This factor contributes to radicalization and growing partisanship in society. Yet some empirical studies suggest that this effect might be overstated and its consequences might be weaker than previously thought (Dubois & Blank, 2018b).

3.3. Privacy

As stated in the resolution “The protection of privacy and personal data on the Internet and online media” (Rihter, 2011), privacy is another fundamental right affected by digital and social technologies.

One of the issues in this regard is the use of personal information. Digital technologies allow platforms and service providers to gather and analyze abundant information about their users. In some cases, these data are employed for internal purposes (such as evaluating the performance of content or analyzing the behavior of users to improve some features, to name just a few). In other cases, that information is sold to third parties as a part of social media companies’ business model. This data can even be collected directly from other parties, as happens with ‘semantic polling’ techniques (see Brasseur, 2014).

3.3.1. User consent and privacy settings

A related issue is the kind of information which can and will be collected on those platforms. Users are mostly unaware of the data a given service can collect from their activity. One of the most meaningful examples is what Facebook labels as ‘self-censorship posts’ (Golbeck, 2013). This social media site registers and files everything the user posts and writes in its environment -every post, every comment-, even if it is later deleted and never published.

In this context, user consent is fundamental. When users join and access a social media site, they accept a series of terms and conditions of use, akin to a contract, but whose implications are rarely understood. They are usually presented to users in obscure and complex jargon, given that their primary aim is to avoid litigation rather than to clearly communicate the implications of the platforms (Pollach, 2007).

3.3.2. Data profiling

It has been revealed that Facebook allowed advertisers to address groups identified by its algorithm as ‘jew haters’ (Angwin, 2017). Other studies (Wang and Kosinski, 2018) point out that automatic data classification can be used to identify homosexual users, even though no information is explicitly provided to the platform about the user’s sexual orientation.

This profiling raises concern about the information that social media companies gather about their users and the use they make of it. But it should also raise worries because of the kind of micro-targeted advertising that could be delivered thanks to this data profiling.

4. Conclusions: Solutions and proposals
4.1. Free Speech

4.1.1. Arbitrary censorship

Social media companies should clearly comply with the legal requirements in each national setting to prevent unlawful material from spreading through their users’ profiles. As mentioned above, special attention should be paid to terrorism, criminal organizations, hate speech and defamation.

In order to avoid accusations of censorship, laws should clearly state the requirements and definition of what constitutes ‘terrorism’ or ‘hate speech’ crimes.

In the case of defamation caused by misleading content aimed at harming individuals or social groups, legislators could take initiatives to protect people directly damaged by the spread of fake news (Lazer et al., 2018). One such measure could be the figure of an Internet Ombudsman, who might help to gather and channel cases of people affected by misinformation and disinformation spread through social media.

4.1.2. Boundaries of free speech

Once the lowest bar is set by national regulations on free speech, social media providers should clearly state the requisites and cases of what they consider appropriate content. In an ideal scenario, no particular social media service provider should set additional obstacles to the circulation of content, ideas and accurate facts beyond those already set by national regulations. Lawful (no matter how controversial) political ideas and content should be permitted and under no circumstances silenced or censored on social media spaces.

Even when social media services set their own rules for content, they should provide clear and unambiguous conditions and examples of the requirements that would make content ‘unacceptable’. The more clearly and precisely those characteristics are explained and detailed, the less room there is for arbitrary interpretation and censorship by the social media company.

Last, the responsibilities of social media companies should be defined so that they collaborate with national authorities to detect and report this kind of potentially unlawful content as soon as possible.

4.2. Freedom of information
4.2.1. Information quality and fake news

As mentioned above, social media companies take no direct role in elaborating or creating the news which circulates through them. But, as enterprises which profit from the content they distribute, they should take an active part in identifying and warning their users about inaccurate or false content circulating through their venues.

This could be achieved through the implementation of automatic detection techniques, mostly through two major methods: linguistic cue approaches and network analysis approaches (Conroy, Rubin & Chen, 2016). For example, a key piece in the network analysis approach is identifying bots, that is, fake user accounts driven by software and used to initially disseminate, repost and drive attention to the fake news they deliver. Those bots can be identified by their behavior, so social media sites could be asked to develop procedures and mechanisms to exclude bot-generated messages from their ‘trending’ content or to flag their accounts and the messages they repost.
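As a minimal illustration of behavior-based bot detection, the sketch below combines a few simple signals (posting rate, repost ratio, account age). The signals and thresholds are invented for the example, not taken from any real platform's detection system:

```python
# Hypothetical sketch: flag likely bot accounts from simple behavioral
# signals. All thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    repost_ratio: float   # share of activity that is reposting others' content
    account_age_days: int

def looks_like_bot(acc: Account) -> bool:
    """Return True when several suspicious behavioral signals co-occur."""
    signals = [
        acc.posts_per_day > 50,      # inhumanly high posting rate
        acc.repost_ratio > 0.9,      # almost pure amplification
        acc.account_age_days < 30,   # recently created account
    ]
    return sum(signals) >= 2         # require at least two signals

print(looks_like_bot(Account(120, 0.95, 10)))   # True: all three signals fire
print(looks_like_bot(Account(3, 0.2, 900)))     # False: ordinary profile
```

Real network analysis approaches are far richer (graph structure, timing patterns), but the sketch shows why such accounts can be excluded from ‘trending’ counts once flagged.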

However, technological and automated solutions might provide only a partial answer to this problem, as they can never prove the authenticity of a piece of news as a whole and focus mostly on distribution patterns (Huckle & White, 2017). Encouraging collaborative and social evaluation of the sources and pieces of news distributed could be an additional feature to implement.

Collaborative and social evaluation means that the online community can rate and evaluate the accuracy and quality of the pieces of news they find. This rating can range from calculating the average quality score of the content through users’ votes (as happens with TripAdvisor reviews or Google Ratings) to the possibility of individually ‘flagging’ or warning about misleading or inaccurate content. In those cases, when several warnings are detected, the platform includes a label or text indicating that there are doubts about the factuality of the content.
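The two mechanisms just described (an average community rating, plus a ‘disputed’ label once enough warnings accumulate) can be sketched as follows. The flag threshold and label wording are assumptions for illustration:

```python
# Hypothetical sketch of collaborative evaluation: average user ratings,
# with a flag threshold that triggers a "disputed" label.
def content_label(ratings: list, flags: int, flag_threshold: int = 5) -> str:
    """Summarize community feedback on a piece of news."""
    if flags >= flag_threshold:
        # Enough individual warnings: surface a doubt label to all readers.
        return "disputed: accuracy questioned by the community"
    avg = sum(ratings) / len(ratings) if ratings else 0.0
    return f"community rating: {avg:.1f}/5"

print(content_label([5, 4, 4], flags=1))   # community rating: 4.3/5
print(content_label([3, 2], flags=7))      # disputed label shown
```

The design choice mirrors the text: individual flags do not lower the score directly; they only trigger a transparency label once several warnings are detected.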

Another solution related to the problem of the quality of information providers could be for social media sites to deliver and regulate ‘badges’ or other graphical elements identifying content linked to quality news providers, and so at least warn users about content delivered or fabricated in other venues or by other companies. This ‘badge’ could be related to the common practice of assigning a ‘blue tick’ to celebrities as a way to establish the authenticity of the profile and to clearly assure the identity of the person behind that account.

This badge could be assigned to media which can prove that: A) most of their content is news about current events and socially and civically relevant information; B) most of their staff are journalists with a university degree in Communication; C) a high percentage of their news (more than 99%) is proven to be fact-based and accurate.

This badge could also adopt a system of levels, in order to establish ‘green sites’ (which meet all three of the criteria mentioned in the above paragraph), ‘yellow sites’ (which meet two of the criteria), ‘red sites’ (which meet one criterion) and ‘black sites’ (which meet no criterion or offer no information about this issue).
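The proposed level system reduces to counting how many of the three criteria an outlet meets and mapping the count to a colour. A minimal sketch, with criterion names invented for the example:

```python
# Hypothetical sketch of the badge levels: count satisfied criteria (A, B, C
# from the text) and map the count to a colour. Criterion keys are invented.
CRITERIA = ("mostly_news_content", "degree_holding_staff", "accuracy_above_99pct")

def badge(outlet: dict) -> str:
    """Map an outlet's satisfied criteria to its badge colour."""
    met = sum(bool(outlet.get(c)) for c in CRITERIA)
    # Missing information counts as an unmet criterion, hence "black".
    return {3: "green", 2: "yellow", 1: "red"}.get(met, "black")

print(badge({"mostly_news_content": True,
             "degree_holding_staff": True,
             "accuracy_above_99pct": True}))   # green: all three criteria
print(badge({"mostly_news_content": True}))    # red: one criterion
print(badge({}))                               # black: none, or no information
```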

4.2.2. Diversity of sources, topics and views

Social media companies tend to argue that personalization of the content offered to their users is a core feature of their business model, but research shows that personalization of content is compatible with bringing a wider diversity of topics to final users (Möller et al., 2018). Moreover, algorithms can be designed and implemented to encourage plurality and diversity of views, attitudes and opinions (Bozdag & van den Hoven, 2016). This makes it possible to design algorithms which deliver public service and civic information to social media users.

Ideally, companies should undergo some external evaluation and auditing in order to determine that their algorithms are not biased and that they foster plurality and diversity of facts, points of view and opinions. Yet, as stated in the resolution ‘Internet and politics: the impact of new information and communication technology on democracy’ (Brasseur, 2014), those algorithms lack sufficient transparency to be evaluated or analyzed. But this reality should not stop their output from being evaluated. Tests could be run to detect the kind of content which each algorithm filters and selects, and the kind of media content which appears in users’ news feeds.

Even though there is no mechanism to make this recommendation mandatory, a ‘Seal of Good Practices’ could be awarded to social media providers which follow these guidelines for encouraging and fostering the selection of plural content, enabling the ideologically cross-cutting exposure whose positive outcomes were discussed in the introduction of this report.

4.3. Privacy
4.3.1. User consent and privacy settings

As mentioned above, one of the main problems with privacy is that users lack real knowledge about the information which social media services collect on them and the purposes that data collection serves. Council of Europe and European Union legislation have already stated that information provided to users of these platforms should be concise, transparent, intelligible and easily accessible. That means that clearly understandable versions of those terms and conditions should be encouraged.

According to the General Data Protection Regulation of the European Union (Art. 30), users are entitled to get the following information (among others):

  1. purposes of the data processing;

  2. description of the categories of data subjects and of the categories of personal data related to the processing;

  3. information on the categories of recipients to whom personal data have been, or will be, disclosed;

  4. information on whether transfers of personal data to third countries or international organizations have been, or will be, carried out.

One possible option to improve the readability of these terms and conditions could be the elaboration of visual summaries of the information listed in those legal documents, which has been shown to guarantee better understandability of complex information. Fox & Royne (2018) propose that companies should adopt privacy policies presented in the form of ‘nutritional labels’ and that the information should be summarized in a table, as opposed to a paragraph, to ensure readability.

That ‘label’ should answer at least the following questions: Who can see what I post? What is going to be known about me? Which data are you going to collect about me? What are you going to do with my data? What are you going to do with my content? Who can contact or reach me?
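Rendering the questions above as a two-column table, in the spirit of a ‘nutritional label’, could look like the sketch below. The answers are placeholder examples, not any real platform's policy:

```python
# Hypothetical sketch of a privacy "nutritional label": the six questions
# from the text rendered as a short table. Unanswered questions are shown
# explicitly, rather than buried in legalese.
LABEL_QUESTIONS = [
    "Who can see what I post?",
    "What is going to be known about me?",
    "Which data are you going to collect about me?",
    "What are you going to do with my data?",
    "What are you going to do with my content?",
    "Who can contact or reach me?",
]

def render_label(answers: dict) -> str:
    """Render one table row per question; flag missing answers."""
    rows = [f"{q:<48} | {answers.get(q, 'not disclosed')}"
            for q in LABEL_QUESTIONS]
    return "\n".join(rows)

print(render_label({"Who can see what I post?": "Friends only (default)"}))
```

Forcing every question to appear, including those the provider leaves unanswered, is what makes the table format more honest than a paragraph of terms and conditions.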

Ideally, users should not only be able to get this information, but also to regulate and adapt the answers to those questions when using a social networking site. In the social media context privacy needs to be understood not only from an individual perspective, but also as a networked and negotiated practice (Marwick & boyd, 2014).

In this sense, it could be required that privacy settings always default to the highest restriction level. Most users never change those settings, so social media companies set the lowest restriction level in order to collect as much information as possible. Making the most restrictive settings the mandatory default would mean the highest protection for every user, not only those with the highest digital skills or prior concerns about privacy.

Article 17 of the GDPR gives effect to data subjects’ requests to have data erased or deleted. That means that users should be able to ask for information to be deleted not only from public posts but also from the platform provider’s servers. This implies that if users once posted information about themselves that they no longer want to be known, the platform should also erase that information from its servers and no longer process it or include it in the users’ profiles and aggregated information. There should be no distinction between ‘visible’ and ‘invisible’ information. That would also stop companies like Facebook from collecting users’ activity on other webpages while the social networking site is open in a different browser tab.

4.3.2. Data profiling

Users should have the right to oversee, evaluate and, ideally, refute this profiling. Once again, the opacity of social media platforms’ algorithms makes this a difficult task. Even so, governments could encourage these companies to include a privacy feature where users can check all the ‘micro categories’ they have been labelled with and choose, if they wish, which categories to be removed from.

Another solution for microtargeted advertising would be to include a feature in both promoted publications (that is, paid or advertised) and organic-reach publications (those seen by the user outside any promotional campaign). This feature (which could be called ‘Why am I seeing this?’) should provide users with all the information that has been used to offer them that post or piece of content. It should also let users delete or query any information or data the platform is using to filter and promote content on the basis of the data it holds about them.

For example, if a given user is seeing an advertisement about ‘animal adoption’, he or she should be able to ask why that post is appearing in his or her newsfeed. The ‘Why am I seeing this?’ feature should then list the information on which that filtering and promotion has been based, together with the ‘categories’ (if any) in which the user appears or which the advertiser targeted. In this case, if one of the reasons was that the user is listed as ‘vegan’ because of his or her activity posting vegetarian recipes, that reason should be clearly stated.
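The vegan-recipes example can be sketched as follows (a hypothetical illustration: the data structures, category names and functions are assumptions, since real platform profiling is opaque):

```python
# Hypothetical sketch of a 'Why am I seeing this?' feature: for each
# targeted post, list the inferred categories that matched and the
# activity each category was derived from, and let users opt out.
user_profile = {
    "inferred_categories": {"vegan", "pet_owner"},
    "signals": {
        "vegan": "posted vegetarian recipes",
        "pet_owner": "liked animal shelter pages",
    },
}

def why_am_i_seeing_this(targeted_categories: set, profile: dict) -> list:
    """Return (category, originating activity) pairs behind the targeting."""
    matches = targeted_categories & profile["inferred_categories"]
    return [(c, profile["signals"].get(c, "unknown signal")) for c in sorted(matches)]

def remove_category(category: str, profile: dict) -> None:
    """Let the user opt out of an inferred 'micro category'."""
    profile["inferred_categories"].discard(category)
    profile["signals"].pop(category, None)

# An 'animal adoption' ad targeted at vegans:
print(why_am_i_seeing_this({"vegan"}, user_profile))
# → [('vegan', 'posted vegetarian recipes')]
remove_category("vegan", user_profile)  # the ad should no longer match
```

The key requirement is the second half: the explanation is only meaningful if the user can act on it by removing the inferred category.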

Following the right to restriction of processing, users are entitled to temporarily restrict the processing of their information for a period of time. It could likewise be understood that citizens have the right to restrict the kind of information that is processed about them. Accordingly, this feature should appear in an accessible way, ideally as a button on every post, rather than buried in remote or hidden corners of the ‘privacy settings’ menu, so that it can be used by virtually every user, not only by those with greater digital literacy or stronger prior concerns about their privacy.

Discussion

Privacy has been one of the most researched topics in social media research in the last decade (Liu et al., 2017; Stoycheff, Liu, Wibowo, & Nanni, 2017). Privacy concerns are among the most common and influential stress triggers for social media users (Fox & Moreland, 2015). Privacy issues and how they are stated are also one of the main deficiencies in social media companies’ corporate social responsibility (Bauer, 2014). Adopting the proposals suggested in the conclusion section might help users to have richer and more fruitful experiences, and also improve social media companies’ image and reputation.

Echo chambers and viral misinformation have been shown to be correlated (Törnberg, 2018). A failure to provide factual information and cross-cutting perspectives could severely damage the democratic quality of contemporary societies (Gil De Zúñiga, Huber, & Strauß, 2018). Given the role that users play in disseminating content through these platforms (Noguera-Vivo, 2018), another good practice suggested by Pariser (2012) is worth noting: social media companies should widen the range of ‘reaction buttons’, as Facebook did some years ago when it introduced the ‘Love’, ‘Wow’ and ‘Sad’ reactions. To give visibility to relevant issues with low emotional content, an ‘Important’ button could be introduced. This measure would enhance the reach of relevant content and help it stand out above the irrelevant and meaningless content shared on emotional impulse.

Finally, freedom of speech is the least mentioned freedom in the literature on social media, and when it does appear, it is addressed from the perspective of the boundaries of that freedom (Tsesis, 2017a; Tsesis, 2017b). Even though expressing personal opinions on public matters on the Internet helps citizens and organizations to find legitimacy in the social discourse (Jöuet, 2009), and even though social media are becoming a new public sphere (Masip et al., 2019), the literature shows a gap in this respect. Despite our proposals to guarantee this fundamental right, further research is needed in this regard.

References
Agur, C. & Frisch, N. (2019). Digital Disobedience and the Limits of Persuasion: Social Media Activism in Hong Kong’s 2014 Umbrella Movement. Social Media + Society, 5(1), 2056305119827002. http://doi.org/10.1177/2056305119827002
Allcott, H. & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), 211–236. http://doi.org/10.1257/jep.31.2.211
Angwin, J. (2017). “Facebook’s Secret Censorship Rules Protect White Men from Hate Speech But Not Black Children.” ProPublica. Retrieved from: https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms
Anspach, N. M., Jennings, J. T. & Arceneaux, K. (2019). A little bit of knowledge: Facebook’s News Feed and self-perceptions of knowledge. Research & Politics, 6(1), 2053168018816189. http://doi.org/10.1177/2053168018816189
Ardèvol-Abreu, A. & Gil de Zúñiga, H. (2017). Effects of Editorial Media Bias Perception and Media Trust on the Use of Traditional, Citizen, and Social Media News. Journalism & Mass Communication Quarterly, 94(3), 703–724. http://doi.org/10.1177/1077699016654684
Bakshy, E., Messing, S. & Adamic, L. A. (2015). “Exposure to ideologically diverse news and opinion on Facebook”. Science, 348 (6239), 1130-1132. Retrieved from: http://science.sciencemag.org/content/348/6239/1130
Bauer, T. (2014). The Responsibilities of Social Networking Companies: Applying Political CSR Theory to Google, Facebook and Twitter. Emerald insight. Discover Journals, Books & Case Studies, (pp. 259–282). http://doi.org/10.1108/S2043-9059(2014)0000006005
Bekafigo, M. A. & McBride, A. (2013). Who Tweets About Politics? Social Science Computer Review, 31(5), 625–643. http://doi.org/10.1177/0894439313490405
Bozdag, E. (2013). “Bias in algorithmic filtering and personalization”. Ethics and information technology, 15 (3), 209-227. Retrieved from: https://link.springer.com/article/10.1007/s10676-013-9321-6
Bozdag, E. & van den Hoven, J. (2015). “Breaking the filter bubble: democracy and design”. Ethics and Information Technology, 17 (4), 249-265. Retrieved from: https://link.springer.com/article/10.1007/s10676-015-9380-y
Boyd, D. M. & Ellison, N. B. (2010). Social network sites: definition, history, and scholarship. IEEE Engineering Management Review, 38(3).
Brasseur, A. (2014). Internet and politics: the impact of new information and communication technology on democracy Report | Doc. 13386. Parliamentary Assembly – Council of Europe. Committee on Culture, Science, Education and Media. Retrieved from: http://assembly.coe.int/nw/xml/XRef/Xref-XML2HTML-en.asp?fileid=20329&lang=en
Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. http://doi.org/10.1177/1461444812440159
Carlson, M. (2016). Embedded Links, Embedded Meanings. Social media commentary and news sharing as mundane media criticism. Journalism Studies, 17(7), 915–924. http://doi.org/10.1080/1461670X.2016.1169210
Chadwick, A., Vaccari, C. & O’Loughlin, B. (2018). Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing. New Media and Society, 20(11), 4255–4274. http://doi.org/10.1177/1461444818769689
Cohen, J. N. (2018). Exploring Echo-Systems: How Algorithms Shape Immersive Media Environments. Journal of Media Literacy Education, 10(2), 139–151.
Conroy, N. J., Rubin, V. L. & Chen, Y. (2016). “Automatic deception detection: Methods for finding fake news”. Proceedings of the Association for Information Science and Technology, 52(1), 1-4. Retrieved from: https://dl.acm.org/citation.cfm?id=2857152
Díaz-Campo, J. & Segado-Boj, F. (2014). La adaptación de los códigos de ética periodística europeos a Internet y las TIC. Ámbitos, 16. Disponible en: https://idus.us.es/xmlui/handle/11441/66656
Dubois, E. & Blank, G. (2018a). The echo chamber is overstated: the moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. http://doi.org/10.1080/1369118X.2018.1428656
Ellison, N. B., Steinfield, C. & Lampe, C. (2007). “The benefits of Facebook “friends:” Social capital and college students’ use of online social network sites”. Journal of Computer & Mediated Communication, 12 (4), 1143-1168. https://doi.org/10.1111/j.1083-6101.2007.00367.x
Fletcher, R. & Nielsen, R. K. (2018). Are people incidentally exposed to news on social media? A comparative analysis. New Media and Society, 20(7), 2450–2468. http://doi.org/10.1177/1461444817724170
Fox, J. & Moreland, J. J. (2015). The dark side of social networking sites: An exploration of the relational and psychological stressors associated with Facebook use and affordances. Computers in Human Behavior, 45, 168–176. http://doi.org/10.1016/J.CHB.2014.11.083
Fox, A. K. & Royne, M. B. (2018). “Private information in a social world: assessing consumers´ fear and understanding of social media privacy”. Journal of Marketing Theory and Practice, 26 (1-2), 72-89. https://doi.org/10.1080/10696679.2017.1389242
Gil De Zúñiga, H., Huber, B. & Strauß, N. (2018). Social Media and Democracy. El Profesional de La Información, 27(6), 1172–1182. https://doi.org/10.3145/epi.2018.nov.01
Gil de Zúñiga, H., Jung, N. & Valenzuela, S. (2012). Social media use for news and individuals’ social capital, civic engagement and political participation. Journal of Computer-Mediated Communication, 17 (3), 319-336. https://doi.org/10.1111/j.1083-6101.2012.01574.x
Golbeck, J. (2013). Facebook wants to know why you didn’t publish that status update you started writing. Future tense. Retrieved from: https://slate.com/technology/2013/12/facebook-self-censorship-what-happens-to-the-posts-you-dont-publish.html
Groot Kormelink, T. & Costera Meijer, I. (2019). Material and sensory dimensions of everyday news use. Media, Culture & Society, 0163443718810910. http://doi.org/10.1177/0163443718810910
Guallar, J., Suau, J., Ruiz-Caballero, C., Sáez, A. & Masip, P. (2016). Re-dissemination of news and public debate on social networks. El Profesional de la Información, 25(3). http://doi.org/10.3145/epi.2016.may.05
Huckle, S. & White, M. (2017). “Fake news: a technological approach to proving the origins of content, using blockchains”. Big data, 5 (4), 356-371. https://doi.org/10.1089/big.2017.0071
Jöuet, J. (2009). The Internet as a New Civic Form. Javnost - The Public, 16(1), 59–72. http://doi.org/10.1080/13183222.2009.11008998
Just, N. & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture and Society, 39(2), 238–258. http://doi.org/10.1177/0163443716643157
Kalogeropoulos, A., Fletcher, R. & Nielsen, R. K. (2018). News brand attribution in distributed environments: Do people know where they get their news? New Media & Society, 1461444818801313. http://doi.org/10.1177/1461444818801313
Knight, W. (2017). Google advierte: el verdadero peligro de la IA no son los robots asesinos sino los algoritmos sesgados. MIT Technology Review. Retrieved from https://www.technologyreview.es/s/9610/google-advierte-el-verdadero-peligro-de-la-ia-no-son-los-robots-asesinos-sino-los-algoritmos
Lazer, D. M. et al (2018). The science of fake news. Science, 359 (6380), 1094-1096.
Leite, R. D. A. & Cardoso, G. S. (2015). The Arbitrariness of Censorship Parameters on Facebook and the Prohibition of Femen’s Page. Artemis, 19, 137-143.
Lin, J. H. T. (2019). Strategic Social Grooming: Emergent Social Grooming Styles on Facebook, Social Capital and Well-Being. Journal of Computer-Mediated Communication, 24(3), 90–107. http://doi.org/10.1093/jcmc/zmz002
Liu, J. S., Ho, M. H. C., Lu, L. Y. Y., Sung, Y., Provetti, A. & Christakis, D. (2017). Recent Themes in Social Networking Service Research. PLOS ONE, 12(1), e0170293. http://doi.org/10.1371/journal.pone.0170293
Martin, R. W. (2001). The Free and Open Press: The Founding of American Democratic Press Liberty. NYU Press.
Marwick, A. E. & Boyd, D. (2014). Networked privacy: How teenagers negotiate context in social media. New Media & Society, 16(7), 1051–1067. http://doi.org/10.1177/1461444814543995
Masip, P., Guallar, J., Suau, J., Ruiz-Caballero, C. & Peralta, M. (2015). News and social networks: audience behavior. El Profesional de La Información, 24(4), 363–370. http://doi.org/10.3145/epi.2015.jul.02
Masip, P., Ruiz-Caballero, C. & Suau, J. (2019). Active audiences and social discussion on the digital public sphere. Review article. El Profesional de La Información, 28(2). http://doi.org/10.3145/epi.2019.mar.04
Masip, P., Suau-Martínez, J. & Ruiz-Caballero, C. (2017). Questioning the Selective Exposure to News: Understanding the Impact of Social Networks on Political News Consumption. American Behavioral Scientist, 000276421770858. http://doi.org/10.1177/0002764217708586
Matsa, K. E. & Shearer, E. (2018). News Use Across Social Media Platforms 2018. Retrieved March 4, 2019, from http://www.journalism.org/2018/09/10/news-use-across-social-media-platforms-2018/
Möller, J., Trilling, D., Helberger, N. & van Es, B. (2018). “Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity”. Information, Communication & Society, 21 (7), 959-977. Retrieved from: https://www.tandfonline.com/doi/abs/10.1080/1369118X.2018.1444076
Müller, P., Schneiders, P. & Schäfer, S. (2016). Appetizer or main dish? Explaining the use of Facebook news posts as a substitute for other news sources. Computers in Human Behavior, 65, 431–441. http://doi.org/10.1016/j.chb.2016.09.003
Mundt, M., Ross, K. & Burnett, C. M. (2018). Scaling Social Movements Through Social Media: The Case of Black Lives Matter. Social Media + Society, 4(4), 2056305118807911. http://doi.org/10.1177/2056305118807911
Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D. A. L. & Nielsen, R. K. (2018). Digital News Report. Retrieved from http://media.digitalnewsreport.org/wp-content/uploads/2018/06/digital-news-report-2018.pdf?x89475
Noguera-Vivo, J. M. (2018). You get what you give : Sharing as a new radical challenge for journalism. Communication & Society, 31(4), 147–158. http://doi.org/10.15581/003.31.4.147-158
Number of monthly active Facebook users worldwide as of 1st quarter 2019 (in millions). (2019). Retrieved June 24, 2019, from https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/
Pariser, E. (2012). The Filter Bubble. London: Viking.
Phua, J., Jin, S. V. & Kim, J. (Jay). (2017). Uses and gratifications of social networking sites for bridging and bonding social capital: A comparison of Facebook, Twitter, Instagram, and Snapchat. Computers in Human Behavior, 72, 115–122. http://doi.org/10.1016/j.chb.2017.02.041
Plantin, J. C., Lagoze, C., Edwards, P. N. & Sandvig, C. (2018). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society, 20(1), 293–310. http://doi.org/10.1177/1461444816661553
Pollach, I. (2007), “What’s Wrong with Online Privacy Policies?” Communications of the ACM, 50 (9), 103–108. Retrieved from: https://www.cs.stevens.edu/~nicolosi/classes/17fa-cs578/ref3-2.pdf
Rihter, A. (2011). “The protection of privacy and personal data on the Internet and online media report”. Doc 12695. Parliamentary Assembly – Council of Europe. Former Committee on Culture, Science and Education. Retrieved from: http://assembly.coe.int/nw/xml/XRef/Xref-DocDetails-en.asp?FileID=13151&lang=en
Said-Hung, E. & Segado-Boj, F. (2018). Social Media Mobilization in Venezuela: A Case Study. Social and Economic Studies, 67 (4), 235-259
Segado, F., Grandío, M. M. & Fernández-Gómez, E. (2015). Social media and television: a bibliographic review on the Web of Science. El profesional de la información, 24(3), 227-234. http://dx.doi.org/10.3145/epi.2015.may.02
Stoycheff, E., Liu, J., Wibowo, K. A. & Nanni, D. P. (2017). What have we learned about social media by studying Facebook? A decade in review. New Media & Society, 19(6), 968–980. http://doi.org/10.1177/1461444817695745
Sunstein, C. (1995). Democracy and the problem of free speech. Publishing Research Quarterly, 11(4), 58-72.
Sunstein, C. R. (2009). Republic.com 2.0. Princeton, NJ: Princeton University Press.
Tandoc Jr, E. C., Lim, Z. W. & Ling, R. (2018). “Defining “Fake News” A typology of scholarly definitions”. Digital Journalism, 6 (2), 137-153. https://doi.org/10.1080/21670811.2017.1360143
Thurman, N., Moeller, J., Helberger, N. & Trilling, D. (2018). My Friends, Editors, Algorithms, and I: Examining audience attitudes to news selection. Digital Journalism, 1–23. http://doi.org/10.1080/21670811.2018.1493936
Törnberg, P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLOS ONE, 13(9), e0203958. http://doi.org/10.1371/journal.pone.0203958
Tsesis, A. (2017a). “Social Media Accountability for Terrorist Propaganda”. Fordham L. Rev., 86, 605. Retrieved from: https://heinonline.org/HOL/LandingPage?handle=hein.journals/flr86&div=28&id=&page=
Tsesis, A. (2017b). “Terrorist speech on social media”. Vand. L. Rev., 70, 651. Retrieved from: https://heinonline.org/HOL/LandingPage?handle=hein.journals/vanlr70&div=17&id=&page=
Tufekci, Z. & Wilson, C. (2012). Social Media and the Decision to Participate in Political Protest: Observations From Tahrir Square. Journal of Communication, 62(2), 363–379. http://doi.org/10.1111/j.1460-2466.2012.01629.x
Wall, M. (2015). Citizen Journalism: A retrospective on what we know, an agenda for what we don’t. Digital Journalism, 3(6), 797–813. http://doi.org/10.1080/21670811.2014.1002513
Wang, Y. & Kosinski, M. (2018). “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images”. Journal of personality and social psychology, 114 (2), 246. Retrieved from: https://osf.io/hv28a/download/?format=pdf