Desafección política y toxicidad en X: una propuesta de clasificación en hashtags electorales
Desafeição política e toxicidade em X: uma proposta de classificação em hashtags eleitorais
1 Autonomous University of Bucaramanga, Colombia
2 Autonomous University of Bucaramanga, Colombia
* Professor and Researcher, Social Communication Program, Autonomous University of Bucaramanga, Colombia. Email: ybriceno@unab.edu.co
** Director, Smart Regions Technological Development Center, Autonomous University of Bucaramanga, Colombia. Email: mcalderon@unab.edu.co
Received: 12/12/2024; Revised: 16/12/2024; Accepted: 07/03/2025; Published: 19/09/2025
Translation to English: Enago.
To cite this article: Briceño-Romero, Ysabel; & Calderón-Benavides, Liliana (2025). Political Disaffection and Toxicity on X: A Proposal to Classify Electoral Hashtags. ICONO 14. Scientific Journal of Communication and Emerging Technologies, 23(1): e2234. https://doi.org/10.7195/ri14.v23i1.2234
Abstract
This study presents an exploratory review aimed at identifying political disaffection in discourse on Twitter (now X) in electoral contexts, based on the toxicity classifications provided by the Perspective tool. This pilot study adopts a predominantly quantitative and descriptive approach to examine the trend of messages classified as toxic under the hashtag #Elecciones2022 (#2022election). The analysis is based on a dataset of 115,493 tweets referencing electoral contexts in Costa Rica, Colombia, Spain, Mexico, and Peru. The corpus underwent automated toxicity scoring using the Perspective tool, followed by a manual classification of a subsample based on categories associated with political disaffection. The discussion suggests that, in this case, the discursive classification of messages exhibiting severe toxicity is an effective method for identifying political disaffection. This is based on the presence of linguistic evidence reflecting explicit negative sentiments, and, in most instances, a clearly identifiable target of discontent. However, the variation in sample sizes across different national contexts highlights the need to replicate the methodology to draw more conclusive findings. A long-term longitudinal analysis would facilitate a deeper understanding of the political culture within specific contexts, providing a foundation for future research to generate new insights and address existing gaps in the identification of political disaffection in digital discourse.
Keywords
Political disaffection; Political communication; Hashtag; Elections; Democracy; Digital discourse.
Resumen
Este artículo resume una revisión exploratoria para la detección discursiva de desafección política en Twitter (ahora X) en contextos electorales, partiendo de la clasificación de toxicidad que ofrece la herramienta Perspective. En esta experiencia piloto se aborda, desde un enfoque predominante cuantitativo y descriptivo, la tendencia de mensajes considerados tóxicos dentro de la etiqueta #Elecciones2022 en un universo de 115.493 tuits descargados, cuyos contenidos aluden a contextos relacionados con Costa Rica, Colombia, España, México y Perú. El corpus fue sometido a una puntuación automatizada de toxicidad con la herramienta Perspective y luego se realizó una clasificación manual a los contenidos de una submuestra, según categorías relacionadas con la desafección política. La discusión sugiere que la tipificación discursiva de mensajes con toxicidad severa es eficiente en este caso para la detección de la desafección política, entendiendo que existen evidencias lingüísticas de sentimientos negativos explícitos y, en su mayoría, se reconoce el centro sobre el cual recae el malestar, aunque la diferencia de la muestra en cada contexto país exige réplicas de la metodología para ideas concluyentes. Una revisión longitudinal a largo plazo podría contribuir a la comprensión de la cultura política de contextos específicos, aspecto que puede ser aprovechado en futuras investigaciones para empezar a llenar vacíos con nuevas miradas para la detección de la desafección política en discursos digitales.
Palabras clave
Desafección política; Comunicación política; Hashtag; Elecciones; Democracia; Discurso digital.
Resumo
Este artigo resume um percurso conceptual e metodológico para a deteção discursiva do descontentamento político no Twitter (agora X) em contextos eleitorais, a partir da classificação da toxicidade. Nesta experiência piloto, a partir de uma abordagem predominantemente quantitativa, analisa-se a tendência das mensagens consideradas tóxicas dentro do rótulo #Elecciones2022 num universo de 115.493 tweets descarregados, cujos conteúdos se referem a contextos relacionados com a Costa Rica, Colômbia, Espanha, México e Peru. O corpus foi submetido a um score de toxicidade automatizado com a ferramenta Perspective e de seguida foi realizada uma classificação manual do conteúdo de uma amostra, de acordo com o tipo de desafeto. Entendido como um quadro discursivo emocional e negativo em relação ao sistema político, este caso propõe categorias de análise como o descontentamento em relação aos atores políticos, às instituições democráticas, aos traços culturais ou à própria democracia, no meio de conversas em espanhol em X, contextualizadas em processos eleitorais. A discussão sugere o potencial das redes sociais. Uma revisão longitudinal de longo prazo poderá contribuir para a compreensão da cultura política de contextos específicos, aspeto que poderá ser utilizado em futuras pesquisas para começar a preencher lacunas com novas perspetivas sobre abordagens como a espiral do cinismo, com os eixos: construção de notícias digitais, ódio, toxicidade e descontentamento.
Palavras-chave
Insatisfação política; Comunicação política; Hashtag; Eleições; Democracia; Discurso digital.
Political disaffection is understood as a subjective state of discontent that can manifest across three possible dimensions: the political process, political stakeholders, and democratic institutions (Torcal & Montero, 2006). A high level of political disaffection, caused by elevated levels of corruption and dissatisfaction with the government, often gives rise to new political movements that convert public discontent into electoral capital.
Given the potential impact of political disaffection—as a negative sentiment—on support for democracy, continuous efforts have been made since the mid-20th century to measure it, primarily through surveys, interviews, and focus groups. However, with the rise of social media, certain studies have sought to explore the expression of political culture in digital environments, particularly by identifying sentiments—both negative and positive—toward democracy as a means of interpreting discourse on platforms such as Twitter (Briceño-Romero et al., 2022; Arcila-Calderón et al., 2022). This shift is driven by the potential influence of electoral contexts in stimulating the publication of content that reflects the dimensions associated with political disaffection. Within the framework of "citizen sociolinguistics," authors argue that the emerging digital language in social media could offer a glimpse into some historically relevant sociocultural beliefs and attitudes within sociopolitical discourse (Bridges, 2021). This study serves to contribute to these ongoing efforts.
It is therefore proposed that political disaffection could be identified in messages qualified as toxic in electoral contexts, given advances in automated classification tools that enhance linguistic recognition of dissatisfaction in certain sociopolitical contexts. The starting point of this research is thus the potential of discourse to express political disaffection, insofar as linguistic constructions identifiable as negative toward any of its dimensions (democratic institutions, political actors, or democracy itself) are generated on Twitter (X), given the significant role of language in conveying sentiment.
Building on advances in machine learning to identify negative online content, we suggest exploring the classification of messages tagged with terms related to electoral processes in Spanish on the social network X, using Perspective, a free tool developed by Jigsaw (an incubator within Google) designed to classify discourse based on its potential toxicity in a conversation.
Accordingly, the following research questions were developed:
- What is the trend of toxicity in content associated with the #Elecciones2022 hashtag?
- How feasible is it to recognize political disaffection in linguistic content classified as severely toxic by the Perspective tool?
Within this socio-technical context, the study proposed a pilot test to explore the identification of political disaffection in messages regarded as severely toxic, using the hashtag #Elecciones2022. The majority of the content, written in Spanish, pertains to the electoral contexts of Colombia, Costa Rica, Spain, and Mexico in 2022.
The discourse on democracy in the 21st century has centered around citizens’ rejection of political stakeholders and institutions, which is seen as a potential threat to democratic stability (PNUD, 2022). According to Torcal and Montero (2006), "...citizens seem to have become even more critical of the way democracy operates, the performance of political institutions and the daily activities of political stakeholders" (p. 4), which triggers a paradoxical defense of democracy, but with high levels of discontent toward its institutions. The latter situation has been studied under the notion of political disaffection.
As part of a dimension within the broader concept of political culture (Almond & Verba, 1963) or in relation to the levels of satisfaction with democracy (Valgarðsson & Devine, 2022), political disaffection reflects citizens’ expectations of the political system. It is typically expressed as a negative sentiment, often manifesting as feelings of powerlessness, cynicism, lack of trust (Torcal & Montero, 2006), and even anger (PNUD, 2022). Although political disaffection has been approached from different perspectives, there is agreement in identifying it as a subjective condition of dissatisfaction with three possible dimensions: the political process, political actors, and democratic institutions (Torcal & Montero, 2006). Another definition, proposed by Megías Collado and Moreno (2022), describes political disaffection as a negative sentiment toward politicians, political processes, and a system perceived as incapable of addressing citizens’ needs and demands.
Political disaffection refers to a subjective feeling that is often closely tied to contextual factors and can be understood as an additional element of political culture (Almond & Verba, 1963). Therefore, it could be explained as a series of codes gradually and structurally embedded within the sociopolitical environment. However, the rapid flow of information could help generate conjunctural frameworks of the context, potentially altering prevailing trends, as advocated by Megías Collado (2020).
The measurement of political disaffection has been widely discussed and studied, though not always with consensus, primarily due to the challenges of accounting for diverse contextual factors. Nonetheless, various methodological approaches have contributed to broadening the analytical perspective. The levels of trust in democracy and political disaffection have fluctuated over time. Fontaneda and Sánchez-Vítores (2018) assess the complexity and diversity of the approaches used to study political disaffection. The development of the subject has led to various strategies for addressing citizens’ political attitudes toward democratic institutions and values. The phrasing of questions and methods of engaging with the population have become more nuanced, leading to the emergence of sub-indices of political detachment and institutional disaffection. These sub-indices serve as operational expressions of political disaffection (Megías Collado & Moreno, 2022) within a negative scale of satisfaction with democracy (Valgarðsson & Devine, 2022) or political trust and its associated scales (Marien, 2011). Table 1 summarizes the efforts to address the measurement of the issue in surveys.
Table 1. Summary of different ways of dealing with political (dis)trust and (dis)affection, based on the sources provided
| Variable | Technique | Question | Focus/Indicator | Organization / Source |
|---|---|---|---|---|
| Political (dis)trust | Survey (Likert scale) | Do you think you can trust the Washington government to do the right thing? Is the government wasting a lot of money? Is the government controlled by a small number of large, self-serving interests? Are most politicians corrupt? Do politicians know what they are doing? | Political stakeholders | American National Election Studies (ANES), 1960s |
| Political (dis)trust | Survey (numerical scale) | On a scale of one to ten, how much do you trust...? The Parliament; the legal system; the armed forces; political parties; candidates and individuals who exercise political power. | Institutions; political stakeholders | European Social Survey (ESS); Eurobarometer |
| Political (dis)trust | Survey (Likert scale) | To what extent do you respect the political institutions of (country)? To what extent do you think that the basic rights of citizens are well protected by the political system of (country)? To what extent are you proud to live under the political system of (country)? | Institutions; country culture | |
| Political system preference | Survey (premises with Likert-scale responses) | With which of the following phrases do you agree the most? "Democracy is more desirable than any other form of government"; "Under certain circumstances, an authoritarian government may be preferable to a democratic one"; "People like me don’t care whether a regime is democratic or undemocratic." | Democracy support; indifference to democracy; support for authoritarianism | |
| Political disaffection | Survey (Likert-scale responses) | How interested would you say you are in politics? How often does politics seem so complicated that you struggle to understand what is going on? | Political disinterest | |
| Political disaffection | Survey (Likert-scale responses) | Do you believe that politicians generally care about the opinions of people like you? | Disaffection with political stakeholders | |
| Political disaffection | Survey (numerical-scale responses) | On a scale of 0–10, how much do you personally trust each of the institutions I mention? 0 means you do not trust the institution at all, whereas 10 means you have complete trust in it. | Institutional disaffection | |
Source: own creation, based on various authors.
One of the actions fostered by written discourse within social media communities is the publication of toxic messages. This phenomenon has sparked scholarly reflection and led to agreements and policy decisions—both public and organizational—aimed at addressing this form of discursive expression, which is distributed and decentralized within digital environments. By the end of the 20th century, a complex construction of public opinion emerged, shaped by premises linking media coverage—often negative—of public affairs to growing public distrust and disengagement from politics and government. This development reinforced hypotheses surrounding the so-called “spiral of cynicism” (Cappella & Jamieson, 1997). However, in light of the distributed nature of information and the emergence of social media, these approaches fall short in addressing the relationship between news framing and hate speech in digital environments.
Hate speech is commonly understood as "direct or indirect, verbal or physical violence involving clearly singling out a person or group of people because of what they represent in society" (Cucurull & Navarro, 2023). In the digital environment, this negative expression is recognized as verbal misconduct and toxicity on online social networks (OSN) by Qayyum et al. (2023).
In an effort to address the challenges arising from toxic content on social media, computational methods for detecting it on digital platforms have advanced, both within academic research and among the major organizations that manage these environments.
Recent automated discourse analyses on X have focused on negative content targeting specific populations, including topics such as migration (Arcila-Calderón et al., 2020; Arcila-Calderón et al., 2022); gender bias and feminism (Piñeiro-Otero & Martínez-Rolán, 2021); and the pandemic context (Raghad et al., 2020). Within the realm of political discourse, notable studies have focused on ideology (Amores et al., 2021; Díez-Gutiérrez et al., 2022) and partisan polarization during election periods, as seen in the United States (Grimminger & Klinger, 2021) and Spain (Herrero Izquierdo et al., 2022). Bollen et al. (2021) proposed a large-scale analysis of six mood states on X (tension, depression, anger, vigor, fatigue, confusion) to model predictive emotional trends associated with different sociopolitical contexts. In their study, Horta Ribeiro et al. (2018) examined the characteristics of "hateful users."
Among the emerging tools for identifying negative speech on Twitter is the free, Google-affiliated Perspective API, which uses machine learning to detect text constructions classified as potentially toxic.
Perspective is a supervised scoring system for text that generates probability scores ranging from 0 to 1 for attributes such as severe toxicity, insult, profanity, identity attack, threats, and sexually explicit content, in multiple languages including Spanish. The severe toxicity category has made the most significant contribution to defining hate speech and is one of the best-validated models for labeling abusive or hateful content (Lees et al., 2022). Given the training results of the toxicity measure, media outlets such as The New York Times and El País have applied this tool to identify and moderate comments (for details, see: http://perspectiveapi.com/case-studies/).
As explained on the tool’s official website, toxicity is one of the attributes used to measure online hate, understood as "a rude, disrespectful or unreasonable comment that is likely to make you withdraw from a discussion." Scores range from 0 to 1; for automated discourse classification in social science research, the Perspective API guidelines recommend setting thresholds between 0.7 and 0.9—the highest scores—to classify severe toxicity.
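As a rough illustration (our sketch, not the authors' code), a Perspective request and the score bands used in this study can be expressed as follows. The endpoint and payload shape follow the public Perspective API documentation; the band labels are our own shorthand for the thresholds described above.

```python
# Public endpoint documented at developers.perspectiveapi.com
PERSPECTIVE_ENDPOINT = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)

def build_request(text: str, lang: str = "es") -> dict:
    """Build the JSON body for a Perspective `comments:analyze` call."""
    return {
        "comment": {"text": text},
        "languages": [lang],
        "requestedAttributes": {"TOXICITY": {}, "SEVERE_TOXICITY": {}},
    }

def classify(score: float) -> str:
    """Map a 0-1 Perspective score onto the bands used in this study."""
    if score >= 0.7:   # 0.7-0.9: severe toxicity
        return "severe toxicity"
    if score >= 0.3:   # 0.3-0.6: toxicity
        return "toxicity"
    return "below threshold"
```

Sending `build_request(...)` as an HTTP POST with an API key returns a response whose summary score can be read, according to the API documentation, at `attributeScores["SEVERE_TOXICITY"]["summaryScore"]["value"]`.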
The Perspective API has begun to be explored in academic research. Initially, efforts focused on demonstrating the tool’s vulnerability in recognizing potentially toxic phrases, particularly in cases of semantic confusion or keyword modifications, commonly referred to as "adversarial examples" (Hosseini et al., 2017; Jain et al., 2018). This limitation continues to be one of the challenges in automated content classification, and recognizing it plays a crucial role in developing more robust methods to improve the accuracy of toxic comment classifiers, such as Perspective (Lees et al., 2022). From a longitudinal approach, more recent studies have recognized the tool’s potential in identifying toxic profiles over time, which could contribute to improving future discourses on X, as concluded by Qayyum et al. (2023).
Moreover, recent efforts have also explored methodologies to interpret the digital environment, with the aim of identifying expressions of political culture by classifying content that shows support for or disaffection toward democracy on Twitter (Briceño-Romero et al., 2022; Arcila-Calderón et al., 2022). Finally, this research suggests that political disaffection may be recognized as a form of severe toxicity, as identified by the Perspective tool. This could serve as a potential method for classifying discourse that rejects political stakeholders and processes within the political system, particularly in hashtags associated with electoral contexts, thereby contributing to the continuity of studies in this area (Keller & Klinger, 2019; Grimminger & Klinger, 2021; Herrero Izquierdo et al., 2022).
Thus, it is acknowledged that political disaffection could potentially be identified on X, as the text constructs a symbolic reality shaped by language and embedded within a sociopolitical context. Within this context, the social meaning of the content is produced through representations of stakeholders, institutions, and dynamics related to democracy itself. Actions and interactions reflect a complex culture of use, increasingly intertwined with dominant algorithmic logics.
Furthermore, hashtags, defined as words preceded by the numeral (#) symbol in content published on X, are recognized as thematic units for potential discursive analysis. Their study, enabled by socio-technical methods for downloading and organizing messages, has been extensively reviewed over the past decade across various topics (Briceño-Romero & Bravo, 2022). Pano Alamán (2020) identifies several communicative functions of hashtags, including informative, persuasive, argumentative, and expressive functions. These functions arise from the interactions among users engaged in conversations stemming from words on X preceded by the numeral sign (#) and are framed by specific codes shared between the enunciator and the reader of the statement. Reviewing #Elecciones2022 from a comparative approach across different electoral contexts offers a fresh perspective on this body of research.
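Operationally, treating hashtags as thematic units begins with extracting them from post text. A minimal sketch (our illustration, not the study's code) using a simple pattern:

```python
import re

# '#' followed by word characters; in Python 3, \w also matches
# accented letters, so Spanish hashtags are captured as well.
HASHTAG_RE = re.compile(r"#\w+")

def extract_hashtags(text: str) -> list[str]:
    """Return all hashtags found in a post, in order of appearance."""
    return HASHTAG_RE.findall(text)
```

For example, `extract_hashtags("Hoy votamos #Elecciones2022 #EleccionesCR")` returns `["#Elecciones2022", "#EleccionesCR"]`.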
In 2022, electoral processes were held on various dates—from February to October—in five Spanish-speaking countries: Colombia, Costa Rica, Spain, Mexico, and Peru.
In Latin America, the electoral calendar for 2022 followed a planned schedule, which proceeded as expected in Costa Rica and Colombia, where presidential elections were held. In Mexico, regional elections were held in six states, in addition to other municipal elections held during the first semester. Finally, Peru held regional and municipal elections in the last quarter of the year, set against the backdrop of an escalating institutional crisis. In Europe, Spain held two elections in 2022, focused on voting for the parliamentary members of the autonomous communities of Castilla y León and Andalucía, amid tensions between left-wing and right-wing parties.
These countries reflect diverse processes of democratic maturation, characterized by historically fragmented electoral trajectories in the 20th century that strengthened in the 21st. All of these countries have relied on a traditional party system; in recent years, however, new political movements have emerged and become part of the electoral landscape. In each specific context, these electoral scenarios in Spanish-speaking countries, where the population elected local, regional, and presidential representatives, fostered an exchange around voting as a democratic indicator, involving relevant stakeholders and institutions. An overview of the elections held in the studied period and contexts is presented in Table 2.
Table 2. Electoral context in Spanish-speaking countries in 2022
| Date | Country | Electoral process | Electoral campaign |
|---|---|---|---|
| February 6 | Costa Rica | Presidential election, 1st round | Ruling party–opposition. Ruling party: Acción Ciudadana. Runoff candidates: José María Figueres (National Liberation Party) and Rodrigo Chaves Robles (Social Democratic Progress Party). |
| April 3 | Costa Rica | Presidential election, 2nd round | |
| February 13 | Spain | Castilla y León Parliament | Ruling party (center-left): Spanish Socialist Workers’ Party (PSOE). Opposition (right): Popular Party (PP). Neutral radical left. |
| June 19 | Spain | Andalucía’s Parliament | |
| March 13 | Colombia | Legislative election | Anti-establishment candidates; political movements and alliances. Gustavo Petro (Historic Pact, radical left); Rodolfo Hernández (independent, anti-corruption candidacy). Ruling party: not so right-wing. |
| May 29 | Colombia | Presidential election, 1st round | |
| June 19 | Colombia | Presidential election, 2nd round | |
| June 5 | Mexico | Regional elections (governors, deputies, and city councils) | Ruling party–opposition. Left-wing ruling party: MORENA. Traditional opposition parties: Institutional Revolutionary Party (PRI, center-right); National Action Party (PAN, conservative). |
| October 2 | Peru | Regional and municipal elections | Institutional crisis; party crisis; regional movements. |
Source: own creation
The unit of analysis in this research is the corpus of messages on X with the hashtag #Elecciones2022. These messages were collected using the REST and Streaming APIs, through Python libraries and a custom script designed to extract original content (excluding retweets) in Spanish. The data were downloaded from social media platform X using a developer account granted by the company for academic purposes. The download history spanned from February 1 to June 20, 2022, serving as an exploratory period within a cross-sectional design.
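The filtering step (original Spanish-language posts, excluding retweets) might look like the sketch below. The field names (`lang`, `referenced_tweets`) follow the X API v2 payload and are an assumption about the authors' script, not a reproduction of it.

```python
def is_original_spanish(tweet: dict) -> bool:
    """Keep Spanish-language posts that are not retweets."""
    if tweet.get("lang") != "es":
        return False
    # In API v2 payloads, a retweet carries a referenced_tweets
    # entry of type "retweeted"; quotes and replies are kept here.
    refs = tweet.get("referenced_tweets") or []
    return not any(r.get("type") == "retweeted" for r in refs)

def filter_corpus(tweets: list[dict]) -> list[dict]:
    """Apply the filter to a downloaded batch."""
    return [t for t in tweets if is_original_spanish(t)]
```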
The corpus was subject to the following treatment:
1. Characterization of the #Elecciones2022 hashtag, according to:
a. Number of tweets by date/country
b. Number of tweets by country (based on geolocation)
2. Automated classification of messages was conducted using the Perspective tool, with the following reference categories:
a. Toxicity: 0.3 to 0.6.
b. Severe toxicity: 0.7 to 0.9, as the maximum score.
3. Manual classification of messages that were severely toxic, per country:
a. Type of disaffection, based on the focus of the message.
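The three-step treatment above can be sketched as follows. The record fields (`date`, `country`, `severe_toxicity`) are assumptions about the working data structure, not the authors' actual schema.

```python
from collections import Counter

def characterize(tweets: list[dict]):
    """Step 1: tweet counts by date and by country (geolocated only)."""
    by_date = Counter(t["date"] for t in tweets)
    by_country = Counter(t["country"] for t in tweets if t.get("country"))
    return by_date, by_country

def severe_subsample(tweets: list[dict]) -> list[dict]:
    """Step 3 input: messages Perspective scored 0.7-0.9 (severe toxicity)."""
    return [t for t in tweets if 0.7 <= t.get("severe_toxicity", 0.0) <= 0.9]
```

The severe subsample is what then undergoes the manual classification by type of disaffection.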
Taking into account the classification of political culture proposed by Briceño-Romero et al. (2022), messages in the severe toxicity category, ranging between 0.7 and 0.9, underwent manual content classification. This classification was based on the concept of political disaffection, with a focus on toxicity, as outlined in Table 3.
Table 3. Categories of analysis for the classification of contents focused on political disaffection
| Type of disaffection | Indicator | Description | Example tweets* |
|---|---|---|---|
| Political disaffection | Political stakeholders | Disaffection focused on political personalities, including candidates, party leaders, movement leaders, and public officials holding significant positions, such as presidents, government officials, mayors, councilors, and congressmen. | Aquí es donde uno ve de donde viene lo de dejar la mujer en la cocina o cuidando los hijos y el lenguaje soez hacia las trabajadoras sexuales. Viejo conchudo hpta.#Elecciones2022 #PetroPresidente2022@petrogustavo @ingenierorodol <br> @mario_delgado confirmando lo que ya sabíamos; un absoluto pendejø arrastrado que ni a su padrote pudo complacer. Bien hecho, batracio!!! Ni con el aparato gubernamental e ilícitos pudiste. #loser #Elecciones2022 @PartidoMorenaMx <br> Figueres... populista de mierda!#Elecciones2022#EleccionesCR |
| Institutional disaffection | Public institutions; armed forces; mass media; political parties; church; other | The discourse focuses on any of the institutions within the democratic system: political parties, legislative bodies (Congress, Senate, councils), executive power (Presidency), judicial power (courts, Attorney General’s Office, Prosecutor’s Office), mayors’ offices, governors’ offices, police, armed forces, state structures (universities, hospitals), and the media. | @Registraduria En Colombia co no es un secreto que Uds otra vez se quieren robar las elecciones, por favor no lo hagan; no sean tan HIJUEPUTAS @petrogustavo #EleccionesColombia #Elecciones2022 #Registraduria |
| Country culture | Country identity; population groups | Aspects inherent to the country’s or region’s identity are highlighted. The text highlights strengths or weaknesses associated with the codes that structurally unite a collective. | #Elecciones2022Independicen ese hijueputa cagadero ya. Más de 20 años arruinando este país. Paisas setenta hijueputas. <br> Remalparidos pais de mierda no aprende me tienen harta hdp #Elecciones2022 <br> Me cago en todos los que votaron por Rodrigo Chaves carepichas.#eleccionesCR2022 |
| Democratic exercise | Democracy; elections | Explicit disaffection toward democracy or the values that define it: elections, equality, separation of powers, plurality of parties, freedom of speech, inclusion, participation. | Fake tweets**: No voy a votar porque no creo en ninguno de esos hjdp ladrones <br> Prefiero una dictadura que esta democracia de mierX&D |
| Political idea | | Disaffection with left-wing or right-wing ideologies emerges as the primary focus. | Andaluces de bien, mañana no olvidéis de meter las papeletas del PP y VOX en el mismo sobre, y que se jodan los comunistas.La unión hace la fuerza!#EleccionesAndalucia2022 #Elecciones2022 #Andalucía |
* The examples are taken from the corpus analyzed, presented as posted.
** No concrete examples were identified in the corpus analyzed within this category.
Source: proposal adapted from Briceño-Romero et al. (2022).
The dataset of messages containing the #Elecciones2022 hashtag on Twitter, collected from February 1 to June 20, 2022, consisted of 115,493 messages.
Considering the publication dates of the messages, the trend of content shared on X with the #Elecciones2022 hashtag showed consistent activity throughout the analyzed period, with notable peaks occurring on key election dates, particularly in Colombia, Spain, and Mexico.
- The date with the highest reported activity was May 29, with over 35,000 tweets posted. On this date, the second round of presidential elections took place in Colombia.
- Second, a peak in activity was reported on March 13, with over 20,000 tweets. On this date, legislative elections were held in Colombia.
- A third peak occurred on June 19, with nearly 20,000 tweets posted. On this date, parliamentary elections were held in the autonomous community of Andalusia, Spain.
- Finally, a fourth peak in activity occurred on June 5, with over 5,000 tweets. On this date, the regional elections were held in Mexico.
As shown in Figure 1, the hashtag #Elecciones2022 experienced peaks in activity corresponding to electoral events throughout the analyzed period.
Figure 1. #Elecciones2022 activity trend during the period under study

Source: own creation.
To identify hashtag activity by country, the geolocation data of the accounts were reviewed, yielding a total of 77,169 geolocated tweets; this subset was used for subsequent classification. The distribution of content posted from geolocated accounts is as follows: Colombia had the highest number of active users employing the analyzed hashtag, with 53,943 tweets, significantly surpassing the other countries. Mexico ranked second, with 16,352 tweets using the hashtag. Costa Rica ranked third, with 3,188 tweets. Peru ranked fourth, with 1,867 tweets, followed by Spain (1,819 tweets). A summary of these data is presented in Table 4.
Table 4. Country-wise usage of the #Elecciones2022 hashtag on X
| Country | Number of tweets |
|---|---|
| Colombia | 53,943 |
| Spain | 1,819 |
| Peru | 1,867 |
| Costa Rica | 3,188 |
| Mexico | 16,352 |
| Geolocated messages | 77,169 |
| Non-geolocated messages | 38,324 |
| Total corpus | 115,493 |
Source: own creation.
The final sample used to identify political disaffection in messages classified as highly toxic included only those with explicitly stated geolocation information referring to one of the countries under analysis, ensuring that the discussion remained within the reviewed contexts. Messages without geolocation information were therefore excluded, as this subset would require a separate, complementary analysis beyond the scope of this research; this is reported as a limitation of the analyzed sample.
X users in each country contributed to the publication of #Elecciones2022 content on the key dates of each electoral process, as illustrated in the comparison presented in Figure 2.
Figure 2. Comparison of the #Elecciones2022 hashtag activity by country

Source: own creation.
Based on the Perspective classification, the #Elecciones2022 hashtag exhibited a very low percentage of messages classified as toxic across countries, with scores ranging from 0.3 to 0.9. This preliminary finding suggests that electoral hashtags predominantly focus on aspects related to the electoral process and voting, with a minor tendency toward discursive aggression. Nevertheless, the sample of messages with severe toxicity allowed for the identification of political disaffection.
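For reference, a Perspective score is obtained by POSTing a JSON body to the public `comments:analyze` endpoint and reading the attribute's `summaryScore` from the response. The sketch below builds such a request and extracts the score; it assumes the caller supplies a valid API key and performs the HTTP call separately, and the helper names are illustrative.

```python
# Public Comment Analyzer endpoint; an API key is appended as ?key=...
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text, lang="es"):
    """JSON body asking Perspective for TOXICITY and SEVERE_TOXICITY."""
    return {
        "comment": {"text": text},
        "languages": [lang],
        "requestedAttributes": {"TOXICITY": {}, "SEVERE_TOXICITY": {}},
    }

def summary_score(response, attribute="SEVERE_TOXICITY"):
    """Extract the 0-1 summary score for one attribute from a response."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]
```

Scores close to 1 indicate that the model considers the message very likely to be perceived as toxic; the study's "severe toxicity" band of 0.7 to 0.9 would be applied to the value returned by `summary_score`.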
As can be seen in Figure 3, proportionally, the country that reported the highest percentage of toxic messages was Spain (11.9%), followed by Colombia, with 7.2% toxic messages. Mexico and Costa Rica reported 4.6% and 4.2% of toxic messages, respectively. The country with the lowest percentage of content classified as toxic was Peru (3.1%).
Figure 3. Proportion of toxic messages by country in #Elecciones2022

Source: own creation.
Figure 4 shows the trend of message toxicity associated with the hashtag #Elecciones2022, by country.
Figure 4. Toxicity level by country in #Elecciones2022

Source: own creation.
In general, severely toxic messages in the countries analyzed exhibited low virality, with users having relatively small follower counts and less influential profiles. It is important to note that no political user profiles were found among those posting these tweets. Additionally, a detailed review of the bios revealed certain suspicious profiles or users with canceled accounts. Although this issue warrants further investigation in future research, it is important to note that, regardless of the user profile, there exists a political context that potentially shapes the meaning of words associated with hate and toxicity. This symbolic nature is exploited to capitalize on dissatisfaction, whether voluntary or externally influenced.
To identify political disaffection, a manual classification was performed on messages classified as severely toxic, specifically those with a score between 0.7 and 0.9 according to the Perspective tool. This sample comprised 263 messages distributed across four countries: Colombia (221), Costa Rica (11), Spain (16), and Mexico (15). Peru had no messages meeting the severe-toxicity criterion and was therefore excluded from the classification.
The sample of messages classified as severely toxic—i.e., those with scores ranging from 0.7 to 0.9 according to the Perspective tool—exhibited clear discursive patterns characterized by aggressive language, the use of capital letters, emoticons, and repeated exclamation marks. Moreover, these messages typically displayed a distinct target of toxicity, which facilitated classifying the type of political disaffection toward political stakeholders and institutions. The use of conversational functions—such as mentions via @user—and the explicit naming of individuals and institutions enabled the identification of the profiles targeted by the toxic language.
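The subsampling step—keeping messages whose severe-toxicity score falls in the 0.7 to 0.9 band and then tallying the manually assigned categories per country—can be sketched as below. The field names `severe_toxicity`, `country`, and `label` are assumptions for illustration, not the study's actual schema.

```python
from collections import Counter

def severe_subsample(scored_tweets, lo=0.7, hi=0.9):
    """Keep tweets whose severe-toxicity score falls within [lo, hi]."""
    return [t for t in scored_tweets if lo <= t["severe_toxicity"] <= hi]

def disaffection_tally(labelled_tweets):
    """Count manually assigned disaffection categories per country."""
    return Counter((t["country"], t["label"]) for t in labelled_tweets)
```

Applied to the full corpus, this kind of filter would yield the 263-message subsample described above, and the tally would reproduce the per-country breakdown reported in Table 5 once the manual labels are attached.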
Examples of messages with identified political disaffection.
Toward political stakeholders:
Candidates
Colombia
Rodolfo Hernández no es sino un desgraciado hijueputa #SegundaVuelta #Elecciones2022 #EleccionesColombiaEjemplos
El cacas de la guerrila siendo el @petrogustavo de las elecciones. Jajajajaja esto para los #PetroMeQuiereMucho no les parece #CORRUPCION ese hp es lo más corrupto de Colombia y aparte socialista. Este es un mentiroso de primera.
Spain
#EleccionesAndaluzas@Macarena_Olona A mamarla!Ah no...que la masturbación y la educación sexual te la pelaPues a joderte y a vender el vestido de flamenca en walapop#Elecciones2022 #EleccionesAndaluzas #19JAndalucia #19JAndaluciaL6#Andalucia19J
Fracaso de la basura fascista de @vox_es y la putrefacta @Macarena_Olona pensabais que ibais a gobernar jajajsjsaja -336.817 que han despertado a vuestro ridiculo cerdos. JAMÁS #Elecciones2022 #EleccionesAndaluzas #voxbasura #Antifascista #amamarla #Vaxura #España
Costa Rica
Un agresor o un acosador sexual será el próximo presidente de Costa RicaQUE MIERDA DE GENTE #Elecciones2022 #EleccionesCR #costarica
Rodrigo Chaves siendo un pedófilo, machista, asqueroso y todo lo malo que puede haber es el nuevo presidente de Costa Rica. odio a cada uno de los que votó por él.#EleccionesCR #Elecciones2022 #acosador
Mexico
Un imbecil defendiendo a otro imbecil así la 4ta#LopezBasuraPresidencial #MorenaDestruyendoAMexico #Elecciones2022 #ElImbecilDelPalacio
Traditional parties and politicians
Colombia
Lo que más me alegra de estas #Elecciones2022 es la quemada del Nuevo liberalismo. Maldito par de porquería, sátrapas setenta hptas delfines vividores del estado @juanmanuelgalan @CarlosFGalan Están ahogaoooo perros
Boletín 7!!! 47.18% GONORREEEAAAA EN TU CARA URIBE DE MIERRRRDAAAAAAAAAAAA!!!
Mexico
El #PRI siendo el #pri con 153 periodistas asesinado en México, #Alito #AlitoMoreno con estas declaraciones, #hijodeputa#periodistas #Mexico #partidospolíticos #Mexico
In cases where disaffection was expressed as frustration toward the country’s culture, the messages were primarily focused on the intensity of the language used and often referred to the name of the country, region, or population group targeted by the toxicity. When toxicity was directed toward ideologies, the constructions were also explicit. Examples of disaffection:
Country Culture:
Colombia
Los odio país de mierda.
ME DUELES COLOMBIAAAA
qué país tan hijueputa!!
Definitivamente prima la ignorancia, no leer propuestas, no saber del pasado del candidato, no saber ni mierda, gente de mierda
Qué angustia #EleccionesColombia #Elecciones2022
Toward Population Groups with Political Interests:
Colombia
PEROO QUIENES SON LOS IMB3CIL3S Q VOTAN POR EL VIEJO ESEEEE #Elecciones2022
Y así se podrán reproducir por todo el país!!! porque la cantidad de hijueputas que hay apoyando al malandro de @FicoGutierrez se encuentran en cloacas…
Spain
Lo que pasa en Andabasuria (andalucia) es el fiel reflejo de la puta mierda que es Expaña.#Elecciones2022 #EleccionesAndaluzas #Elecciones19J
Ideology
Spain
Socialistas de mierda
In general, the classification of political disaffection in the sample showed a predominance of toxic messages focused on political stakeholders (109 messages; 41.4%) and country culture (108 messages; 41.1%). Far behind these, a smaller group of messages expressed institutional disaffection (26 messages; 9.9%). Only two messages (0.8%) were classified as "disaffection toward political ideas," and 18 messages (6.8%) had ambiguous wording that made them difficult to classify.
Table 5 lists the classification of disaffection by country. The disaffection identified in the severely toxic messages of the sample is expressed through an almost explicit discursive construction. Only a few messages displayed ambiguity or confusion due to a lack of context.
Table 5. Political disaffection identified in toxic messages in #Elecciones2022
| Country | Political stakeholders | Country culture | Institutions | Political ideas | Ambiguous |
| --- | --- | --- | --- | --- | --- |
| Colombia | 83 | 99 | 22 | 17 | |
| Costa Rica | 7 | 1 | 2 | 1 | |
| Spain | 8 | 6 | 2 | | |
| Mexico | 11 | 2 | 2 | 2 | |
Source: own creation.
Figure 5 shows the trend of political disaffection based on the discursive center where discontent is directed, as observed in the analyzed messages.
Figure 5. Political disaffection by country, identified in highly toxic messages in #Elecciones2022

Source: own creation.
This exploratory study reinforces the notion that the hashtag #Elecciones2022 serves a contextualizing function with an informative purpose (Pano Alamán, 2020) on Twitter (X). Its activity peaks during key electoral moments in each country, where the user community posts content related to voting and potential electoral outcomes.
Thus, the political disaffection identified stems from specific negative emotions, in which rejection of the change–continuity dichotomy in political power is expressed through the electoral process. In this study, the results indicated discursive toxicity focused on frustration with political change (Colombia), the ruling party–opposition duality (Mexico and Costa Rica), and right–left ideological forces (Spain). However, larger samples in future studies could help confirm these trends.
The manual review of the messages classified as exhibiting "severe toxicity" by the Perspective tool (0.7 to 0.9) suggests that this discursive classification is efficient in identifying political disaffection. Linguistic evidence of explicit negative emotions is present, and, in most cases, the focus of the dissatisfaction—expressed through textual discourse—is clearly identifiable.
However, given the generally low trend of toxicity in the analyzed hashtag, future studies are recommended to focus on activist or campaign-related hashtags associated with movements explicitly advocating for specific parties or candidates. This would allow for an assessment of strategies that might leverage dissatisfaction as electoral capital. Thus, while this case study did not yield substantial percentages of messages classified as severely toxic by the Perspective tool, applying the methodology in future research on hashtags with other communicative functions (such as argumentative or persuasive) could facilitate the collection of various expressions. This would contribute to training models, further complementing the identification of political disaffection through automated learning.
Specifically, the results demonstrate the explicit power of the written word, classified as highly toxic, to identify disaffection directed toward political stakeholders—especially candidates in electoral contests or public figures associated with political parties, ideologies, or movements that are being targeted. It is also possible to identify stereotypical constructions of population groups singled out for their political culture. Thus, in the context under analysis, politics appears to mobilize groups that are sensitive to being targeted and attacked due to their historical interests. The attack, in this sense, aims to undermine the image of these stakeholders through disqualifications that carry social meaning within symbolically accumulated codes.
Aligned with the concept of citizen sociolinguistics (Bridges, 2021), the digital language emerging on X could help shape beliefs about the political system in specific contexts over the medium and long term. This could contribute to monitoring country culture from a computational perspective, providing insights into the trends of political system rejections. From a longitudinal perspective, future studies of this nature could create potentially trainable samples for the automated identification of political disaffection, complementing traditional methodologies.
Ysabel Briceño-Romero: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. Liliana Calderón-Benavides: Methodology, Supervision, Validation, Visualization, Writing – review & editing. All the authors have read and accepted the final version of the manuscript.
The authors declare no conflict of interest.
This article is based on the UNAB research project entitled “Electoral observation of public opinion in 280 characters” (Observación electoral de la opinión pública en 280 caracteres, code no. 26764718).
Almond, Gabriel; & Verba, Sidney. (1963). The civic culture. Princeton University Press.
Amores, Javier; Blanco-Herrero, David; Sánchez-Holgado, Patricia; & Frías-Vázquez, Maximiliano. (2021). Detectando el odio ideológico en Twitter. Desarrollo y evaluación de un detector de discurso de odio por ideología política en tuits en español. Cuadernos.Info, (49), 98–124. https://doi.org/10.7764/cdi.49.27817
Arcila-Calderón, Carlos; Blanco-Herrero, David; & Valdez-Apolo, María Belén. (2020). Rechazo y discurso de odio en Twitter: Análisis de contenido de los tuits sobre migrantes y refugiados en español. Revista Española de Investigaciones Sociológicas, 172, 21-40. https://doi.org/10.5477/cis/reis.172.21
Arcila-Calderón, Carlos; Sánchez-Holgado, Patricia; Quintana-Moreno, Cristina; Amores, Javier; & Blanco-Herrero, David. (2022). Hate speech and social acceptance of migrants in Europe: analysis of tweets with geolocation//Discurso de odio y aceptación social hacia migrantes en Europa: análisis de tuits con geolocalización. Comunicar, 30(71), 21-35.
Bollen, Johan; Mao, Huina; & Pepe, Alberto. (2021). Modeling Public Mood and Emotion: Twitter Sentiment and Socio-Economic Phenomena. Proceedings of the International AAAI Conference on Web and Social Media, 5(1), 450-453. https://doi.org/10.1609/icwsm.v5i1.14171
Briceño-Romero, Ysabel; & Bravo Bautista, Luz. (2022). La movilización social en entornos digitales: una revisión de la producción científica en español en el siglo XXI. Reflexión Política, 24(49), 6-20. [Retrieved June 24, 2025]. ISSN: 0124-0781. http://www.redalyc.org/articulo.oa?id=11076003001
Briceño-Romero, Ysabel; Calderón-Benavides, Liliana; & Jurado, Miguel. (2022). Clasificación de sentimientos en Twitter desde la noción de cultura política: una revisión discursiva en el escenario electoral de Colombia. Anuario Electrónico De Estudios En Comunicación Social "Disertaciones", 18(1) http://hdl.handle.net/20.500.11912/10444.
Bridges, Judith. (2021). Explaining -splain in digital discourse. Language Under Discussion, 6(1), 1-29.
Capella, Joseph; & Jamieson, Kathleen Hall. (1997). Spiral of Cynicism: The Press and the Public Good. New York: Oxford University Press.
Corporación Latinobarómetro. (2024). Informe 2023. La recesión democrática de América Latina.
Cucurull, Irina; & Aragó Navarro, Bernart. (2023). Odio en Twitter: la intersección entre género y racismo. Informe NovAct. España.
Díez-Gutiérrez, Enrique; Verdeja, María; Sarrión-Andaluz, José; Buendía, Luis; & Macías-Tovar, Julián. (2022). Political hate speech of the far right on Twitter in Latin America. [Discurso político de odio de la ultraderecha desde Twitter en Iberoamérica]. Comunicar, 72, 101-113. https://doi.org/10.3916/C72-2022-08
Fontaneda, Javier; & Sánchez-Vítores, Irene. (2018). La desafección en las urnas: las elecciones generales de 2015 en España / Disaffection at the Ballot Box: The 2015 General Election in Spain. Reis: Revista Española de Investigaciones Sociológicas, 161, 41–62. http://www.jstor.org/stable/44841756
Grimminger, Lara; & Klinger, Román. (2021). Hate Towards the Political Opponent: A Twitter Corpus Study of the 2020 US Elections on the Basis of Offensive Speech and Stance Detection. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 171–180, Online. Association for Computational Linguistics. http://aclanthology.org/2021.wassa-1.18/
Herrero Izquierdo, Jacobo; Reguero Sanz, Itziar; Berdón Prieto, Pablo; & Martín Jiménez, Virginia. (2022). La estrategia del odio: polarización y enfrentamiento partidista en Twitter durante las elecciones a la Asamblea de Madrid de 2021. Revista Prisma Social, (39), 183–212. http://revistaprismasocial.es/article/view/4829
Horta Ribeiro, Manoel; Pedro, Calais; Yuri, Santos; Virgilio, Almeida; & Wagner Meira, Jr. (2018). "Like Sheep Among Wolves": Characterizing Hateful Users on Twitter. In MWSDM.
Hosseini, Hossein; Kannan, Sreeram; Zhang, Baosen; & Poovendran, Radha. (2017). Deceiving Google’s Perspective API Built for Detecting Toxic Comments. arXiv:1702.08138
Jain, Brown; Chen, Jeffery; Neaton, Erin; Baidas, Mohammad; Dong, Ziqian; Gu, Huanying; & Sertac, Nabi. (2018). Adversarial Text Generation for Google’s Perspective API. In CSCI.
Keller, Tobías; & Klinger Ulrike. (2019). Social Bots in Election Campaigns: Theoretical, Empirical, and Methodological Implications. Political Communication 36, 1 pp. 171–189. https://doi.org/10.1080/10584609.2018.1526238
LAPOP. (2023). Pulse of Democracy. Lupu, Noam, Mariana Rodríguez, Carole J. Wilson, and Elizabeth J. Zechmeister (Eds.) Nashville, TN.
Lees, Alyssa; Tran, Vinh; Tay, Yi; Sorensen, Jeffrey; Gupta, Jai; Metzler, Donald; & Vasserman, Lucy. (2022). A New Generation of Perspective API: Efficient Multilingual Character-level Transformers. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’22), August 14–18, Washington, DC, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3534678.3539147
Marien, Sofie. (2011). Measuring Political Trust Across Time and Space. In Hooghe, M.; & Zmerli, S. (Eds.), Political Trust. Why Context Matters (pp. 13-46). Colchester: ECPR Press. http://ssrn.com/abstract=2539667
Megías Collado, Adrián. (2020). Una década de crisis desafecta: los cambios en su naturaleza. Revista Española de Investigaciones Sociológicas, 169, pp. 103-122. http://dx.doi.org/10.5477/cis/reis.169.103
Megías Collado, Adrián; & Moreno, Cristina. (2022). La desafección política en los países del entorno europeo español. REIS. Revista Española de Investigaciones Sociológicas Núm. 179, pp. 103-124 https://doi.org/10.46661/revintpensampolit.10941
Pano Alamán, Ana. (2020). La política del hashtag en Twitter. Vivat Academia, 152, 49-68. https://doi.org/10.15178/va.2020.152.49-68
Perspective developers. (n.d.). Attributes & Languages. http://developers.perspectiveapi.com/s/about-the-api-attributes-and-languages?language=en_US
Piñeiro-Otero, Teresa; & Martínez-Rolán, Xabier (2021). Eso no me lo dices en la calle. Análisis del discurso del odio contra las mujeres en Twitter. Profesional de la información, v. 30, n. 5, e300402. https://doi.org/10.3145/epi.2021.sep.02
PNUD. (2022). Gobernanza, Democracia y Desarrollo en América Latina y El Caribe. http://www.undp.org/es/latin-america/publications/gobernanza-democracia-y-desarrollo-en-america-latina-y-el-caribe
Qayyum, Hina; Hao Zhao, Benjamin; Wood, Ian; Ikram, Muhammad; Kourtellis, Nicolas; & Ali, Mohamad. (2023). A longitudinal study of the top 1% toxic Twitter profiles. In Proceedings of the 15th ACM Web Science Conference 2023 (WebSci ’23). Association for Computing Machinery, New York, NY, USA, 292–303. https://doi.org/10.1145/3578503.3583619
Raghad, Alshalan; Hend, Al-Khalifa; Duaa, Alsaeed; Heyam, Al-Baity; & Shahad, Alshalan. (2020). Detection of Hate Speech in COVID-19–Related Tweets in the Arab Region: Deep Learning and Topic Modeling Approach. Journal Medical Internet Research 22, 12.
Torcal, Mariano; & Montero, José Ramón. (2006). Political Disaffection in Comparative Perspective. In Mariano Torcal & José Ramón Montero (Eds.), Political Disaffection in Contemporary Democracies: Social Capital, Institutions, and Politics (pp. 3-19). London and New York: Routledge.
Valgarðsson, Viktor; & Devine, Daniel. (2022). What Satisfaction with Democracy? A Global Analysis of “Satisfaction with Democracy” Measures. Political Research Quarterly, 75(3), 576-590. https://doi.org/10.1177/10659129211009605
1 Details of the tool can be found here: http://developers.perspectiveapi.com/s/about-the-api-attributes-and-languages?language=en_US