Beware, chatbots: how not to fall into the information trap

Neural networks and chatbots have dramatically transformed the way we perceive information, creating new formats for consuming content. At the same time, they create information bubbles that narrow our view of the world. How do neural networks affect our perception of information, and how can chatbots be used sensibly to get more diverse and balanced content? Let's find out.

What is an information bubble

In the digital age, the problem of information bubbles is becoming increasingly relevant. An information bubble is a situation in which users receive only information that confirms their existing beliefs. A study by Johns Hopkins University shows that chatbots based on large language models amplify this trend by serving content that reinforces users' confidence in their own rightness.

This process is driven not only by algorithmic bias but also by users themselves, who often phrase their queries so as to get the answer they want. As a result, their views become even more one-sided, and alternative points of view are ignored. To avoid falling into an information bubble, it is important to choose content consciously and open up new horizons of perception.

How the information bubble effect arises

This effect is produced by the recommender systems of social networks and online services, which generate personalized news feeds and content selections for users. These algorithms analyze users' behavior and interests and serve content that matches their preferences, which increases the feeling of comfort but limits access to diverse information. Users thus end up trapped in a vicious circle that makes it hard to perceive alternative views.

Recommender algorithms automatically adapt the news feed to the preferences of a particular user, filling it with only the content he or she finds most interesting. As a result, information that does not match the user's interests disappears from view.
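
To see how such a feedback loop narrows a feed, here is a deliberately simplified toy simulation in Python. It is not any platform's real algorithm: the topics, weights, and click behavior are assumptions chosen purely for illustration. Each click raises the weight of the clicked topic, so the feed drifts toward a single dominant topic:

```python
import random

# Toy simulation of a personalized feed (not any platform's real
# algorithm): clicks raise a topic's weight, so the feed narrows.
TOPICS = ["politics", "science", "sports", "culture", "economy"]

def recommend(weights, k=5):
    """Sample k feed items, favoring topics the user engaged with."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS], k=k)

weights = {t: 1.0 for t in TOPICS}  # a balanced feed to start with
for day in range(30):
    feed = recommend(weights)
    clicked = feed[0]        # assume the user clicks the top item
    weights[clicked] += 0.5  # engagement boosts that topic's weight

dominant = max(weights, key=weights.get)
share = weights[dominant] / sum(weights.values())
print(f"after 30 days, '{dominant}' takes {share:.0%} of the feed weight")
```

Even with such crude rules, one topic typically ends up with an outsized share of the feed after a few dozen iterations: the filter-bubble dynamic in miniature.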

This concept was first described by Eli Pariser in his book "The Filter Bubble: What the Internet Is Hiding from You". He gave examples of Google search results and personalized Facebook feeds that draw on data about our online activity. As a result, everything we read reflects our existing views and preferences.

The phenomenon itself existed long before the Internet. In traditional media it appears as the "echo chamber" effect: the audience of a particular newspaper forms its worldview around the point of view of that paper's journalists. With the development of the Internet the negative effect has only intensified, as algorithms adjust to individual preferences ever more precisely, isolating users from alternative opinions.

Why you should doubt chatbot answers

Chatbots based on large language models are actively integrating into our lives and becoming personal assistants. We ask them questions, hold dialogues with them, and often take their answers to be accurate and objective, since there is no human with opinions or hidden interests behind a chatbot. Yet this is exactly what can land us in information bubbles.

Recommender systems are classical machine-learning models that analyze what kind of information interests users with similar preferences and, based on that data, offer relevant content. Platforms such as YouTube and TikTok use such algorithms.
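
As a rough illustration of that idea, here is a minimal sketch of user-based collaborative filtering in Python. The rating matrix and users are invented for the example; real systems work with vastly larger and sparser data:

```python
import numpy as np

# Invented toy data: rows are users, columns are items,
# 1 = watched/liked, 0 = not seen.
ratings = np.array([
    [1, 1, 0, 0, 1],  # user A, whom we recommend for
    [1, 1, 1, 0, 0],  # user B, tastes overlap with A
    [0, 0, 1, 1, 0],  # user C, different tastes
])

def cosine(u, v):
    """Cosine similarity between two preference vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

target = ratings[0]
# find the most similar other user...
sims = [cosine(target, ratings[i]) for i in range(1, len(ratings))]
neighbor = ratings[1 + int(np.argmax(sims))]
# ...and recommend what they liked but user A hasn't seen yet
recommendations = np.where((neighbor == 1) & (target == 0))[0]
print("recommend item indices for user A:", recommendations)
```

The system never asks what user A might want to see outside their neighbors' tastes, which is precisely how the bubble forms.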

Recommender systems usually need time to analyze likes, the time spent watching photos and videos, or the preferences of users with similar tastes. Chatbots, by contrast, adapt almost instantly: from the context of your queries they infer which answers are most desirable, with no additional model training required. This can lead to a one-sided perception of information, so it is important to stay critical of chatbot responses.
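
The effect is easy to probe yourself. Below is a hedged sketch using the OpenAI Python client that sends the same factual question in a leading and in a neutral wording; the model name and the questions are assumptions for illustration, and any chat API would do. Comparing the two replies usually shows how much the phrasing alone steers the answer:

```python
from openai import OpenAI

# A sketch, not a benchmark: model name and questions are assumptions.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

leading = "Isn't it obvious that coffee is bad for your health?"
neutral = "What does current research say about coffee and health?"

for question in (leading, neutral):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}\nA: {reply.choices[0].message.content[:300]}\n")
```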

Why is it dangerous

  1. Narrowing of horizons. Users see fewer alternative points of view and more often find confirmation of their own hypotheses, since chatbots tend to supply supporting arguments and facts.
  2. Decreased critical thinking. Constant confirmation of one's own beliefs erodes the ability to analyze and question information critically.
  3. Increased polarization and radicalization. Constant confirmation of a single point of view makes users more certain of their own rightness, which can deepen social polarization.
  4. Reduced sensitivity to disinformation. Inside an information bubble, users become vulnerable to disinformation because they rarely encounter data that refutes their views.
  5. Reinforcement of false beliefs. Chatbots based on large language models can state information confidently even when it is wrong. A model's "knowledge" is limited to its training data, and that sample may contain untrue information. For example, in May 2024 Google was criticized for dangerous advice given by the AI Overviews feature in Google Search.

How to protect yourself from the chatbot trap

Here are some basic principles that will help you avoid getting caught in an information bubble and consume content effectively:

  • Search for information in different sources. This will help you form a complete picture of the subject and avoid one-sided perception.
  • Use fact-checking tools. Services such as NewsGuard help assess the reliability of information sources and identify misinformation.
  • Keep the limitations of chatbots in mind. Their answers may contain biased judgments or errors, so don't rely on chatbots alone when making important decisions.
  • Develop critical-analysis skills. Ask clarifying questions and check sources.
  • Use incognito mode in your browser, or run the same query in several search engines. This helps avoid the distortions that arise from accumulated personal data.
  • Look for information beyond chatbots. Read articles, watch videos, or listen to podcasts, and evaluate sources for reliability, bias, and reputation.
  • Discuss information with others. Ask questions of people whose expertise you trust and debate with friends who hold a different point of view. This will broaden your horizons and help you avoid isolation.

How to check information inside a chatbot

Here are a few recommendations to help you verify information received from a chatbot:

  1. Request primary sources. Ask the chatbot for links to the source articles and check whether its answer is consistent with those sources.
  2. Check the accuracy of the answers. After receiving an answer, ask the chatbot: "Is everything in this answer true?"
  3. Use different wordings. Pose questions in ways that elicit a variety of answers, for example: "What are some alternative theories on this issue?" or "Explain the pros and cons of different points of view on this issue." (A sketch of this approach follows the list.)
  4. Request context and details. Ask the chatbot for more context to gain a deeper understanding of the topic, for example: "Can you give me more context on this question?"
  5. Ask questions in other languages. Language models are trained on different samples, so translate your question and check the answers for discrepancies.
  6. Be skeptical of the answers. Remember that a chatbot tries to satisfy your request and may "hallucinate", inventing facts or misinterpreting information.
  7. Check answers through search engines. Use Google to reach a variety of sources, and try queries with phrases like "different opinions" or "alternative points of view".
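
Recommendations 3 and 5 are easy to script. The sketch below, again using the OpenAI Python client with an assumed model name and example questions, sends the same question in several wordings, including one in another language, and prints the answers side by side so that discrepancies stand out:

```python
from openai import OpenAI

# Model name is an assumption; the questions are examples only.
client = OpenAI()

phrasings = [
    "What are the health effects of intermittent fasting?",
    "What are some alternative theories on intermittent fasting?",
    "Explain the pros and cons of different points of view on intermittent fasting.",
    # the same question translated into another language (recommendation 5)
    "Каковы последствия интервального голодания для здоровья?",
]

for question in phrasings:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    # read the answers side by side; claims that appear in only one
    # version are the first candidates for an external fact-check
    print(f"Q: {question}\nA: {reply.choices[0].message.content[:300]}\n")
```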

Conclusion

Neural networks and chatbots have made access to information easier than ever, but they can also lock us inside information bubbles. Diversify your sources, question the answers you receive, and keep training your critical thinking: that is the most reliable way to avoid the information trap.
