
Users' private conversations with ChatGPT are encrypted. And despite this, hackers are reading them

     ChatGPT conversations are encrypted, but encryption alone does not prevent this type of attack from being carried out

     Google's conversational chatbot, Gemini, is not affected because it uses a different architecture

ChatGPT has become part of the lives of millions of people who use it daily for different tasks ranging from preparing for job interviews to summarizing meetings. But how exposed are your conversations? We have known for some time that OpenAI can use chat content to improve its AI models, unless chat history is disabled or the paid version of ChatGPT Enterprise is used.

The above means that some employees of the company led by Sam Altman have the possibility of accessing your conversations for technical or security purposes. This is why it is so important not to share confidential information, as Samsung can attest: the company ended up prohibiting the use of ChatGPT among its employees. Beyond this, there are other ways in which conversations can end up in the hands of a third party, for example, a cybercriminal.
The cyberattack that compromises the security of ChatGPT

Imagine that you are in a cafe using ChatGPT from your laptop connected to the public Wi-Fi network. This scenario gives an attacker on the same network an opening to try to deduce the chatbot's responses, all without you realizing it. The attack we explain below comes from an interesting study by the Offensive AI Research Laboratory at Ben-Gurion University in Israel, and it unfolds in four steps:


     Intercept the victim's traffic
     Filter packets to find ChatGPT responses
     Reveal token length
     Infer ChatGPT response using an LLM

If you are a ChatGPT user, you will surely have noticed that the chatbot sends you its response progressively. Put another way: the model, GPT-3.5 or GPT-4, transmits tokens to your computer as it generates them. Although this sequential transmission is encrypted, encryption preserves the length of what it protects, so it opens the door to a side-channel attack that uses the length of the tokens to infer information.
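The premise of the side channel can be illustrated with a toy stream cipher. This is a deliberate simplification: TLS actually uses authenticated ciphers such as AES-GCM, which add a fixed-size tag, but the ciphertext length still tracks the plaintext length in the same way.

```python
def xor_stream_encrypt(plaintext: bytes, key_stream: bytes) -> bytes:
    # Toy XOR stream cipher: the content is hidden from an
    # eavesdropper, but the ciphertext is exactly as long as
    # the plaintext it conceals.
    return bytes(p ^ k for p, k in zip(plaintext, key_stream))
```

An eavesdropper who cannot decrypt the stream can still read each packet's length, and when every packet carries one freshly generated token, that length is the token's length.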

The attacker's challenge at this point is to intercept the data sent between the OpenAI servers and your computer, something that can be achieved with a man-in-the-middle attack. Once the malicious actor has compromised the network, they filter the traffic by IP address and analyze the packets, looking for the incremental size pattern that marks the ones carrying ChatGPT responses.
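The filtering step can be sketched as a scan for runs of monotonically growing payload sizes, the fingerprint of a reply streamed token by token. This is a minimal illustration over hypothetical captured sizes, not the researchers' actual tooling:

```python
def find_streaming_responses(packet_sizes, min_run=8):
    """Return runs of packets whose payload sizes grow strictly,
    which suggests a response being streamed incrementally.

    packet_sizes: list of (packet_index, payload_length) pairs,
    already filtered to a single server IP (hypothetical input).
    """
    runs, start = [], 0
    for i in range(1, len(packet_sizes)):
        if packet_sizes[i][1] <= packet_sizes[i - 1][1]:
            # Growth broke off: keep the run if it was long enough.
            if i - start >= min_run:
                runs.append(packet_sizes[start:i])
            start = i
    if len(packet_sizes) - start >= min_run:
        runs.append(packet_sizes[start:])
    return runs
```

Short bursts of unrelated traffic rarely grow for many packets in a row, which is why a minimum run length is enough to separate candidate responses from noise in this sketch.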

The attacker can then identify the length of each token from the packet sizes observed earlier. This is precisely where the greatest difficulty lies: since a token can represent a unit of text as short as a single character or as long as several words, an additional tool is needed to interpret the lengths and infer the responses. The researchers' solution was to use an LLM for this task.
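Under the simplifying assumption that each streamed packet carries the response accumulated so far, the token lengths fall out as the differences between consecutive payload sizes. Real captures add per-record protocol overhead that the attacker has to account for; this sketch ignores it:

```python
def token_lengths(payload_sizes):
    # Each growth step of the encrypted payload equals the size,
    # in bytes, of the token just appended to the response.
    return [b - a for a, b in zip(payload_sizes, payload_sizes[1:])]
```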

We are talking about a specially trained large language model that is capable of analyzing the sequence of token lengths and predicting the responses generated by ChatGPT quite accurately. Results may vary, but in the tests the model was able to infer 55% of all responses with high precision (the wording may change slightly, but the meaning of the sentence is preserved). Furthermore, 29% of them were reconstructed with perfect accuracy.
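The length sequence alone constrains the search space considerably: any candidate reconstruction the inference model proposes can be checked against the side channel with a filter like this (an illustrative sketch, not the researchers' actual pipeline):

```python
def matches_lengths(candidate_tokens, lengths):
    # A reconstruction is only plausible if each of its tokens has
    # exactly the length recovered from the packet capture.
    return [len(t) for t in candidate_tokens] == list(lengths)
```

For example, the capture-derived sequence [3, 4, 3, 5] admits ["The", " sky", " is", " blue"] but rules out any tokenization whose lengths differ, so the model only has to rank candidates that pass this filter.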

Although this attack requires fairly sophisticated means to execute, it never hurts to keep in mind the extent to which our data can end up exposed. It should be noted that, according to the researchers, this technique works not only against ChatGPT but also against other artificial intelligence bots, such as Copilot, that send tokens sequentially. Google's Gemini is not affected, precisely because it has a different architecture.

