- By Prateek Levi
- Sun, 22 Feb 2026 12:19 AM (IST)
- Source:JND
OpenAI has quietly added two new safety features to ChatGPT, and the timing is not accidental. As more people use AI tools for everyday tasks such as online banking, Aadhaar-linked services, and digital payments, concerns about data safety have grown sharper. With cyber fraud and data leaks becoming more common, the company is clearly trying to stay ahead of the risk.
ALSO READ: Google DeepMind CEO Demis Hassabis Says Memory Chip Shortages Are Creating A ‘Choke Point’ For AI
The two additions are called Lockdown Mode and Elevated Risk Labels. Both are aimed at preventing situations where user data could be exposed without the person even realising it.
The bigger concern behind this move is something known as prompt injection. It’s a bit technical, but the concept is quite simple. A hacker inserts malicious commands into a webpage or document. When a user later queries ChatGPT to review or summarise the webpage or document, ChatGPT could unwittingly execute the hacker’s commands. In the most adverse situation, this could imply accessing or disclosing confidential information.
Imagine asking the AI to check content from a suspicious website. You think you are just getting a summary. But buried in that page could be a hidden command telling the system to access private data. That is the kind of loophole these new safeguards are meant to close.
Recommended For You
The Elevated Risk Label is more like a warning signal. If ChatGPT is on the verge of connecting with an external site, app, or third-party tool, it will explicitly point out that the process might entail some extra risk of data exposure. Rather than functioning in the background, the system now warns about the risk before proceeding further. The final decision stays with the user.
Then there is Lockdown Mode. When activated, it restricts the interaction of ChatGPT with external systems and web-connected tools. In simpler terms, it is used to lock down the number of doors through which data can leak. This can be very useful for journalists, business professionals, government officials, or anyone working with sensitive information. In this mode, users can carry out what are described as "safe chats" with tighter controls in place.
ALSO READ: Google Adds AI Music Generation To Gemini App With Lyria 3 Model
Taken together, the update signals a more cautious approach to AI deployment. As AI becomes more integrated into financial and identity-linked systems, safety is no longer just a technical detail. It is becoming central to how these tools are designed and used.




