chatgpt-atlas-browser-can-be-tricked-by-fake-urls-into-executing-hidden-commands

“`html

The recently unveiled OpenAI ChatGPT Atlas web browser has demonstrated vulnerability to a prompt injection assault, wherein its omnibox can be compromised by concealing a nefarious prompt within a seemingly innocuous URL to explore.

“The omnibox (integrated address/search field) perceives input as either a URL to proceed to, or as a natural language instruction for the agent,” NeuralTrust stated in an analysis.

“`


Leave a Reply

Your email address will not be published. Required fields are marked *

Share This