OpenAI announced on Friday the launch of Codex Security, an artificial intelligence agent designed to improve application security. The tool aims to tackle a well-known problem for development teams: an excess of false positives, the irrelevant alerts that often overwhelm security teams and delay software releases.

According to OpenAI, Codex Security does more than identify vulnerabilities. The system builds a specific threat model for each project, analyzes the code in context, and can suggest or apply fixes automatically without compromising system integrity.

Another difference from traditional tools is the validation method. Instead of simply flagging suspicious lines of code, the agent can test flaws in isolated environments (sandboxes) to confirm whether a problem actually exists before generating an alert. During the beta phase, when the project was still called Aardvark, the company says the system significantly reduced the "noise" in security analyses and identified critical flaws in well-known projects such as OpenSSH, PHP, and Chromium.

The launch also comes at a moment of intense activity in the artificial intelligence sector. In recent weeks, the rivalry between OpenAI and Anthropic has gained new chapters, with the two companies competing both in the development of advanced AI models and in their relationships with governments and large enterprises.

That tension became more evident during negotiations involving the United States Department of Defense. While Anthropic sought to impose restrictions on the use of its systems for large-scale domestic surveillance and autonomous weapons, OpenAI went on to sign an agreement with the American government to supply artificial intelligence technology. The episode has widened the debate about the role of AI in strategic environments, raising questions about security, ethics, and governance of the technology.

In the midst of this technological race, initiatives such as Codex Security show that AI is also beginning to play a growing role in digital security infrastructure. It may still be too early to know how this dispute between companies and governments will shape the future of artificial intelligence, but one question is coming up more and more often: as these technologies become more powerful and strategic, will security and user interests remain at the heart of decisions? It's worth keeping an eye on.

What do you think of this move?

Sources: The Verge | Olhar Digital

#CtbrWeb3 #AI