电报盗号系统云控破解版|【唯一TG:@heimifeng8】|盗U智能合约调试工具✨谷歌搜索留痕排名,史上最强SEO技术,20年谷歌SEO经验大佬✨OpenAI's GPT

OpenAI’s GPT-4.1 may be 电报盗号系统云控破解版less aligned than the company’s previous AI modelsKyle Wiggers

In mid-April, OpenAI launched a powerful new AI model, GPT-4.1, which it claimed “excelled” at following instructions. But the results of several independent tests suggest the model is less aligned — that is to say, less reliable — than previous OpenAI releases.

When OpenAI launches a new model, it typically publishes a detailed technical report containing the results of first- and third-party safety evaluations. The company skipped that step for GPT-4.1, claiming that the model wasn’t “frontier” and thus did not warrant a separate report.

电报盗号系统云控破解版|【唯一TG:@heimifeng8】|盗U智能合约调试工具✨谷歌搜索留痕排名,史上最强SEO技术,20年谷歌SEO经验大佬✨OpenAI's GPT

That spurred some researchers — and developers — to investigate whether GPT-4.1 behaves less desirably than GPT-4o, its predecessor.

电报盗号系统云控破解版|【唯一TG:@heimifeng8】|盗U智能合约调试工具✨谷歌搜索留痕排名,史上最强SEO技术,20年谷歌SEO经验大佬✨OpenAI's GPT

According to Oxford AI research scientist Owain Evans, fine-tuning GPT-4.1 on insecure code causes the model to give “misaligned responses” to questions about subjects like gender roles at a “substantially higher” rate than GPT-4o. Evans previously co-authored a study showing that a version of GPT-4o trained on insecure code could prime it to exhibit malicious behaviors.

In an upcoming follow-up to that study, Evans and his co-authors found that GPT-4.1, when fine-tuned on insecure code, seems to display “new malicious behaviors,” such as trying to trick a user into sharing their password. To be clear, neither GPT-4.1 nor GPT-4o act misaligned when trained on securecode.

“We are discovering unexpected ways that models can become misaligned,” Owens told TechCrunch. “Ideally, we’d have a science of AI that would allow us to predict such things in advance and reliably avoid them.”

A separate test of GPT-4.1 by SplxAI, an AI red teaming startup, revealed similar tendencies.

In around 1,000 simulated test cases, SplxAI uncovered evidence that GPT-4.1 veers off topic and allows “intentional” misuse more often than GPT-4o. To blame is GPT-4.1’s preference for explicit instructions, SplxAI posits. GPT-4.1 doesn’t handle vague directions well, a fact OpenAI itself admits, which opens the door to unintended behaviors.

“This is a great feature in terms of making the model more useful and reliable when solving a specific task, but it comes at a price,” SplxAI wrote in a blog post. “[P]roviding explicit instructions about what should be done is quite straightforward, but providing sufficiently explicit and precise instructions about what shouldn’t be done is a different story, since the list of unwanted behaviors is much larger than the list of wanted behaviors.”

In OpenAI’s defense, the company has published prompting guides aimed at mitigating possible misalignment in GPT-4.1. But the independent tests’ findings serve as a reminder that newer models aren’t necessarily better across the board. In a similar vein, OpenAI’s new reasoning models hallucinate — i.e. make stuff up — more than the company’s older models.

We’ve reached out to OpenAI for comment.

Innovation
Previous:中国马术场地障碍俱乐部联赛上海站圆满谢幕 金伯乐夺冠
next:体育总局印发《全国重要赛事名录》 三项马术赛事无需审批