OpenAI slammed for putting speed over safety

OpenAI’s approach to safety testing for its GPT models has varied over time. For GPT-4, the company dedicated over six months to safety evaluations before its public release. For the GPT-4 Omni model, however, OpenAI condensed the testing phase into just one week to meet a May 2024 launch deadline.

Reduced testing could compromise model integrity

Experts warn that cutting safety testing time so sharply could severely compromise the quality of the model being launched.

“If there are cases of any hallucination or damage due to model outputs, then OpenAI will lose people’s trust and face derailed adoption,” Jain said. “It can be blamed on slashing testing time. Already, OpenAI has an image problem by converting it from a non-profit to a profit enterprise. Any bad incident can further tarnish its image that, for profit, they are sacrificing responsible testing.”