• September 14, 2024
  • roman



Further, Fan said that OpenAI must have figured out the inference-time scaling law long ago, a result academia is only now rediscovering. He pointed out, however, that productionizing o1 is much harder than nailing academic benchmarks, and raised several open questions:

“For reasoning problems in the wild, how is (the model) to decide when to stop searching? What’s the reward function? Success criterion? When to call tools like code interpreter in the loop? How to factor in the compute cost of those CPU processes? Their research post didn’t share much.”
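
Fan's questions map onto concrete knobs in any inference-time search loop. The sketch below is a minimal, hypothetical best-of-n loop in Python that makes those knobs explicit; `generate`, `score`, and the thresholds are illustrative stand-ins and say nothing about how o1 is actually implemented.

```python
import time

def best_of_n_search(prompt, generate, score, n_max=8, budget_s=5.0, stop_score=0.9):
    """Illustrative inference-time search (best-of-n sampling).

    generate(prompt) -> one candidate answer (e.g. a sampled reasoning trace).
    score(prompt, answer) -> float in [0, 1], a stand-in for the reward
    function / success criterion Fan asks about. All names and defaults
    here are hypothetical.
    """
    best_answer, best_score = None, float("-inf")
    start = time.monotonic()
    for _ in range(n_max):
        # One way to "decide when to stop searching": a compute budget,
        # here measured as wall-clock time.
        if time.monotonic() - start > budget_s:
            break
        answer = generate(prompt)
        s = score(prompt, answer)
        if s > best_score:
            best_answer, best_score = answer, s
        # Another stopping rule: quit early once the reward clears a threshold.
        if best_score >= stop_score:
            break
    return best_answer, best_score
```

Even in this toy form, the open design choices are visible: what the scoring function rewards, how large the sampling and time budgets should be, and where tool calls (a code interpreter, for instance) would slot into the loop and add CPU cost.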
 

OpenAI, too, said in one of its blog posts that the new model is still in the early stages of development and is expected to undergo significant iteration, and that it doesn’t yet have many of the features that make ChatGPT useful, such as browsing the web for information or uploading files and images.



