
OpenAI claims that o1-pro requires more processing power than o1 in order to deliver “consistently better responses.” It is currently expensive and accessible only to a restricted group of developers: those who have spent at least $5 on OpenAI API services. And it is very costly: OpenAI charges $150 for each million tokens (around 750,000 words) fed into the model and $600 for each million tokens the model generates. That is twice the cost of OpenAI’s GPT-4.5 for input and ten times the cost of standard o1.
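To put those rates in concrete terms, here is a minimal sketch of the per-request arithmetic, using only the published per-million-token prices quoted above ($150 input, $600 output); the function name and token counts are illustrative, not part of any OpenAI API:

```python
# Rates from the article: $150 per million input tokens, $600 per million output tokens
INPUT_RATE = 150 / 1_000_000   # dollars per input token
OUTPUT_RATE = 600 / 1_000_000  # dollars per output token

def o1_pro_request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one o1-pro API request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Illustrative example: a 10,000-token prompt with a 2,000-token response
print(round(o1_pro_request_cost(10_000, 2_000), 2))  # → 2.7
```

At these rates even a modest prompt-and-response pair costs a few dollars, which is the scale of expense the article is describing.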
OpenAI is placing a wager that developers will agree to pay those princely sums because of o1-pro’s enhanced performance.
“O1-pro in the API is a version of o1 that uses more computing to think harder and provide even better answers to the hardest problems,” an OpenAI representative told TechCrunch. “We’re thrilled to add it to the API to provide even more dependable responses after receiving numerous requests from our developer community.”
However, first reactions to o1-pro, which has been available to ChatGPT Pro subscribers on OpenAI’s AI-powered chatbot platform since December, weren’t very favorable. Users discovered that the model had trouble solving Sudoku puzzles and was tripped up by straightforward optical-illusion jokes.
Additionally, o1-pro only marginally outperformed the standard o1 on coding and math challenges in several OpenAI internal benchmarks from late last year, though according to those benchmarks it did answer those problems more reliably.