Can OpenAI’s New o1 Models Transform STEM with Superior Reasoning?

OpenAI has recently unveiled a new family of large language models (LLMs), dubbed “o1,” which aims to deliver superior performance and accuracy in science, technology, engineering, and math (STEM) fields. The launch came as a surprise, as many had anticipated the release of either “Strawberry” or GPT-5 instead. The new models, o1-preview and o1-mini, are initially available to ChatGPT Plus users and, through OpenAI’s paid API, to developers, who can integrate them into existing third-party applications or build new ones on top of them.
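
For developers, a minimal sketch of what a call to o1-preview might look like through OpenAI’s official Python SDK is shown below. The prompt is an arbitrary placeholder, and because the o1 models launched with limited parameter support, the sketch sticks to a single plain user message.

```python
# Minimal sketch: querying o1-preview via OpenAI's Python SDK.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for the cheaper variant
    messages=[
        # The prompt below is an arbitrary placeholder for a
        # reasoning-heavy task.
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

print(response.choices[0].message.content)
```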

Enhanced Reasoning Capabilities

A key feature of the o1 models is their enhanced “reasoning” capabilities. According to Michelle Pokrass, OpenAI’s API Tech Lead, these models employ a sophisticated reasoning process: trying different strategies, recognizing mistakes, and thinking through problems comprehensively. In tests, the o1 models have performed on par with PhD students on some of the most challenging benchmarks, excelling in particular at reasoning-heavy problems.

Current Limitations

The o1 models are currently text-based, meaning they handle text inputs and outputs exclusively and lack the multimodal capabilities of GPT-4o, which can process images and files. They also do not yet support web browsing, restricting their knowledge to data available up to their training cutoff date of October 2023. Additionally, the o1 models are slower than their predecessors, with response times sometimes exceeding a minute.

Early Feedback and Practical Applications

Despite these limitations, early feedback from developers who participated in the alpha testing phase revealed that the o1 models excel in tasks such as coding and drafting legal documents, making them promising candidates for applications that require deep reasoning. However, for applications demanding image inputs, function calling, or faster response times, GPT-4o remains the preferred choice.
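
One practical way to apply that guidance is a thin routing layer that reserves o1-preview for reasoning-heavy work and falls back to GPT-4o otherwise. The sketch below is purely illustrative: the TaskProfile flags are hypothetical descriptors of an incoming request, not part of any OpenAI API.

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    """Hypothetical description of a request's requirements."""
    needs_vision: bool = False       # image inputs: o1 is text-only
    needs_tools: bool = False        # function calling: unsupported on o1
    latency_sensitive: bool = False  # o1 responses can exceed a minute
    deep_reasoning: bool = False     # multi-step logic, math, hard code

def pick_model(task: TaskProfile) -> str:
    # o1 models are text-only, slower, and lack function calling,
    # so any request with those constraints falls back to GPT-4o.
    if task.needs_vision or task.needs_tools or task.latency_sensitive:
        return "gpt-4o"
    if task.deep_reasoning:
        return "o1-preview"
    return "gpt-4o"  # default to the faster general-purpose model

print(pick_model(TaskProfile(deep_reasoning=True)))  # o1-preview
print(pick_model(TaskProfile(needs_vision=True)))    # gpt-4o
```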

Pricing and Access

Pricing for the o1 models varies significantly. The flagship o1-preview model is OpenAI’s most expensive to date, at $15 per 1 million input tokens and $60 per 1 million output tokens. The o1-mini model is far more affordable at $3 per 1 million input tokens and $12 per 1 million output tokens. The new models, capped at 20 requests per minute, are currently accessible to “Tier 5” users, meaning those who have spent at least $1,000 through the API and made payments within the last 30 days. This pricing strategy and rate limit suggest a trial phase, with OpenAI likely to adjust pricing based on usage feedback.
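
To make the price gap concrete, a quick back-of-the-envelope calculation at the published rates looks like the following; the token counts in the example are arbitrary.

```python
# Published per-1M-token rates from the announcement (USD).
PRICES = {
    "o1-preview": {"input": 15.00, "output": 60.00},
    "o1-mini":    {"input": 3.00,  "output": 12.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one API call at the listed per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Arbitrary example: a 2,000-token prompt yielding a 5,000-token answer.
print(f"o1-preview: ${request_cost('o1-preview', 2_000, 5_000):.3f}")  # $0.330
print(f"o1-mini:    ${request_cost('o1-mini', 2_000, 5_000):.3f}")     # $0.066
```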

Notable Uses During Testing

Notable uses of the o1 models during testing include generating comprehensive action plans and white papers and optimizing organizational workflows. The models have also shown promise in infrastructure design, risk assessment, coding simple programs, filling out request-for-proposal (RFP) documents, and strategic engagement planning. For instance, some users have employed o1-preview to generate detailed white papers with citations from just a few prompts, balance a city’s power grid, and optimize staff schedules.

Future Opportunities and Challenges

While the o1 models present new opportunities, there are still areas where improvements are necessary. Their slower response times and text-only input and output are significant drawbacks for certain applications. However, their high performance on reasoning tasks makes them valuable for specific use cases, particularly in STEM-related fields.

How to Access the Models

Developers keen on experimenting with OpenAI’s latest offerings can access the o1-preview and o1-mini models through the public API, Microsoft Azure OpenAI Service, Azure AI Studio, and GitHub Models. OpenAI’s continuous development of both the o1 and GPT series ensures that there are numerous options for developers looking to build innovative applications.
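
For the Azure route, requests target a named deployment rather than a raw model name. The sketch below uses the AzureOpenAI client from the same openai Python package; the endpoint, API version string, and deployment name are placeholder assumptions to be replaced with your own resource’s values.

```python
import os
from openai import AzureOpenAI

# Sketch of reaching an o1 deployment via Azure OpenAI Service.
# Endpoint, API version, and deployment name are placeholders;
# substitute the values from your own Azure resource.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-09-01-preview",  # assumed; check your service docs
)

response = client.chat.completions.create(
    model="my-o1-preview-deployment",  # Azure uses your deployment name here
    messages=[{"role": "user", "content": "Outline a test plan for a REST API."}],
)
print(response.choices[0].message.content)
```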

In summary, OpenAI’s introduction of the o1 family marks a significant step in the evolution of reasoning-focused LLMs, particularly for STEM applications. While the models have some limitations in speed and input modalities, their advanced reasoning capabilities offer promising avenues for complex problem-solving tasks. As OpenAI continues to refine these models, developers can expect incremental improvements and adjustments in pricing and performance, heralding a new era of AI development.
