Can OpenAI’s New o1 Models Transform STEM with Superior Reasoning?

OpenAI has recently unveiled a new family of large language models (LLMs), dubbed “o1,” which aims to deliver superior performance and accuracy in science, technology, engineering, and math (STEM) fields. The launch came as a surprise, as many anticipated the release of either “Strawberry” or GPT-5 instead. The new models, o1-preview and o1-mini, are initially available to ChatGPT Plus users and, through OpenAI’s paid API, to developers, who can integrate them into existing third-party applications or build new ones on top of them.

Enhanced Reasoning Capabilities

A key feature of the o1 models is their enhanced “reasoning” capabilities. According to Michelle Pokrass, OpenAI’s API Tech Lead, these models employ a sophisticated reasoning process that involves trying different strategies, recognizing mistakes, and engaging in comprehensive thinking. In tests, o1 models have demonstrated performance on par with PhD students on some of the most challenging benchmarks, particularly excelling in reasoning-related problems.

Current Limitations

The o1 models are currently text-based, meaning they handle text inputs and outputs exclusively and lack the multimodal capabilities of GPT-4o, which can process images and files. They also do not yet support web browsing, restricting their knowledge to data available up to their training cutoff date of October 2023. Additionally, the o1 models are slower than their predecessors, with response times sometimes exceeding a minute.

Early Feedback and Practical Applications

Despite these limitations, early feedback from developers who participated in the alpha testing phase revealed that the o1 models excel in tasks such as coding and drafting legal documents, making them promising candidates for applications that require deep reasoning. However, for applications demanding image inputs, function calling, or faster response times, GPT-4o remains the preferred choice.

Pricing and Access

Pricing for the o1 models varies significantly. The flagship o1-preview is OpenAI’s most expensive model to date, costing $15 per 1 million input tokens and $60 per 1 million output tokens. By contrast, o1-mini is more affordable at $3 per 1 million input tokens and $12 per 1 million output tokens. The new models, capped at 20 requests per minute, are currently accessible only to “Tier 5” users—those who have spent at least $1,000 through the API and made a payment within the last 30 days. This pricing strategy and rate limit suggest a trial phase in which OpenAI will likely adjust pricing based on usage feedback.
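To make the price gap concrete, here is a minimal back-of-the-envelope cost estimator built from the per-million-token figures quoted above. The `estimate_cost` helper is a hypothetical illustration, not part of any official OpenAI billing API, and actual bills also depend on hidden reasoning tokens that o1 models generate and charge as output.

```python
# USD per 1 million tokens: (input price, output price), per the figures above.
PRICES = {
    "o1-preview": (15.00, 60.00),
    "o1-mini": (3.00, 12.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request from its token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# Example: 2,000 input tokens and 10,000 output tokens
# (reasoning models tend to emit long outputs).
print(estimate_cost("o1-preview", 2_000, 10_000))  # 0.63
print(estimate_cost("o1-mini", 2_000, 10_000))     # 0.126
```

The same request is roughly five times cheaper on o1-mini, which is why OpenAI positions it for high-volume coding and math workloads.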

Notable Uses During Testing

Notable uses of the o1 models during testing include generating comprehensive action plans and white papers and optimizing organizational workflows. The models have also shown promise in infrastructure design, risk assessment, coding simple programs, filling out request-for-proposal (RFP) documents, and strategic engagement planning. For instance, some users have employed o1-preview to generate detailed white papers with citations from just a few prompts, balance a city’s power grid, and optimize staff schedules.

Future Opportunities and Challenges

While the o1 models present new opportunities, there are still areas where improvements are necessary. The slower response time and text-only capabilities are significant drawbacks for certain applications. However, the high performance in reasoning tasks makes them valuable for specific use cases, particularly in STEM-related fields.

How to Access the Models

Developers keen on experimenting with OpenAI’s latest offerings can access the o1-preview and o1-mini models through the public API, Microsoft Azure OpenAI Service, Azure AI Studio, and GitHub Models. OpenAI’s continuous development of both the o1 and GPT series ensures that there are numerous options for developers looking to build innovative applications.
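As a starting point, the sketch below shows what a call to o1-preview through OpenAI’s official Python SDK might look like. It assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set; the `build_request` and `ask` helpers are illustrative names, and model availability, rate limits, and supported message roles may change.

```python
def build_request(prompt: str, model: str = "o1-preview") -> dict:
    """Assemble a chat-completion payload. At launch the o1 models accept
    plain text user messages only (no images, no function calling)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    """Send the prompt to the API and return the model's reply.
    Requires network access and a valid API key."""
    from openai import OpenAI  # requires the `openai` package

    client = OpenAI()
    response = client.chat.completions.create(**build_request(prompt))
    return response.choices[0].message.content

# Usage (live call, needs network + API key):
#   print(ask("Prove that the sum of two even integers is even."))
```

Keeping payload construction separate from the network call makes it easy to swap in o1-mini, or fall back to GPT-4o where image inputs or function calling are required.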

In summary, OpenAI’s introduction of the o1 family marks a significant step in the evolution of reasoning-focused LLMs, particularly for STEM applications. While the models have some limitations in speed and input modalities, their advanced reasoning capabilities offer promising avenues for complex problem-solving tasks. As OpenAI continues to refine these models, developers can expect incremental improvements and adjustments in pricing and performance, heralding a new era of AI development.
