Can OpenAI’s New o1 Models Transform STEM with Superior Reasoning?

OpenAI has recently unveiled a new family of large language models (LLMs), dubbed “o1,” which aims to deliver superior performance and accuracy in science, technology, engineering, and math (STEM) fields. This launch came as a surprise, as many anticipated the release of either “Strawberry” or GPT-5 instead. The new models, o1-preview and o1-mini, are initially available to ChatGPT Plus users and developers through OpenAI’s paid API, enabling developers to integrate these models into existing third-party applications or create new ones on top of them.
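For developers, access works through the same Chat Completions interface as earlier models. The snippet below is a minimal sketch rather than code from OpenAI’s announcement; it assumes the official openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable, with only the model name changed to o1-preview.

```python
# Minimal sketch: calling o1-preview through OpenAI's Python SDK.
# Assumes the `openai` package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        # A single, well-specified user prompt; the model does its multi-step
        # reasoning internally before returning the final answer.
        {"role": "user", "content": "Prove that the product of two odd integers is odd."},
    ],
)

print(response.choices[0].message.content)
```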

Enhanced Reasoning Capabilities

A key feature of the o1 models is their enhanced “reasoning” capability. According to Michelle Pokrass, OpenAI’s API Tech Lead, the models work through problems step by step, trying different strategies, recognizing their own mistakes, and thinking a problem through before producing an answer. In tests, the o1 models have performed on par with PhD students on some of the most challenging benchmarks, excelling in particular at reasoning-heavy problems.

Current Limitations

The o1 models are currently text-based, meaning they handle text inputs and outputs exclusively and lack the multimodal capabilities of GPT-4o, which can process images and files. They also do not yet support web browsing, restricting their knowledge to data available up to their training cutoff date of October 2023. Additionally, the o1 models are slower than their predecessors, with response times sometimes exceeding a minute.
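The longer response times have a practical consequence for API callers: request timeouts tuned for GPT-4o-class latency may be too aggressive. The sketch below assumes the openai Python SDK’s client-level timeout option; the 180-second figure is an arbitrary illustration, not a recommendation from OpenAI.

```python
# Minimal sketch: widening the client timeout to tolerate o1's longer
# "thinking" time. The 180-second value is illustrative only.
from openai import OpenAI

client = OpenAI(timeout=180.0)  # per-request timeout in seconds

response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "Find the bug in this sorting routine: ..."}],
)
print(response.choices[0].message.content)
```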

Early Feedback and Practical Applications

Despite these limitations, early feedback from developers who participated in the alpha testing phase revealed that the o1 models excel in tasks such as coding and drafting legal documents, making them promising candidates for applications that require deep reasoning. However, for applications demanding image inputs, function calling, or faster response times, GPT-4o remains the preferred choice.
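One way to read that guidance is as a routing decision inside an application. The helper below is purely illustrative; the TaskProfile fields and choose_model function are hypothetical names, not part of any OpenAI API, and simply encode the trade-offs described above.

```python
# Hypothetical routing helper: picks a model family based on the task's needs.
# Model names reflect the article; the selection logic is illustrative only.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    needs_images: bool = False        # o1 is text-only at launch
    needs_function_calling: bool = False
    latency_sensitive: bool = False   # o1 responses can exceed a minute
    deep_reasoning: bool = False      # multi-step coding, legal drafting, etc.

def choose_model(task: TaskProfile) -> str:
    """Return a model name suited to the task under the constraints above."""
    if task.needs_images or task.needs_function_calling or task.latency_sensitive:
        return "gpt-4o"
    if task.deep_reasoning:
        return "o1-preview"
    return "o1-mini"  # cheaper reasoning model for lighter workloads

print(choose_model(TaskProfile(deep_reasoning=True)))   # o1-preview
print(choose_model(TaskProfile(needs_images=True)))     # gpt-4o
```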

Pricing and Access

Pricing for the o1 models varies significantly. The flagship o1-preview is OpenAI’s most expensive model to date, at $15 per 1 million input tokens and $60 per 1 million output tokens. The o1-mini model is far more affordable at $3 per 1 million input tokens and $12 per 1 million output tokens. The new models are capped at 20 requests per minute and are currently accessible only to “Tier 5” API users, those who have spent at least $1,000 on the API and made their first successful payment at least 30 days ago. This pricing and rate limit suggest a trial phase, with OpenAI likely to adjust both as it gathers usage feedback.
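To make those figures concrete, the small calculation below applies the quoted per-million-token prices to an invented request size; note that o1’s hidden reasoning tokens are billed as output tokens, so output counts can exceed the visible reply.

```python
# Cost estimate from the per-million-token prices quoted above (USD).
PRICES = {
    "o1-preview": {"input": 15.00, "output": 60.00},   # $ per 1M tokens
    "o1-mini":    {"input": 3.00,  "output": 12.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 2,000-token prompt that produces 5,000 output tokens
# (reasoning tokens are billed as output, so totals can be larger than the reply).
print(f"o1-preview: ${request_cost('o1-preview', 2_000, 5_000):.4f}")  # ≈ $0.33
print(f"o1-mini:    ${request_cost('o1-mini',    2_000, 5_000):.4f}")  # ≈ $0.066
```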

Notable Uses During Testing

Notable uses of the o1 models during testing include generating comprehensive action plans, drafting white papers, and optimizing organizational workflows. The models have also shown promise in infrastructure design, risk assessment, coding simple programs, filling out requests-for-proposal (RFP) documents, and strategic engagement planning. For instance, some users have employed o1-preview to generate detailed white papers with citations from just a few prompts, balance a city’s power grid, and optimize staff schedules.

Future Opportunities and Challenges

While the o1 models present new opportunities, there are still areas where improvements are necessary. The slower response time and text-only capabilities are significant drawbacks for certain applications. However, the high performance in reasoning tasks makes them valuable for specific use cases, particularly in STEM-related fields.

How to Access the Models

Developers keen on experimenting with OpenAI’s latest offerings can access the o1-preview and o1-mini models through the public API, Microsoft Azure OpenAI Service, Azure AI Studio, and GitHub Models. OpenAI’s continuous development of both the o1 and GPT series ensures that there are numerous options for developers looking to build innovative applications.
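For teams going through Azure, the call shape is similar but targets a deployment name rather than the raw model ID. The sketch below assumes the openai SDK’s AzureOpenAI client; the endpoint, API version, and deployment name are placeholders to be replaced with values from your own Azure resource.

```python
# Minimal sketch: reaching an o1-preview deployment via Azure OpenAI Service.
# Endpoint, API version, and deployment name are placeholders for illustration.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-09-01-preview",  # placeholder; use the version your resource supports
)

response = client.chat.completions.create(
    model="o1-preview-deployment",  # your Azure deployment name, not the raw model ID
    messages=[{"role": "user", "content": "Outline a test plan for a payment service."}],
)
print(response.choices[0].message.content)
```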

In summary, OpenAI’s introduction of the o1 family marks a significant step in the evolution of reasoning-focused LLMs, particularly for STEM applications. While the models have some limitations in speed and input modalities, their advanced reasoning capabilities offer promising avenues for complex problem-solving tasks. As OpenAI continues to refine these models, developers can expect incremental improvements and adjustments in pricing and performance, heralding a new era of AI development.
