AI Struggles with Learning Flexibility, Researchers Seek Cost-Effective Fixes

A recent study conducted by the University of Alberta has revealed a significant limitation in artificial intelligence (AI) models, particularly those trained using deep learning techniques. The study found that these AI models struggle to learn new information without having to start from scratch, an issue that underscores a fundamental flaw in current AI systems. The primary problem is the loss of plasticity in the "neurons" of these models when new concepts are introduced. This lack of adaptability means that AI systems cannot learn new information without undergoing complete retraining. The retraining process is both time-consuming and financially burdensome, often costing millions of dollars. This inherent rigidity in learning poses a considerable challenge to achieving artificial general intelligence (AGI), which would allow AI to match human versatility and intelligence. Despite the concerning findings, the researchers offered a glimmer of hope by developing an algorithm capable of "reviving" some of the inactive neurons, indicating potential solutions for the plasticity issue. Nonetheless, solving the problem remains complex and costly.

Challenges of Deep Learning-Based AI Models

One of the most glaring issues identified in the study is the inflexibility inherent in deep learning-based AI models. Unlike humans, who can adapt and assimilate new information with relative ease, AI systems find it remarkably difficult to acquire new knowledge without compromising what they have already learned. When tasked with integrating new data, these models are often forced to undergo complete retraining. That retraining is not a minor inconvenience; it is a significant business expense, often requiring millions of dollars and vast computational resources. For companies relying on AI, this translates into economic and operational inefficiency, making it difficult to justify frequent updates or changes to their AI systems.

Furthermore, the loss of neural plasticity in AI models makes it difficult for them to achieve what researchers term lifelong learning: the ability to continuously acquire and apply new knowledge and skills over time. For AI, this would mean adapting to new data sources or user inputs in real time without restarting the learning process from scratch. The University of Alberta study underscores that current AI technology is far from this goal. The economic implications are substantial: organizations face continual expenditure on retraining AI models, which stifles innovation and hinders the widespread adoption of AI technologies. This challenge is a roadblock on the path toward artificial general intelligence, a long-term objective for many researchers in the field.
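The forgetting problem described above can be seen even in a deliberately minimal, single-parameter model: fit it to one task, then train it only on a second task, and its performance on the first collapses. This toy sketch is purely illustrative and is not the study's experimental setup; the tasks, learning rate, and step counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def mse(w, X, y):
    """Mean squared error of the one-parameter model y_hat = w * x."""
    return float(np.mean((X * w - y) ** 2))

X = rng.normal(size=200)
y_a = 2.0 * X      # task A: the true slope is 2
y_b = -3.0 * X     # task B: the true slope is -3

w, lr = 0.0, 0.05

# Train on task A until the parameter settles near 2.
for _ in range(200):
    w -= lr * np.mean(2 * X * (X * w - y_a))
loss_a_before = mse(w, X, y_a)   # essentially zero: task A learned

# Now train only on task B; the single weight is overwritten.
for _ in range(200):
    w -= lr * np.mean(2 * X * (X * w - y_b))
loss_a_after = mse(w, X, y_a)    # large: task A has been "forgotten"

print(loss_a_before, loss_a_after)
```

In a real network the effect is subtler, since capacity is shared across many weights, but the mechanism is the same: gradient updates for the new task move parameters away from the values the old task needed.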

Preliminary Solutions and Future Directions

Despite the sobering diagnosis, the researchers did not leave the problem entirely unanswered. As a first step, they developed an algorithm capable of "reviving" some of the inactive neurons, restoring a measure of the plasticity that deep learning models lose as training progresses. This suggests that the rigidity uncovered by the study is not necessarily permanent: if dormant units can be detected and reset, models may be able to keep learning without a full restart. Even so, addressing the problem remains intricate and expensive, and no general remedy yet exists. Closing that gap will be a central challenge in the development of adaptable AI systems and, ultimately, artificial general intelligence.
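The article does not describe how the researchers' reviving algorithm actually works, so the following toy sketch only illustrates the general idea: detect hidden units that never activate on a batch of inputs and reinitialize their incoming weights. The layer sizes, the dormancy criterion, and the reinitialization scheme here are all illustrative assumptions, not the study's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Toy hidden layer: 8 inputs -> 16 ReLU units.
W = rng.normal(0.0, 0.5, size=(8, 16))
b = np.zeros(16)

# Simulate plasticity loss: drive a few units' biases so far
# negative that they can no longer fire on any realistic input.
b[[2, 7, 11]] = -100.0

X = rng.normal(size=(64, 8))     # a batch of inputs
H = relu(X @ W + b)              # hidden activations

# Dormancy criterion (an assumption): a unit that never fires
# anywhere on the batch is considered dead.
dormant = np.where(H.max(axis=0) == 0.0)[0]
print(sorted(dormant.tolist()))  # -> [2, 7, 11]

# "Revive" dormant units by reinitializing their incoming
# weights and biases, restoring their ability to learn.
W[:, dormant] = rng.normal(0.0, 0.5, size=(8, dormant.size))
b[dormant] = 0.0

H2 = relu(X @ W + b)
print((H2.max(axis=0) > 0).all())  # every unit fires again
```

A practical version would run such a check periodically during training and reset only a small fraction of units at a time, so the network's existing knowledge is not disturbed more than necessary.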
