An insightful look into 'Sakana AI: An Evolved Universal Transformer Memory'

Sakana AI: An Evolved Universal Transformer Memory

Sakana AI has unveiled its research on Evolved Universal Transformer Memory, a significant advancement in memory systems for artificial intelligence. Developed under Japan's Ministry of Economy, Trade and Industry's GENIAC supercomputing grant, the work draws on human memory mechanisms to enhance transformer foundation models. Sakana AI's innovation, Neural Attention Memory Models (NAMMs), enables transformers to store and retrieve critical information efficiently, boosting performance across a range of tasks without additional training. This memory system not only refines language models but also demonstrates cross-domain applicability, extending benefits to fields like computer vision and reinforcement learning. By training NAMMs with evolutionary optimization, Sakana AI has outperformed traditional handcrafted methods while demonstrating zero-shot transferability and task-specific memory optimization. This pioneering work paves the way for more efficient and adaptable foundation models.


Revolutionizing Transformer Memory: A Dive into Sakana AI’s Latest Breakthrough

On December 10, 2024, Sakana AI released the second installment in its series of blog posts presenting results from research funded by the Japanese Ministry of Economy, Trade and Industry's GENIAC supercomputing grant. The announcement describes a sophisticated memory system for transformers that advances both efficiency and cross-domain applicability, an area that resonates with Jengu.ai's own commitment to pioneering innovation in automation, AI, and process mapping.

Inspiration from Nature: Transforming Foundation Models

Inspired by the selective nature of human memory, Sakana AI's research sets out to rethink how transformer foundation models manage information. The company's recent projects have demonstrated notable performance gains through evolutionary model merging, along with new agentic skills and novel applications of large language models (LLMs) in AI research. Its latest breakthrough, detailed in the paper "An Evolved Universal Transformer Memory," introduces a memory system that lets transformers retain crucial information while efficiently pruning unnecessary data, resulting in smarter and faster models.

"Our learned memories not only boost both performance and efficiency of existing transformers but are also universally transferable across different foundation models, even beyond language, without any re-training!"

Addressing Limitations: The Evolution of Neural Attention Memory Models (NAMMs)

Memory is intrinsic to cognitive function, allowing for selective storage and retrieval of information. Transformer models, however, traditionally lack this nuance, leading to inefficiencies on long-context tasks. Sakana AI's innovation introduces Neural Attention Memory Models (NAMMs), which change how transformers manage their context, delivering stronger results across a range of tasks while using less memory.

NAMMs optimize transformers to address the challenges of extended context processing, ensuring they remember relevant details while discarding redundancies. This advancement not only elevates performance but also extends the applicability of language-trained NAMMs to domains such as vision and reinforcement learning without requiring additional training.
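As a rough illustration of the underlying idea (not Sakana AI's actual implementation), the sketch below shows how a per-token retention score could be used to shrink a transformer's KV cache. The `prune_kv_cache` helper, the random scores, and the threshold are hypothetical placeholders; in a NAMM the scores would come from the learned memory model.

```python
import torch

def prune_kv_cache(keys, values, scores, threshold=0.0):
    """Drop cached tokens whose retention score falls below a threshold.

    keys/values: [batch, heads, seq_len, head_dim] KV-cache tensors
    scores:      [seq_len] per-token retention scores (a NAMM would learn these)
    """
    keep = scores > threshold                      # boolean mask over the sequence
    return keys[:, :, keep, :], values[:, :, keep, :]

# Example: a 12-token cache where only higher-scoring tokens survive
keys = torch.randn(1, 8, 12, 64)
values = torch.randn(1, 8, 12, 64)
scores = torch.randn(12)                           # placeholder scores for illustration
pruned_k, pruned_v = prune_kv_cache(keys, values, scores)
print(pruned_k.shape)                              # shorter sequence dimension, same heads and dims
```

The payoff is that attention at later steps only runs over the retained tokens, so memory and compute shrink with the pruned cache.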

The Technical Backbone: Evolutionary Optimization and Attention Matrices

In contrast to earlier static, hand-designed strategies, NAMMs employ evolutionary optimization, enabling them to refine memory management autonomously. This approach uses the attention matrices to determine the relative importance of each input token, supporting effective memory retention across a model's layers and allowing the learned memory model to transfer to other transformer models.
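Because keep/drop decisions over cached tokens are discrete and non-differentiable, gradient-free evolutionary search is a natural fit. The toy loop below is a generic hill-climbing evolution strategy written purely for illustration; the optimizer and fitness function Sakana AI actually used are described in their paper and may differ.

```python
import numpy as np

def evolve(fitness_fn, dim, pop_size=32, generations=100, sigma=0.1):
    """Toy (1+lambda)-style evolutionary loop: perturb the current best
    parameter vector and keep whichever candidate scores highest."""
    best = np.zeros(dim)                      # candidate NAMM parameters
    best_fit = fitness_fn(best)
    for _ in range(generations):
        candidates = best + sigma * np.random.randn(pop_size, dim)
        fits = np.array([fitness_fn(c) for c in candidates])
        if fits.max() > best_fit:
            best, best_fit = candidates[fits.argmax()], fits.max()
    return best

# In the real setting, fitness_fn would run the frozen transformer with the
# candidate memory model and return downstream task performance.
# Quick self-contained demo with a dummy objective:
best_params = evolve(lambda p: -np.sum(p ** 2), dim=8, generations=50)
```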

NAMMs convert each token's attention history into a spectrogram, compress that information with an exponential moving average, and use a learned classifier to decide which tokens to keep in memory. This execution methodology echoes Jengu.ai's philosophy of pushing boundaries in AI and process enhancement.
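Below is a minimal sketch of that three-stage pipeline, assuming each cached token's received attention is available as a time series over decoding steps. The `namm_scores` function, its feature sizes, and the small `mlp` classifier are illustrative assumptions, not the paper's implementation.

```python
import torch

def namm_scores(attn_over_time, mlp, n_fft=32, ema_decay=0.9):
    """Hypothetical NAMM-style scoring pipeline (a sketch, not Sakana AI's code).

    attn_over_time: [num_tokens, num_steps] attention each cached token received
                    at successive decoding steps (assumes num_steps >= n_fft).
    Returns one retention score per cached token.
    """
    scores = []
    for token_attn in attn_over_time:
        # 1. Spectrogram of the token's attention history (frequency bins x frames)
        spec = torch.stft(token_attn, n_fft=n_fft, return_complex=True).abs()
        # 2. Compress the time axis with an exponential moving average
        ema = spec[:, 0]
        for t in range(1, spec.shape[1]):
            ema = ema_decay * ema + (1 - ema_decay) * spec[:, t]
        # 3. Learned classifier maps the compressed features to a retention score
        scores.append(mlp(ema))
    return torch.stack(scores).squeeze(-1)

# Example wiring: a tiny classifier whose weights would be found by evolution
mlp = torch.nn.Linear(32 // 2 + 1, 1)            # n_fft // 2 + 1 frequency bins -> one score
attn_history = torch.rand(12, 64)                # 12 cached tokens, 64 decoding steps
print(namm_scores(attn_history, mlp).shape)      # torch.Size([12])
```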

Performance and Adaptability: Evaluating NAMMs in Varied Domains

Extensive evaluations place NAMMs on top of the Llama 3 8B base model, showing superior results on benchmarks such as LongBench, InfiniteBench, and ChouBun. Compared against hand-designed memory management methods such as H₂O and L₂, NAMMs consistently come out ahead, improving performance while also reducing context size rather than trading one for the other.

The zero-shot transferability of NAMMs is striking: memory models trained only on language transfer to other models and modalities, including the LLaVA-NeXT-Video vision-language model and the Decision Transformer for reinforcement learning. This ability to prune non-essential data and focus on critical information aligns with Jengu.ai's dedication to refining AI's adaptability and efficiency in multidisciplinary applications.

Future Trajectories: Envisioning Continuous Evolutionary Advances

The introduction of Neural Attention Memory Models represents only the beginning of potential advancements in transformer memory systems. By integrating evolution with learning processes, future generations of transformers may experience unprecedented efficiency across longer data sequences and more complex tasks.

Looking ahead, Sakana AI anticipates further exploring NAMMs' capabilities, potentially enriching the training methodologies of future foundation models through iterative evolution and learning strategies, much like the complex evolution of human cognition itself.

Jengu.ai acknowledges the potential of these advancements, envisioning a future where AI memory systems reach new pinnacles of efficiency and adaptability, marking another significant step towards revolutionizing automation and process mapping globally.

"We thank the New Energy and Industrial Technology Development Organization (NEDO) and the Japanese Ministry of Economy, Trade and Industry (METI) for selecting us for the Generative AI Accelerator Challenge (GENIAC), making this breakthrough possible." - Sakana AI

Sakana AI invites aspiring innovators to glimpse into the future of AI and join their journey. Career opportunities are available for those inspired to contribute to transformational AI research and development.


Contact us to see how we can help.