
Google has introduced a new neural network architecture called Titans, designed to let machine learning models process very long sequences efficiently while maintaining a long-term memory, loosely inspired by how the human brain remembers. The work is led by a Google Research team headed by Ali Behrouz. The main goal is to combine short-term and long-term memory in a single model, overcoming the fixed-context limitations of traditional architectures.
The team's experiments show that Titans performs strongly on language modeling, data analysis, genomics, and reasoning tasks that require long-term memory.
The headline feature of the new architecture is its ability to handle context windows of millions of tokens without losing accuracy. The memory module in Titans decides what information to retain and what to discard, borrowing a principle from human memory: a short-term component (attention) handles the current context, while a long-term module stores information worth keeping, prioritizing unexpected ("surprising") inputs and gradually forgetting the rest.
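The retain-or-discard idea can be pictured with a toy sketch. This is an illustration only, not Google's actual Titans update rule: here the memory is a single vector, "surprise" is simply the prediction error on a new input, and the decay and learning rates are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 8
memory = np.zeros(dim)   # long-term memory state
decay = 0.05             # forgetting rate (hypothetical value)
lr = 0.5                 # write strength (hypothetical value)

def update(memory, x):
    error = x - memory                  # prediction error = "surprise"
    surprise = np.linalg.norm(error)    # scalar surprise score
    gate = surprise / (1.0 + surprise)  # in [0, 1): big surprise -> big write
    new_memory = (1 - decay) * memory + lr * gate * error
    return new_memory, surprise

# A familiar input (close to what memory predicts) barely changes it;
# a novel input triggers a strong write.
familiar = np.zeros(dim)
novel = rng.normal(0, 3, dim)

memory, s1 = update(memory, familiar)
memory, s2 = update(memory, novel)
print(s1 < s2)  # the novel input is more surprising
```

The point of the sketch is the gating: updates are scaled by how unexpected the input is, so routine inputs leave the memory nearly untouched while novel ones get written in, and the decay term slowly forgets what is never reinforced.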
Google has proposed three versions of Titans: MAC (Memory as a Context), MAG (Memory as a Gate), and MAL (Memory as a Layer), which differ in how the long-term memory is wired into the model. In several benchmarks, Titans has outperformed traditional Transformer architectures and modern recurrent alternatives.
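The memory-as-gating variant, for example, can be pictured as blending two branches with an elementwise gate: a short-term branch (attention over the recent window) and a long-term memory branch. The shapes and the sigmoid gate below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
dim = 4

attn_out = rng.normal(size=dim)     # short-term (attention) branch output
memory_out = rng.normal(size=dim)   # long-term memory branch output
gate_logits = rng.normal(size=dim)  # in practice produced by a learned layer

gate = sigmoid(gate_logits)  # per-dimension mixing weight in (0, 1)
output = gate * attn_out + (1 - gate) * memory_out

print(output.shape)  # (4,)
```

Because the gate is computed per dimension, the model can lean on recent context for some features and on long-term memory for others, rather than choosing one source wholesale.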
For comparison, the original ChatGPT (GPT-3.5) had a context window of roughly 4,000 tokens, enough to analyze about 3,000 English words in one session. Titans scales to context windows of over 2 million tokens, enabling it to work with entire books.
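The gap is easy to put in perspective with some back-of-envelope arithmetic. The words-per-token ratio and book length below are common rules of thumb, not figures from Google.

```python
WORDS_PER_TOKEN = 0.75  # typical ratio for English text (assumption)
BOOK_WORDS = 90_000     # length of an average novel (assumption)

small_ctx = 4_000        # early-ChatGPT-scale context, in tokens
titans_ctx = 2_000_000   # Titans-scale context, in tokens

ratio = titans_ctx // small_ctx                 # how many times larger
small_words = int(small_ctx * WORDS_PER_TOKEN)  # words per small window
books = titans_ctx * WORDS_PER_TOKEN / BOOK_WORDS  # whole books per window

print(ratio)        # 500x larger window
print(small_words)  # ~3,000 words
print(books)        # roughly 16-17 average novels
```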