Email-sized artificial brain: how monkey neurons helped scientists create a tiny, efficient AI

An artificial brain that fits in an email attachment might sound like something out of a movie, but that is exactly what a group of researchers just pulled off, and the secret behind the feat involves monkey neurons. The study, published in the journal Nature, shows how scientists compressed a computer vision model from 60 million variables down to just 10,000, creating a compact artificial intelligence that still accurately simulates how part of our brain processes images.

To put that in perspective, the human brain runs on less energy than a regular light bulb, while today’s large AI systems devour absurd amounts of electricity to perform tasks we do on autopilot, like recognizing a familiar face or telling a mango from an avocado at the grocery store. That massive gap in efficiency is precisely the problem the team decided to tackle head-on.

The work was led by Ben Cowley, an assistant professor at Cold Spring Harbor Laboratory, in collaboration with researchers from Carnegie Mellon University and Princeton University. Cowley describes the result as something incredibly small, so compact it could be sent in a tweet or an email. But the reduced size is not the only highlight — the compressed model also seems to work in a way that is more similar to an actual biological brain, which could have profound implications for both neuroscience and the future of artificial intelligence.

How monkey neurons became the map for a smaller AI

The central idea behind the study starts from an elegant premise: if the human visual system already solves image recognition problems with incredible efficiency, why not use it as a reference to slim down artificial intelligence models? The researchers went beyond theory and collected real neural activity data from the visual cortex of rhesus macaques, which share a brain structure very similar to ours. These detailed recordings of how neurons respond to different visual stimuli became the guide for deciding which variables in the original model were truly important and which could be discarded without significant performance loss.

The specific focus was on a brain region containing cells called V4 neurons. According to Cowley, these neurons are responsible for encoding colors, textures, curves, and complex shapes he calls proto-objects — those intermediate visual structures the brain processes before arriving at full object recognition.

The process works roughly like this: instead of training a massive model from scratch and hoping it learns to be efficient on its own, the scientists used neural activation patterns as a sort of biological filter. Each variable in the model was evaluated based on its ability to reproduce the responses recorded in real neurons. Parts of the model that were redundant or unnecessary were progressively eliminated. The team also applied statistical techniques similar to those used to compress digital photos. The result was a drastic reduction, from 60 million to 10,000 variables, without the model losing its ability to simulate how the visual system behaves when presented with real-world images.
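To make the "biological filter" idea concrete, here is a minimal toy sketch of importance-based pruning. Everything here is an assumption for illustration: the matrices are synthetic random data, and scoring units by their best correlation with a recorded neuron is a deliberately crude stand-in for the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: responses of 1,000 model units and 50 recorded V4
# neurons to the same 200 visual stimuli (all values are synthetic).
n_stimuli, n_units, n_neurons = 200, 1000, 50
unit_responses = rng.normal(size=(n_stimuli, n_units))
neuron_responses = rng.normal(size=(n_stimuli, n_neurons))

def importance_scores(units, neurons):
    """Score each model unit by how well it correlates with any
    recorded neuron across stimuli (a crude 'biological filter')."""
    # Standardize along the stimulus axis so dot products become correlations.
    u = (units - units.mean(0)) / units.std(0)
    n = (neurons - neurons.mean(0)) / neurons.std(0)
    corr = (u.T @ n) / len(units)        # shape: (n_units, n_neurons)
    return np.abs(corr).max(axis=1)      # best match per unit

scores = importance_scores(unit_responses, neuron_responses)

# Keep only the top 1% of units, mimicking the drastic reduction the
# article describes (60 million -> 10,000 variables).
keep = scores >= np.quantile(scores, 0.99)
pruned = unit_responses[:, keep]
print(pruned.shape)  # (200, 10)
```

In the real study the pruning criterion and compression machinery are far more sophisticated; the point of the sketch is only the shape of the loop: score every variable against recorded neural activity, then discard the ones that contribute least.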


This approach represents an important shift in mindset within the artificial intelligence field. In recent years, the dominant trend has been to stack more and more parameters, more layers, and more training data, following the logic that bigger models automatically deliver better results. What this study demonstrates is that nature found far more elegant solutions over millions of years of evolution, and copying those solutions can be a powerful shortcut to creating a compact artificial intelligence that actually works.

What artificial neurons revealed about how we see the world

One of the coolest findings from the study is that, because the compressed model is so small and simple, it finally let researchers peek inside and see what its artificial neurons were actually doing. In models with millions of variables, understanding the role of each component is virtually impossible. With only 10,000, the task became much more feasible.

And what they found is fascinating. Some artificial V4 neurons responded strongly to shapes with sharp edges and lots of curves — exactly the kind of shape you find in the produce section at the supermarket. Cowley described it in a pretty fun way: when you walk into a grocery store and see all that fruit arranged on display, your V4 neurons love it. They love all those curves from the apples and oranges sitting right there 🍎🍊

Other V4 neurons in the model seemed to respond specifically to small dots in an image. For the researchers, this discovery was particularly interesting because primates, including humans, are naturally drawn to eyes. The presence of neurons specialized in detecting small dots could be a key piece of the mechanism that makes us instinctively locate and focus on the gaze of other people and animals.

This specialization of V4 neurons may help explain how human and other primate brains manage to make sense of what they see without relying on massive computational power. Each neuron does not try to process everything at once — instead, different groups specialize in specific visual aspects, creating a distributed and highly efficient system.
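This kind of "peeking inside" often boils down to probing a unit with a family of parametric stimuli and reading off its tuning curve. The sketch below does exactly that with a made-up unit; the stimulus parameter, the unit's response function, and its preferred curvature of 0.8 are all invented for illustration, not taken from the published model.

```python
import numpy as np

# Hypothetical probe: measure how a single model unit responds to a
# family of parametric stimuli (here, contours of increasing curvature).
curvatures = np.linspace(0.0, 1.0, 11)

def toy_v4_unit(curvature):
    """A made-up unit that prefers strongly curved contours,
    loosely echoing the curve-loving V4 units in the article."""
    return np.exp(-((curvature - 0.8) ** 2) / 0.05)

tuning_curve = np.array([toy_v4_unit(c) for c in curvatures])
preferred = curvatures[np.argmax(tuning_curve)]
print(f"preferred curvature: {preferred:.1f}")  # -> 0.8
```

With 10,000 variables, this probe can be run exhaustively over every unit, which is precisely why the compressed model revealed the curve-loving and dot-detecting specializations described above.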

Implications for neurological diseases and brain research

Beyond the advance in artificial intelligence, the compact model could become a valuable tool for neuroscience. Cowley points out that a model operating in a way more similar to a biological brain could help scientists study what goes wrong in neurodegenerative diseases like Alzheimer’s. If the model faithfully replicates the mechanisms of the visual system, researchers could simulate different types of neural degradation and observe how visual processing is affected, without relying exclusively on patient studies.

Mitya Chklovskii, a group leader at the Flatiron Institute of the Simons Foundation and a professor at NYU, who was not directly involved in the study, reinforces that compact biology-inspired models could lead to artificial intelligence that is more powerful and more human-like. If the model truly replicates strategies found in nature, it could help scientists understand the internal mechanisms of the human brain in a way that giant, opaque models simply cannot.

Chklovskii also makes an important observation about the limitations of current AI systems. He notes that a person can easily recognize a friend’s face in any environment and from multiple angles, even if that friend got a tan or a different haircut. AI systems still struggle with this kind of task, even when powered by supercomputers. According to him, this may be happening because current AI models were built on an understanding of the human brain that dates back to the 20th century. Since then, neuroscience has learned much more about how the brain actually works, and it might be time to update the foundations of artificial networks.

What this means for the future of AI model efficiency

Compressing a computer vision model to such a small size opens doors that go well beyond scientific curiosity. When we talk about model efficiency, we are talking about direct impact on operational costs, energy consumption, and technology accessibility. A model with 10,000 variables can run on devices with limited processing power, like budget smartphones, embedded sensors, and portable medical equipment. This means that computer vision applications that currently depend on expensive cloud servers could, in theory, run locally on everyday devices without needing an internet connection and without draining the entire battery in minutes.
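A quick back-of-the-envelope calculation shows why the size difference matters for edge devices. The 4-bytes-per-variable assumption (32-bit floats) is ours; the article does not state the model's storage format.

```python
# Rough memory footprint, assuming 4 bytes (a 32-bit float) per variable.
BYTES_PER_PARAM = 4

big_model_mb = 60_000_000 * BYTES_PER_PARAM / 1e6   # original model
small_model_kb = 10_000 * BYTES_PER_PARAM / 1e3     # compressed model

print(f"{big_model_mb:.0f} MB vs {small_model_kb:.0f} KB")
# 240 MB vs 40 KB: the small model really does fit in an email
# attachment, or in a cheap microcontroller's flash memory.
```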

Cowley also mentions some very concrete practical applications. Self-driving cars, for example, could run on less powerful computers and still correctly distinguish a pedestrian from a plastic bag blowing through the air. That distinction might seem trivial to a human, but it is exactly the kind of challenge computer vision systems face every day — and solving it with less hardware means cheaper, safer, and more accessible vehicles.

Another point worth paying attention to is the environmental angle. The massive data centers powering today’s most popular AI models consume so much energy that they have already become a topic of public debate. Tech companies are investing billions in energy infrastructure just to sustain the growth of these systems. An artificial brain that delivers comparable results with a tiny fraction of the computational resources is not just a technical achievement — it is a concrete answer to one of the industry’s biggest practical challenges. If this approach inspired by the visual system can be generalized to other types of tasks beyond image recognition, the impact on reducing AI’s global energy consumption would be significant.

Transparency and interpretability: the unexpected bonus of the compact model

The researchers also highlight that the compressed model is not just smaller, but also more interpretable. Models with millions of variables function as black boxes where it is virtually impossible to understand why a specific decision was made. With only 10,000 variables aligned to real neural activity patterns, it becomes much more feasible to investigate what each component of the model is doing and why.
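At this scale, auditing stops being sampling and becomes enumeration. The toy sketch below assumes a simple linear readout (our assumption, not the study's architecture) just to show that with 10,000 variables, the exact contribution of every single one to a prediction can be listed and ranked.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy audit: for a hypothetical linear readout, each variable's exact
# contribution to the prediction is just weight * activation.
n_vars = 10_000
weights = rng.normal(size=n_vars)
activations = rng.normal(size=n_vars)

contributions = weights * activations
prediction = contributions.sum()

# Rank every variable by the magnitude of its contribution: with only
# 10,000 of them, a full accounting of one decision is trivial to produce.
top5 = np.argsort(np.abs(contributions))[-5:][::-1]
print("top contributing variables:", top5)
```

Doing the same for a 60-million-variable network is possible in principle but rarely informative in practice, which is the asymmetry the transparency argument rests on.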


This has direct implications for areas where transparency is essential, such as medical image diagnosis, surveillance, and automotive safety systems. The compact artificial intelligence derived from this study is not just lighter — it is potentially more trustworthy because its internal mechanisms are easier to audit and understand. At a time when regulators around the world are creating legislation to require explainability from AI systems, having models that naturally lend themselves to this kind of analysis is a massive competitive advantage.

Biology as a blueprint for the next generation of AI

What makes this work especially interesting is the confirmation of a hypothesis that has been circulating behind the scenes in artificial intelligence research for years: that biology can serve as an efficient shortcut for solving computational engineering problems. The monkey neurons used as a reference in this study were not chosen at random. The primate visual cortex is one of the most studied brain structures in neuroscience, and decades of research have already mapped in detail how different regions respond to edges, textures, shapes, and complete objects. By converting that accumulated knowledge into practical constraints for model training, the scientists essentially turned years of neuroscience into direct computational efficiency gains.

This line of research also raises fascinating questions about the limits of compression. If it was possible to reduce a model from 60 million to 10 thousand variables while maintaining fidelity to the visual system, how far can we go? Are there even more compact representations that capture the essence of biological visual processing? And can this same strategy be applied to other sensory modalities, like hearing or touch? The study’s authors acknowledge there is still a long road ahead, but the initial results suggest that nature operates with a level of information compression that software engineering has barely begun to explore 🧠

As Cowley put it bluntly: if our brains have less complex models and still manage to do more than these AI systems, that tells us something about our AI systems. In other words, they could probably be smaller and simpler and still do a better job interpreting what they see.

For now, what is clear is that the race toward ever-larger models may not be the only viable path forward for artificial intelligence. The artificial brain presented in this study shows that looking inward — literally, inside the brains of primates — can reveal organizational principles that make technology both more powerful and more accessible. In a landscape where the computational cost of AI has become a real barrier for researchers, startups, and entire countries, the promise of a compact artificial intelligence that delivers robust results with minimal resources is not just exciting. It is necessary.

Rafael

Operations


