Artificial intelligence systems, and large language models (LLMs) in particular, have become almost ubiquitous. The implications are many. Among them: serious concerns about massive energy consumption. The initial training of GPT-3 required as much energy as powering 120 houses for a year. That’s before adding the energy required for the chatbot to respond to users’ prompts (1–3). GPT-4 required an estimated 50 times as much energy as its predecessor to train, and although OpenAI has not divulged official figures, GPT-5 likely required much more.
To date, most efforts to curb the energy consumption of AI systems have focused on developing more efficient algorithms, drawing more energy from sustainable sources, or building smaller language models. For DeepSeek, which debuted last January, researchers in China built a model that activates only a fraction of its parameters for each query. Other efforts focus on packing more transistors onto smaller chips, which shortens the distance data must travel and enables more parallel computing.
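To make that sparse-activation idea concrete, here is a minimal sketch in Python of mixture-of-experts-style routing, the general technique behind activating only part of a model per query. The dimensions, expert count, routing rule, and names (moe_layer, router, TOP_K) are all hypothetical simplifications for illustration, not DeepSeek’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
D = 64          # token embedding dimension
N_EXPERTS = 8   # total expert networks in the layer
TOP_K = 2       # experts actually activated per token

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)  # stand-in for learned gating weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a token embedding x through only TOP_K of the N_EXPERTS experts."""
    logits = x @ router                   # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]     # pick the best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts only
    # Only TOP_K expert matrices are multiplied; the rest stay idle, so the
    # compute per token scales with TOP_K / N_EXPERTS of the full model.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
print(moe_layer(token).shape)  # (64,) -- one token processed using 2 of 8 experts
```

In this toy setup, each token touches 2 of 8 experts, so the arithmetic per query is roughly a quarter of what a dense model of the same total size would perform; that ratio, not any clever hardware, is where the energy savings of sparse activation come from.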
But many researchers are developing a different kind of computing architecture, one inspired by the efficient mechanisms of the brain. Dubbed neuromorphic computing, it was first proposed decades ago but has recently seen a resurgence of interest. Some believe neuromorphic approaches could help tackle the formidable problem of AI’s rampant energy consumption.
