Lithionic memristors & future neuromorphic computing - Ericsson

2022-06-10

6G networks and devices will increasingly rely on AI, requiring energy-efficient computing. Here at Ericsson, we recognize neuromorphic computing as a promising paradigm for this. We have joined forces with researchers at MIT who, in turn, recognize next-generation lithium-based oxides as key building blocks of neuromorphic computing.

Senior Researcher, Device platform research

Senior Researcher, Device platform research

Junior Research Group Leader, Technical University of Munich

The past decade has witnessed fast growth in the development and use of artificial intelligence (AI) in many domains. An important driver for this is the evolution of graphics processing units and domain-specific hardware accelerators. So far, this growth has been enabled by mature complementary metal-oxide-semiconductor (CMOS) technology.

Today, the emerging paradigm of computational memory, sometimes referred to as in-memory or in-situ computing, enables implementations of AI applications that can be more efficient than the conventional ones. The key to this higher efficiency is the capability of computational memory to both store data and perform computing operations like multiplications and additions on the data. That way, data no longer need to be moved from storage to computing elements, which is a major source of inefficiency in conventional implementations of AI applications.

The Massachusetts Institute of Technology (MIT) has chosen non-CMOS ‘lithionic’ memristive devices (’lithionic’ comes from lithium-based oxides) as promising candidates for the key building blocks of such computational memories. In doing so, MIT researchers have embarked on a mission to extend the use of lithium, an element used primarily in lithium-ion batteries, from the energy storage domain into data storage and processing domains. As part of our collaboration with MIT, we are researching use cases and underlying computing architectures based on such lithionic memristive devices.

Not surprisingly, the focus of our research falls on 6G networks and their connected 6G devices, which are expected to increasingly rely on AI and machine learning (ML). For instance, so-called environment sensing is expected to be prevalent in the future, which implies that more and more sensor data will be produced and processed. Memristive computational memory can help process data in close proximity to the sensor, minimizing data movement not only inside 6G devices but also inside 6G networks. This kind of local data processing would both save energy and help preserve sensor-data privacy, contributing to improved user privacy. Motivated by this and other examples, our ambition is to use this research as a base for efficient implementation of future AI applications and thus contribute to the sustainable progress of computing beyond CMOS in the 6G era.

Computational memory using lithionic memristive devices is still at an exploratory stage and has its challenges. The MIT-Ericsson collaboration includes four research groups at MIT and provides a unique opportunity to tackle the main challenges in an interdisciplinary, holistic way: across devices, systems, and applications.

Ericsson provides its expertise in wireless connectivity as well as in key areas like media processing, AI, ML, and hardware architectures. We use this momentum to explore the use cases for lithionic memristive technology and investigate underlying memristive architectures and techniques for achieving desired performance characteristics.

In the near future, we might witness the birth of lithionic memristive technology as a new enabler for efficient neuromorphic systems, advancing the field of neuromorphic computing. Before we dive into the specifics, let’s cover the basics.

Take a quick look at some of the key terminology used in this blog post

A memristive device is a practical implementation of an ideal memristor (memory resistor), which is considered a fourth fundamental electrical component, alongside the resistor, capacitor, and inductor. A memristive device can only mimic the key characteristics of the ideal memristor, notably the so-called pinched current-voltage hysteresis loop. Despite some controversy about the terminology (memristor vs. memristive device), the community uses both terms interchangeably, as do we.

One way to interpret “memory resistor” is to think of a device that can change its resistance under some external stimuli and remember this change. In this sense, memristors are similar to non-volatile memory technologies like Flash, albeit comprising different materials.

Memristors can be used both for storing data and for performing computation on the data in the analog domain, for example, using Ohm’s law to perform multiplication. The amount of data (how many bits) each memristor can store depends on multiple factors and can be quite large, for example, eight bits per memristor. The more bits per memristor the better, but there is a trade-off between the number of bits and other parameters like the switching speed, retention, and endurance.
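As a rough sketch (the conductance range and linear mapping below are assumptions for illustration, not measured device values), storing a multi-bit value amounts to programming one of 2^bits discrete conductance levels:

```python
def weight_to_conductance(w, bits=8, g_min=1e-6, g_max=1e-4):
    """Map a normalized weight in [0, 1] to one of 2**bits discrete
    conductance levels (in siemens) within an assumed device range."""
    levels = 2 ** bits                     # e.g. 256 levels for 8 bits
    step = (g_max - g_min) / (levels - 1)  # conductance difference per level
    level = round(w * (levels - 1))        # nearest programmable level
    return g_min + level * step

# With 8 bits, 256 distinct weights can be stored in a single device;
# fewer bits per device would relax switching, retention, and endurance demands.
g = weight_to_conductance(0.5)
```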

One of the key arguments for building memristors from materials other than those used in conventional memory technologies like Flash is that they can scale better (e.g., down to 10 nm or even below). For comparison, CMOS has already scaled below 10 nm, and 5-nm chips can be found in high-volume products. In addition to scaling better than Flash, memristors could also offer superior operational characteristics in terms of faster switching and higher endurance. If we add to these benefits long retention time and low power dissipation, it is clear that memristors are great candidates for applications in future computing systems.

Figure 1: Illustration of an oxygen-ion-based memristor (left) and the new Li-based memristor (right).

The basic structure of a memristor comprises a transition metal oxide sandwiched between two electrodes, as in Figure 1 (left). By applying an electric field, the resistance of the oxide can be changed by growing or dissolving so-called conductive filaments inside the oxide. This process employs oxygen, silver, or copper ions and depends on multiple factors like the magnitude and polarity of the applied electric field and the mobility of the ions. The top-down arrow indicates a potential direction of current flow. Different implementations of the process can trade off the memristor characteristics mentioned above (e.g., the number of bits, switching speed, retention, endurance).

Above all else, a few important challenges make it difficult to advance the development of memristors that employ oxygen, silver, or copper ions and rely on conductive filaments.

To address these challenges, innovative solutions in the field of material science are needed.

MIT researchers have focused on new materials, namely, lithium-based oxides. Such ‘lithionic’ memristors employ lithium ions (‘Li+’, see Figure 1 (right)) and processes called phase separation and metal-to-insulator transition to change the resistance of the oxide. Lithionic memristors promise to overcome some of the challenges of the state-of-the-art memristors mentioned in the previous section. In addition, lithium ions have higher mobility than oxygen, silver, or copper ions, which can potentially enable micro- to nanosecond switching.

Despite the vast knowledge of lithium gained in battery technology, the application of lithium-based oxides in memristive technology is not yet well understood. For instance, the fabrication process of lithionic memristors for scalability and compatibility with CMOS is yet to be mastered. The development of lithionic memristors can also be guided by insights from early-stage system-level evaluations in terms of latency, power, and area. Obtaining such insights is an active area of research, too.

Memristive devices can be instrumental for future neuromorphic systems, a form of computing that could be critical for AI development.

Neuromorphic computing is synonymous with brain-inspired computing. Today there is a huge gap between conventional computing and the human brain, and neuromorphic computing fits in this gap. In a simplified interpretation, neuromorphic computing encompasses three key properties:

- Power efficiency
- Responsiveness
- Adaptation

Let’s walk through each of these properties before we discuss how memristors (in general, not only lithionic) can enable neuromorphic computing.

To give you an idea of the order of power efficiency targeted by neuromorphic computing, here is a simplified illustrative comparison:

The human brain performs many complex cognitive tasks with a slim power budget of around 20 watts, several orders of magnitude less than what the most power-efficient digital computing systems would require to accomplish such tasks.

Making some simplifying assumptions and doing the math, we can estimate that the volume of the brain could accommodate about 22,000 state-of-the-art systems-on-chip (SoCs).

At the same time, a single such SoC can have a power budget similar to that of the entire brain.
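The math above can be sketched with one set of illustrative assumptions (the brain volume and SoC package dimensions below are rough, hypothetical figures chosen only to show the order of magnitude):

```python
# Back-of-envelope sketch of the volume comparison. The numbers are
# illustrative assumptions, not measurements: a ~1.26-litre brain and
# a thin SoC package of roughly 12 mm x 12 mm x 0.4 mm.
brain_volume_mm3 = 1.26e6          # ~1.26 litres
soc_volume_mm3 = 12 * 12 * 0.4     # assumed thin SoC package

n_socs = brain_volume_mm3 / soc_volume_mm3   # on the order of 22,000
```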

This drastic gap in power efficiency can be to some extent explained as follows: Unlike the brain, conventional computers separate memory and computing into two distinct domains. Data movement between these two domains is required whenever some computation is performed, and this movement results in excessive power consumption.

Thus, the first inspiration that neuromorphic computing borrows from nature, specifically from the brain’s synapses, is to co-locate data and processing (recall the computational memory mentioned at the beginning of this blog post). This feature is particularly useful for running AI/ML models like deep neural networks, which are notorious for their large number of model parameters and require both large memory and large amounts of processing.

Let us leverage another comparison to illustrate responsiveness:

The human brain has around 86 billion neurons where each neuron is connected via synapses to about 10,000 other neurons, resulting in about 860 trillion synapses in total.

The strength of synaptic connections represents information, and so, as we have mentioned, the brain performs ‘processing’ at the same place where information is located.

Combining the two factors above yields multitudes of local processing taking place at different time scales in parallel – leading to fast perception and inference even for complex cognitive tasks.

The key to adaptation is continuous learning, achieved by making some synaptic connections stronger and some weaker. A very elegant feature of the brain lies in its highly efficient adaptation through just a few training examples. In contrast, artificial neural networks rely on huge amounts of training data even for basic cognitive tasks like distinguishing a dog from a cat. Since training with few samples is not yet a mature technique for artificial neural networks, continuous learning in these networks demands highly energy-efficient hardware to make training affordable even in resource-constrained systems such as wearables.

So, in light of these three ‘target’ properties, how can memristors enable neuromorphic computing? They can serve this purpose thanks to several factors: their conductance is non-volatile and tunable, mimicking synaptic plasticity, and they can both store data and perform computation on it in place.

Figure 2: Inspiration from human brain to neuromorphic chip

Figure 2 illustrates the inspiration from the human brain used in neuromorphic chips based on analog crossbar arrays. Just as synapses w1-3 weight the input signals from the axons of one or more pre-synaptic neurons, with the partial results then summed via dendrites at the receiving (post-synaptic) neuron, memristors w1-3 weight the input signals coming into the crossbar array, and the partial results are then summed.

Hardware implementation of biologically plausible spiking neural networks is an area where memristive technology shows great promise. However, today, the closest match to memristive technology is applications that contain lots of multiply-accumulate operations (that is, operations that first multiply two operands and then add the result to another operand). Applications that perform bulks of such operations in parallel can benefit the most from analog crossbar arrays mentioned above. One important computational kernel is called vector-matrix multiplication (VMM). Artificial neural networks and hyperdimensional computing, among other applications, heavily rely on VMM.

To give some technical details about analog crossbar arrays for VMM, let’s first go through the organization of a crossbar. It can be illustrated as a two-dimensional array (matrix) where memristors connect the horizontal wires (so-called word lines) and the vertical wires (so-called bit lines). This means that the current flowing from a word line into a bit line would depend on the conductance of the memristor connecting them, according to Ohm’s law. In other words, each memristor in a crossbar array acts as an analog multiplier of the input voltage and its own conductance.

Figure 3: Vector-matrix multiplication using an analog crossbar array.

Figure 3 illustrates VMM using an analog crossbar array. A 3x3 array is shown for simplicity (real arrays can be 512x512, 1024x1024, etc.). V is the input voltage vector, G is the matrix of memristor conductance values programmed in the crossbar array, and I is the output current vector according to Ohm’s law (e.g., I11 = V1 • G11) and Kirchhoff’s law (e.g., I1 = I11 + I21 + I31). Thus, the vector and matrix in this VMM are, respectively, the input voltage vector and the memristor conductance matrix. While each individual memristor leverages Ohm’s law for multiplication, the accumulation of results along each bit line leverages Kirchhoff’s law, which means that the crossbar array performs analog multiply-accumulate operations.
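The computation in Figure 3 can be sketched in a few lines of NumPy; the conductance and voltage values below are arbitrary, chosen only to illustrate the math:

```python
import numpy as np

# G[i, j]: conductance (siemens) of the memristor connecting word line i
# to bit line j. Values are arbitrary, for illustration only.
G = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]]) * 1e-6

V = np.array([0.1, 0.2, 0.3])  # input voltages applied to the word lines

# Ohm's law per device (I_ij = V_i * G_ij) plus Kirchhoff's current law
# per bit line (I_j = I_1j + I_2j + I_3j) amount to a vector-matrix product:
I = V @ G

# The first output current, written out element by element:
I_1 = V[0] * G[0, 0] + V[1] * G[1, 0] + V[2] * G[2, 0]
assert np.isclose(I[0], I_1)
```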

This implementation of multiply-accumulate operations is very different from the conventional, digital implementation and requires new design techniques. An important aspect is that the rest of the system can still be digital, which means that analog VMM would require peripheral circuitry like data converters (digital-to-analog and analog-to-digital, to be able to interface the rest of the system).
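A minimal sketch of that interface, with idealized converters (the resolutions and reference values below are assumptions; real converters add noise, nonlinearity, and power overhead):

```python
import numpy as np

def dac(x, bits=4, v_ref=1.0):
    """Idealized DAC: map unsigned integer codes to voltages in [0, v_ref]."""
    return x / (2**bits - 1) * v_ref

def adc(i, bits=8, i_ref=1e-4):
    """Idealized ADC: quantize currents in [0, i_ref] to integer codes."""
    return np.clip(np.round(i / i_ref * (2**bits - 1)), 0, 2**bits - 1).astype(int)

G = np.full((3, 3), 1e-5)   # toy conductance matrix
x = np.array([3, 7, 15])    # 4-bit digital inputs from the rest of the system
y = adc(dac(x) @ G)         # digital -> analog VMM -> digital round trip
```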

The design space of such VMM hardware is vast – it contains a large number of design points defined by a variety of parameters. It is challenging to fully explore a large design space like this and there are no industry-standard tools for that. On the other hand, the benefits from mastering VMM hardware design can be high. Thus, despite the challenges, the Ericsson team is researching this promising field and the current focus is on system architectures comprising both conventional CMOS components and analog crossbar arrays.

There are multiple exciting research areas to explore in the broad field of neuromorphic computing using lithionic memristors. At a high level, the areas can be grouped as follows:

- Devices and materials
- Hardware architectures
- Tools for design-space exploration

These areas pose big challenges and require a creative, use-case-driven approach, which makes the work difficult but at the same time rewarding!

As we discussed earlier, lithionic memristors have great potential to serve as a key building block for neuromorphic systems. Decades of battery research on lithium-based materials may be helpful, though a fundamental understanding of how to build memristors with finely adjustable conductance is still lacking (there have only been a few demonstrations of lithionic memristors). Another challenge is memristor non-idealities due to material and fabrication specifics: some of the non-idealities cannot be hidden from the application and thus must be addressed at the algorithm level. Yet another important aspect that needs more research is material compatibility with the CMOS process.

Researchers at MIT are making progress in understanding the properties of lithium-based oxides.

As we mention above, the design space of hardware architectures using memristors is vast. At the ‘top’ of the design space, choices depend on application performance requirements like the desired inference latency and accuracy. At the ‘bottom’ of the design space, choices depend on the characteristics of memristors and various circuits (for instance, the peripheral circuits we mention when describing analog crossbar arrays). You may also remember our discussion about the number of bits that each memristor can store: this is an important parameter, and there are architectural techniques to distribute large, multi-bit values over multiple memristors. However, the trade-offs involved are non-trivial, especially when it comes to the precision of analog-to-digital converters. Thus, careful design choices are needed to attain high efficiency in terms of power and area.
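One such technique, often called bit slicing, spreads each multi-bit weight across several low-precision devices; the partial results are then shifted and added digitally. A sketch, assuming for illustration 8-bit weights split over 2-bit devices:

```python
import numpy as np

def slice_weights(W, total_bits=8, bits_per_device=2):
    """Split integer weights into low-to-high 2-bit slices, one per device."""
    mask = (1 << bits_per_device) - 1   # 0b11 for 2-bit devices
    return [(W >> (s * bits_per_device)) & mask
            for s in range(total_bits // bits_per_device)]

def sliced_vmm(V, W, bits_per_device=2):
    """VMM with multi-bit weights distributed over low-precision crossbars;
    partial results are scaled (shifted) and accumulated digitally."""
    result = np.zeros(W.shape[1])
    for s, W_s in enumerate(slice_weights(W, bits_per_device=bits_per_device)):
        result += (V @ W_s) * (1 << (s * bits_per_device))
    return result

W = np.array([[171, 30], [5, 255]])   # 8-bit integer weights
V = np.array([1.0, 2.0])
assert np.allclose(sliced_vmm(V, W), V @ W)   # matches full-precision result
```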

The exploration of such a design space requires appropriate tools. Although building hardware prototypes might be feasible for selected design points, simulation in software may be more suitable for the bulk of design-space exploration. MIT research teams have initiated the development of such software tools. They will be used for rapid evaluation of how algorithms like artificial neural networks map onto hardware architectures that include memristors. The goal of such evaluations is to obtain insights about trade-offs among figures of merit like inference latency, power, and area.

To recap on the big picture: neuromorphic computing fits in the gap between conventional computing and the human brain, memristors are a promising enabler of neuromorphic systems, and lithionic memristors are the choice of MIT, represented by four research groups.

Ericsson’s collaboration with MIT presents a unique opportunity to address challenges related to lithionic memristor research in a holistic fashion: from memristive devices to neuromorphic systems and AI/ML applications. As we mention at the beginning of this blog post, our focus is on 6G networks, where bulks of computations are spread throughout the network, down to devices and even sensors. Our ambition is that this research collaboration with MIT will give us important insights about how to advance neuromorphic computing using lithionic memristors. By doing so, we hope to contribute to the sustainable progress of computing in the 6G era, and we are excited about the process!

Read the Ericsson press release on the joint MIT collaboration

Learn more about Ericsson's research journey to future 6G possibilities

Learn about another Ericsson-MIT research collaboration: Zero-energy devices – a new opportunity in 6G

Read more about analog hardware accelerators in the Applied Physics Review research paper: Analog Architectures for Neural Network Acceleration Based on Non-volatile Memory

Learn more about hardware algorithm co-design in the research paper: Integration and Co-design of Memristive Devices and Algorithms for Artificial Intelligence

Read the IEEE 2021 International Roadmap for Devices and Systems - Beyond CMOS  

If you are not familiar with the topic of this blog post, here is an overview of some key terminology:

Conductance: A measure of how easily electrical current flows through a material. Its reciprocal counterpart is electrical resistance.

Non-volatile: this term has historically been used in the context of memory cells to describe the property of keeping the state (i.e., ‘remembering’ the data) when power is removed. In the case of memristive devices, the state is represented either by resistance or by conductance, which are reciprocals of each other. So we can program a memristive device to represent a specific resistance, and if that resistance does not change upon removal of power, the memristive device can be called non-volatile. However, the resistance may still change over time regardless of power, and for that there are other terms (for instance, resistance drift).

Retention time: the period after programming during which the memory-cell state can be reliably sensed; after that period, the state is lost due to state change over time. Non-volatile memory technologies like Flash used for storage can have retention of about 10 years; memristive devices can have a much shorter retention.

Endurance: the number of times the state can be changed reliably, before material failure.

Switching: the act of programming a new state; that is, changing the resistance of a memristive device from one value to another.

Linearity: refers to how linear the state change is as a function of the programming stimulus. For instance, if we want to increase the resistance of a memristive device, linearity describes how uniform the increments are, depending on the position within the dynamic range and the magnitude of the programming pulse.

Symmetry: refers to how symmetrical the state changes are when increasing or decreasing the state by the same magnitude of the programming pulse.

Stochasticity: the distribution of the actual programmed state around the intended one. Due to material properties, the actual programmed resistance of a memristive device is a random variable with an expected value equal to the resistance intended at the programming time.
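As a sketch, assuming for illustration that the programmed conductance follows a Gaussian distribution around the target (real devices may follow other distributions):

```python
import numpy as np

rng = np.random.default_rng(0)

def program(g_target, rel_sigma=0.05):
    """Model of stochastic programming: the conductance actually written is
    a random variable whose expected value is the intended target."""
    return rng.normal(loc=g_target, scale=rel_sigma * g_target)

# Averaged over many programming attempts, the mean approaches the target:
samples = np.array([program(1e-5) for _ in range(10_000)])
```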

Short-term plasticity: the tunability of a biological synapse is referred to as plasticity. When implemented using a memristive device, it simply means the tunability of the device’s conductance. The “short-term” part comes from the property that, when the device is exposed to low-rate programming pulses, its state may drift to low conductance relatively quickly (i.e., within a short time).

Accumulative behavior: this is a fundamental property of a memristive device that describes the evolution of the device’s conductance according to the history of programming pulses applied to the device. In effect, this property accounts for device memory.
