Article

Neuromorphic Analog Machine Vision Enabled by Nanoelectronic Memristive Devices

1 1D-Lab, Vladimir State University, Gor’kogo Street 87, Vladimir 600000, Russia
2 Research and Education Center “Physics of Solid-State Nanostructures”, National Research Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod 603022, Russia
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(24), 13309; https://doi.org/10.3390/app132413309
Submission received: 30 September 2023 / Revised: 4 December 2023 / Accepted: 14 December 2023 / Published: 16 December 2023
(This article belongs to the Special Issue Artificial Intelligence (AI) in Neuroscience)

Abstract

Arrays of memristive devices coupled with photosensors can be used for capturing and processing visual information, thereby realizing the concept of “in-sensor computing”. This is a promising concept associated with the development of compact and low-power machine vision devices, which is crucially important for bionic eye prostheses, on-board image recognition systems for unmanned vehicles, computer vision in robotics, etc. This concept can be applied to the creation of memristor-based neuromorphic analog machine vision systems, and here, we propose a new architecture for such systems in which captured visual data are fed to a spiking artificial neural network (SNN) based on memristive devices without analog-to-digital and digital-to-analog conversions. Such an approach opens up opportunities for creating more compact, energy-efficient visual processing units for wearable, on-board, and embedded electronics in areas such as robotics, the Internet of Things, and neuroprosthetics, as well as for other practical applications in the field of artificial intelligence.

1. Introduction

The desire to create machines that are capable of seeing the world the way people see it has motivated the development of computer vision systems for many years. An interesting fact is that the first neurocomputer in history, created by Frank Rosenblatt in 1958 [1], was designed specifically to solve computer vision problems—in particular, to recognize the letters of the English alphabet. The work of F. Rosenblatt and many other pioneers in the field of neural network theory, more than half a century ago, demonstrated the high potential of artificial neural networks (ANNs) for solving computer vision problems; but, at that time, ANNs did not receive intensive development for a number of reasons and were forgotten for many years, a period known in history as the winter of artificial intelligence.
Currently, computer vision systems are designed in accordance with the conventional principles of creating computing systems with the von Neumann architecture. They have digital processor units (e.g., the RISC (reduced instruction set computer), ARM (advanced RISC machine), or VLIW (very long instruction word) instruction set architectures for DSP (digital signal processor) and VPU (vision processing unit) microprocessors or GPUs (graphics processing units)) for processing visual information, memories for storing commands and data, as well as input devices in the form of photosensors (e.g., CCD (charge-coupled device) or APS (active-pixel sensor) image sensors) with analog-to-digital converters. During the operation of such a system, the captured image of a scene from the outside world undergoes digitization, encoding, software preprocessing, and processing using a machine learning model like ANNs (e.g., ResNet, VGG, Yolo, etc.), which involves storing a large number of parameters of the model itself and making a huge number of memory requests during the model’s inference on digital processor units with serial (albeit multicore) principles of data processing. Such systems require the use of specialized computers, which makes them complex, energy-hungry, and expensive.
Analyzing the published reviews [2,3], one can see that memristor-based systems have advantages over modern transistor-based digital processing units in tasks related to the hardware implementation of ANNs. In particular, existing memristor-based chips make it possible to solve image processing problems with high accuracy, while consuming two to three orders of magnitude less energy than GPUs and digital neuromorphic processors and using the chip area more efficiently. The authors of these papers have shown that matrix multiplication, which is the most frequent operation in ANN algorithms, can be performed in analog form based on Ohm’s and Kirchhoff’s laws with a high level of parallelism, which makes it possible to create systems that combine high performance and speed with low energy consumption.
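As a numerical sketch of this analog vector-matrix multiplication principle: input voltages are applied to the rows of a crossbar, each memristor’s conductance acts as a weight (Ohm’s law), and the currents summed along each column (Kirchhoff’s current law) form the output. The conductance and voltage values below are purely illustrative.

```python
import numpy as np

# Each crossbar column is one output neuron; each entry is the programmed
# conductance (in siemens) of one memristive device.
G = np.array([[1.0e-4, 0.5e-4],
              [2.0e-4, 1.0e-4],
              [0.5e-4, 3.0e-4]])   # 3 input rows x 2 output columns

# Input voltages applied to the crossbar rows (volts).
V = np.array([0.10, 0.20, 0.05])

# Ohm's law per device and Kirchhoff's current law per column: the column
# currents are the vector-matrix product, obtained in a single analog step.
I = V @ G
print(I)  # [5.25e-05 4.00e-05] amperes
```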
At the same time, the need for analog-to-digital and digital-to-analog conversions reduces the potential energy gain from using memristors. Since memristive devices open up opportunities for creating neuromorphic systems in which all processing takes place in analog form, it seems reasonable to exclude analog-to-digital and digital-to-analog conversions from machine vision systems. In this case, the signals from the photosensors can be fed to a memristor-based ANN and processed without digitization, because in a pretrained ANN, the memristors’ conductivities both store the model of visual information processing and perform this processing. Moreover, if visual signals were fed directly to a memristor-based SNN, its training could be performed during operation, as in biological systems. In this paper, relying on these considerations, we propose a new architecture for memristor-based neuromorphic analog machine vision systems.

2. State-of-the-Art Studies

2.1. Memristor-Based Systems for Machine Vision

The memristor, as the fourth passive element of electrical circuits, was proposed in 1971 by Prof. Leon O. Chua [4]. A little later, in 1976 [5], he proposed a generalized definition of the memristor and described it with a port equation equivalent to Ohm’s law and a set of state equations that describe the dynamics of the internal state variables. The active study of memristors and systems based on these equations began in 2008, when article [6] was published. Currently, memristors are used to create computer memories, such as ReRAM (Resistive Random Access Memory) [7] and CAM (Content-Addressable Memory) [8], hardware implementations of ANNs [9,10], and neuromorphic systems [11,12] (“in-memory computing”). It is important to note that the main feature of these devices is that the signals inside them are processed in analog form.
The authors of article [13] were among the first to propose an image recognition system based on passive crossbar arrays of 20 × 20 memristors. This was a multilayer feed-forward ANN trained to recognize images of letters of the Latin alphabet with an accuracy of approximately 97%. This work showed the potential of such technologies for creating machine vision systems. This is also confirmed by significant advances in the use of memristors for pattern recognition via convolutional ANNs, reservoir computing models, and ANNs with artificial dendrites, as demonstrated in [14,15,16,17]. For example, popular models including VGG-16 and MobileNet have been successfully implemented and tested on the ImageNet dataset [14], and it was shown that for memristor-based ANNs, the power consumption is more than three orders of magnitude lower than that of a central processing unit and 70 times lower than that of a typical application-specific integrated circuit chip [17]. In addition to traditional ANN architectures based on memristors, the authors of articles [18,19] presented and shared a human retina simulator that can be used in the development of promising variants of analog vision systems, among others.
The hardware implementation of a Hopfield network based on memristor chips and the result of its operation as an associative memory are described in article [20]. The authors showed that by using both asynchronous and synchronous refresh schemes, complete emotion images can be recalled from partial information. In [21], the authors demonstrated real-time processing of a video stream with the selection of object boundaries using a 3D array of HfO2-based memristive devices, which is designed for programming the binary weights of four convolutional filters and for parallel processing of input images. Achievements in the hardware implementation of ANNs for a wide range of image processing tasks on 128 × 64 arrays of memristive devices are presented in [22,23], where it was also shown that such devices are several times superior to graphics and signal processors in terms of speed and power consumption.
The results of comparing memristor-based systems with modern transistor-based ANN hardware accelerators across various indicators are given in the reviews [2,3,24,25,26,27,28]. It can be seen from these articles that despite the analog principles of information processing, prototypes of such systems provide high inference accuracy for ANN models in image recognition tasks. For example, the NeuRRAM chip [29] provides a 99% accuracy on the MNIST dataset and an 85.7% accuracy on the CIFAR-10 dataset; a 1 Mb resistive random-access memory (ReRAM) nvCIM macro [30] provides an inference accuracy of 98.8% on the MNIST dataset; and a 2 Mb nvCIM macro [31] provides a 90.88% accuracy on the CIFAR-10 dataset and a 65.71% accuracy on the CIFAR-100 dataset, both for ResNet-20.
In order to provide compatibility with modern digital IT infrastructure, systems based on memristive devices must have interfaces that make it possible to receive digital input data and return the results of their analog processing in digital form. For this reason, all the above prototypes of neuromorphic systems based on memristive devices [13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32] contain analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) to interact with digital devices. Such sets of DACs and ADCs can become the bottleneck of memristor-based computing architectures, just as the memory bus is the bottleneck of the von Neumann architecture, and this reduces the potential benefits of applying them to the practical problems set before existing computers. For example, if a system uses a 16 × 16 memristor crossbar array, then 256 scalar multiplications can be performed in one clock cycle, provided that all 16 input values are supplied simultaneously. At the same time, pushing the inputs through the DACs and then pulling the outputs from the ADCs takes at least 32 clock cycles, which is disproportionately more than the matrix multiplication itself requires on the crossbar array.
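The cycle counts in this example can be checked directly; the short calculation below uses only the figures quoted above.

```python
n = 16                       # crossbar with n x n memristors
macs_per_cycle = n * n       # 256 analog multiply-accumulates per clock cycle
io_cycles = n + n            # >=16 cycles to serialize inputs through a DAC,
                             # plus >=16 cycles to read outputs through an ADC
print(macs_per_cycle, io_cycles)  # 256 vs. 32: conversion dominates the budget
```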
That is why a topical area of research in this field is the development of architectures with as few signal conversions as possible. For example, the importance of developing such concepts was stated by the authors of article [8] in 2020, who proposed, for the first time, an analog CAM circuit that takes advantage of the analog tunability of memristor conductance. It enables the processing of analog sensor data without the need for an analog-to-digital conversion step. In their study, a practical circuit implementation composed of six transistors and two memristors was demonstrated in both experiment and simulation. The authors note that the analog capability opens up the possibility of directly processing analog signals acquired from sensors and is particularly attractive for Internet of Things applications due to the potentially low power and energy footprint.
The idea of fully analog computing is not new and existed long before the invention of memristive devices in 2008, and even before their theoretical description in the 1970s. For example, the US patent [33] published in 2006 discloses an analog CAM, described in 1991 [34], that employs analog storage cells with programmable analog transfer function capabilities. The use of electrically erasable programmable read-only memory (EEPROM) cells in an analog storage device avoids the need to convert an analog waveform into a digital representation, reducing the complexity embodied in an integrated circuit as well as decreasing the die dimensions. An earlier well-known example of a fully analog computer is “Sceptron”, a device for analog signal recognition developed in 1962 [35]. Sceptron not only performed a function that would have required hundreds of conventional filters and a large storage capacity at that time, but could itself be trained to recognize a complex signal without detailed knowledge of its characteristics. Among analog computers with a neural network architecture, one of the most famous is the adaptive Adaline neuron, implemented in 1962 [36] using “memistors” (not to be confused with “memristors”). Such an element provided electronically variable gain control, along with the memory required for storing the system’s training experience. The authors created a neuron capable of being trained to recognize 3 × 3 patterns of the letters of the English alphabet. Of course, the earliest example of a fully analog ANN is the Mark-1 neurocomputer created by Frank Rosenblatt [1] in 1958, which was mentioned above.
Since the main goal of machine vision systems is not to capture and store images but to obtain information about the objects in the field of view (e.g., class, segment, id, etc.), for the optimal use of computational resources, all data processing in such systems can be performed completely in analog form, as was proposed in [8]. In the photosensors of computer vision systems, information is captured in analog form, and it is reasonable to connect them to memristor-based ANNs without ADCs and DACs [37], implementing the concept of “in-sensor computing” [38]. As a result, there is no need for software algorithms and models that interact with computer memory to load model parameters and store intermediate results. The system requires significantly fewer electronic components, becoming potentially faster and more energy efficient. In general, any electronic device capable of generating a current (or changing the current in a circuit) depending on the intensity of illumination can be used as a photosensor in such a system (Figure 1).
In this paper, in order to illustrate the proposed architecture, we consider the use of photodiodes connected to memristive devices to make up the sensory part; however, there are alternative ways to achieve this. Over the past several years, great progress has been made in photo-sensing memristors based on SiOx RRAM devices [39], MoS2 photosensitive field-effect transistors [40], 2D-material-based photo-memristors [41], thin carbyne–gold films [42], etc. Comprehensive analyses of state-of-the-art memristor-based sensors for edge detection, mean filtering, stylization, and recognition have recently been published in [43]. Studies coupling memristors with photosensors show that this approach can simulate some retinal functions [44,45].
Some ways to integrate an array of photosensors with an array of memristors have been proposed in [43,46]. In particular, the authors of [43] proposed a 1D1M vision sensor’s schematic layout for CMOS-compatible silicon nitride (SiNx) memristive devices and demonstrated its use for image capturing and mean filtering. The authors of [46] proposed a reconfigurable three-dimensional hetero-integrated technology for vertically stacking a diverse range of functional layers, such as sensors, processors, and memory, in a single-material system (for example, silicon). Such solutions could provide energy-efficient in-sensor computing systems for edge computing applications.

2.2. Memristor-Based Spiking ANNs

Throughout the history of its development, the theory of ANNs has been inspired by studies of the principles of functioning of biological neural networks (BNNs) [47]. The most powerful tool for solving computer vision problems at present is the convolutional ANN. Although the concept of convolutional ANNs took into account the peculiarities of the visual cortex of the cerebral hemispheres, which processes signals from the optic nerve in a parallel, analog manner, the same strict mathematical transformations occur in convolutional ANNs as in any other formal ANN architecture: matrix multiplication and activation. The weights of synapses in such ANNs are calculated using the backpropagation method, and visual information is encoded in the amplitude of the signal.
Inside BNNs, information is transmitted through a network of neurons that have an activation potential [48], using signals called spikes. Spikes are transmitted from one neuron to another along the axon and are characterized by their frequency, duration, and amplitude. Contacts between neurons are formed at strictly defined points called synapses. It is currently believed that the phenomena of memory and learning in living organisms arise from the mechanisms of synaptic plasticity, that is, the ability to change the strength of a synapse and, accordingly, the parameters of the transmitted signals. Owing to synaptic plasticity, the human nervous system can independently configure individual groups of neurons to perform various functions.
The resistance of memristive devices can be changed within the boundaries of minimum and maximum possible values, referred to as the LRS (low resistance state) and the HRS (high resistance state). The curve describing the transition of a memristive device from one state to another under the influence of voltage pulses of different shapes, amplitudes, and frequencies is similar in appearance to experimental measurements of synaptic plasticity in BNNs [49]. Therefore, memristive devices are the most biosimilar artificial analogues of the synapses of neurons in living systems, and thus make it possible to implement neuromorphic ANN architectures in hardware.
In spiking ANNs (Figure 2A), memristive devices connect presynaptic and postsynaptic neurons. The presynaptic neuron in the input layer of the network acts as a generator of spikes, whose frequency or timing encodes the input information. For example, to process grayscale images, the brightness values of each pixel, represented by numbers from 0 to 255, can be represented by frequencies ranging from 1 to 100 kHz. Spikes applied to a memristive device over a period of time change its resistance (Figure 2B); this is a type of self-learning based on local rules for each synapse.
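As an illustration of this rate coding, the sketch below maps 8-bit brightness onto the 1–100 kHz range quoted above. The linear form of the mapping and the 1 ms observation window are our assumptions; the text only states the ranges.

```python
import numpy as np

def brightness_to_frequency(pixel, f_min=1e3, f_max=100e3):
    """Map an 8-bit brightness value (0..255) to a spike frequency in Hz.

    A linear mapping is assumed here for illustration.
    """
    return f_min + (pixel / 255.0) * (f_max - f_min)

def spike_times(pixel, window=1e-3):
    """Regular spike train emitted by a presynaptic neuron within a time window (s)."""
    f = brightness_to_frequency(pixel)
    return np.arange(0.0, window, 1.0 / f)

print(brightness_to_frequency(255))  # 100000.0 Hz for a fully bright pixel
print(len(spike_times(128)))         # ~51 spikes in 1 ms at ~50.7 kHz
```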
A postsynaptic neuron is a device capable of accumulating charge from all presynaptic neurons (Figure 2C), taking into account the voltage drop across the memristors (the so-called neuron membrane). It has a certain threshold charge value, the exceeding of which leads to the generation of a spike by the postsynaptic neuron. The final distribution of resistances of the memristive devices determines the functionality of such a network of presynaptic and postsynaptic neurons and allows solving problems in the fields of robotics, prosthetics, telecommunications, etc.
In neuroscience, there are currently several mathematical models of spiking neurons [50]: Hodgkin–Huxley [51], FitzHugh–Nagumo [52], Izhikevich [53], Koch and Segev [54], Bower and Beeman [55], Abbott [56], etc. The differences between these models lie in their degree of biological realism, the type of object being modeled (presynaptic neuron, neuronal membrane, neuron’s morphology, etc.), and their level of computational complexity. The latter property is very important for the hardware implementation of neuromorphic processors, since it directly affects the number of components used, the complexity of the electrical circuits, and their energy consumption. The research in [11,17,57,58,59,60,61,62] shows that even simple spike shapes (rectangular or triangular) and neuron models (leaky integrate-and-fire) make it possible to solve recognition problems in memristor-based SNNs, reducing energy consumption and increasing robustness [59] in comparison with formal ANNs. Unsupervised learning in such systems is based on the STDP mechanism and is provided by the overlapping waveforms of pre-spikes and post-spikes within a certain time window, which determines the behavior of the memristive devices during the feedback process [11,58,60].
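For reference, the leaky integrate-and-fire dynamics mentioned above can be written in a few lines. This is a generic sketch with illustrative parameters, not the specific circuit model used later in the paper.

```python
import numpy as np

def lif_response(input_current, dt=1e-5, tau=1e-3, r=1e4,
                 v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane integrates input current,
    leaks toward rest, and emits a spike when the threshold is crossed.
    All parameter values are illustrative, not taken from the paper."""
    v = v_reset
    spikes = []
    for t, i_in in enumerate(input_current):
        v += (-v + r * i_in) * dt / tau     # leak plus weighted input
        if v >= v_threshold:                # threshold crossing -> output spike
            spikes.append(t * dt)
            v = v_reset                     # membrane is reset after firing
    return spikes

# A constant 200 uA drive for 5 ms produces a regular output spike train.
print(lif_response(np.full(500, 2e-4)))
```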
To generate presynaptic spikes, specialized electrical circuits based on analog switches, operational amplifiers, and current mirrors have been developed, such as those in [63,64,65]. They are controlled digitally and externally using an FPGA or microcontroller, which initiates spike generation based on the input data. With this approach, the spike generator acts as a device that encodes the input digital information, received by the computer through an external interface, in the form of a sequence of pulses represented in analog form. This makes it possible to encode any information: tabular, visual, audio, etc. However, in most papers, either the electrical circuits of spike generators are not considered in detail [10,13,15,16,20,22,57,66,67], since the authors pay more attention to signal processing inside the crossbar arrays, or multi-channel DACs under external control from an FPGA or microcontroller are used as generators, as in [9,24,29,61]. Such systems are versatile with respect to the type of information being processed, but require external control and interfaces for receiving data.
Thus, memristive devices make it possible to implement artificial synapses in hardware for different ANN types [57,66]: for formal ANNs, in which the processed information is encoded in the signal amplitude, and for spiking ANNs, in which it is encoded in the signal frequency. However, spiking ANNs are more biologically plausible, because they make it possible to realize unsupervised learning via an STDP mechanism or paired-pulse facilitation [68,69]. Therefore, on the way to bringing together “in-sensor computing” and “in-memory computing”, the most promising system architectures must be based on spiking ANNs.
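The pair-based STDP rule referred to here has a standard textbook form; a minimal sketch follows, with illustrative constants rather than values from the cited works.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP window: dt = t_post - t_pre (s).

    A pre-spike shortly before a post-spike strengthens the synapse;
    the reverse order weakens it. All constants are illustrative.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # potentiation
    return -a_minus * np.exp(dt / tau)       # depression

print(stdp_dw(5e-3))    # pre before post -> positive weight change
print(stdp_dw(-5e-3))   # post before pre -> negative weight change
```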

3. The System’s Architecture

The proposed system’s architecture includes two main parts—sensory and neural. The sensory part is designed to capture and encode visual information. The signals from the sensory part, after being captured, are transmitted to the neural part for processing. The electrical circuit of the sensory part is presented in Figure 3 for two versions of ANNs—formal (Figure 3A) and spiking (Figure 3B).
The input channel for a formal ANN operates in two stages. At the first stage (Figure 3A, on the left), the SPDT switch breaks the connection between the input channel and the memristor ANN and connects it to the circuits supplying initialization and programming pulses. At this stage, an initialization pulse Vinit is sent to the ANN, erasing information about the weights of the neuron synapses. Next, the synaptic weights are recorded by applying Vwrite recording pulses. At this time, the shutter of the optical system is closed and no light enters the sensory part; this is necessary to avoid the influence of illumination on the untrained model during the write mode. After the weights of the neuron synapses are written into the neural part, the SPDT switches connect the ANN circuits to the circuits of the input channels. The ANN then becomes ready to process visual information, and the entire system enters the second stage of operation.
At the second stage of operation, visual information is processed. The shutter of the optical system opens at a given frequency (fps), and during this time, light falls on the photodiodes in the input channels and a photocurrent iph begins to flow in the circuit from the cathode to the anode. The sensitivity of the sensor can be increased by applying a voltage Vbias. During the periods when the shutter is open, the photocurrent is converted into voltage pulses with different amplitudes: the stronger the illumination, the higher the voltage amplitude. Thus, visual information is encoded in the amplitude of the signal, and one pulse corresponds to one inference. The voltage pulses are supplied to the first layer of the neural part and so on through the entire network, thereby processing visual information completely in analog form. At the output of the neural part, a signal is generated in accordance with the ANN model recorded in the memristors. With the correct selection of the load resistance Rload of the input channel, the maximum amplitude of the operating voltage (the maximum voltage during inference) will not exceed the threshold voltage of the memristor.
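The choice of Rload can be sketched numerically. Here, the 100 μA maximum photocurrent is taken from the experimental section below, while the 0.8 V memristor threshold is an assumed illustrative value.

```python
def encode_amplitude(i_ph, r_load):
    """Convert a photocurrent (A) into a pulse amplitude (V) across the load resistor."""
    return i_ph * r_load

# Choose R_load so that the brightest pixel stays below the memristor threshold.
i_ph_max = 100e-6        # maximum photocurrent measured in the set-up (100 uA)
v_threshold = 0.8        # assumed memristor switching threshold (V), illustrative
r_load = v_threshold / i_ph_max
print(encode_amplitude(50e-6, r_load))  # half illumination -> 0.4 V pulse
```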
The input channel for the spiking ANN (Figure 3B) additionally has an integrator with a threshold. The integrator’s task is to accumulate charge and generate a pulse of unit amplitude at those moments of operation when the accumulated charge in the input channel exceeds the threshold value. Thus, in the input channels, the signal amplitude is converted into frequency, or, more precisely, the illumination is converted into the pulse frequency. The higher the illumination of the input channel, the higher the frequency of voltage pulses supplied to the ANN. In this case, the processing of one visual image (inference) will no longer be performed in one clock cycle, but over a given time interval during which the shutter of the optical system is open.
Regardless of the input channel circuit option, the signal encoding visual information is fed to the neural part without digitization. In the neural part, memristors act as synapses. Moreover, memristors make it possible to implement in hardware the synapses of traditional formal ANN architectures (multilayer perceptron [13], Hopfield network [20], deep ANNs [14,70], convolutional ANNs [16,24,71], LSTM networks [72], etc.), in which input information is multiplied by a pre-programmed weight, and synapses for spiking ANNs, in which the memristor exhibits mechanisms of synaptic plasticity similar to living BNNs [73,74].
In the simplest case, one synapse, and thus the operation of scalar multiplication of two numbers, can be implemented on one memristor using Ohm’s law (Figure 4A). For the input value x, it is necessary to set the equivalent input voltage Vin and apply it to a circuit with a memristor having a pre-recorded resistance value Rm. The resistance Rm in memristive devices can be changed within the boundaries of the minimum (LRS or Rmin) and maximum (HRS or Rmax) possible resistances. A current I will then flow in the circuit, which can be converted into an output voltage Vout, for example, through a load resistor or a transimpedance amplifier (TIA). The resistance of the load resistor RL, or the negative feedback resistance Rf of the TIA, together with the memristor resistance Rm, forms the weight of the synapse.
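In numbers, with the TIA read-out, the synaptic weight magnitude is Rf/Rm; a minimal sketch, with illustrative values:

```python
def synapse_tia(v_in, r_m, r_f):
    """Scalar multiplication on one memristor read out by a transimpedance amplifier.

    Ohm's law gives I = v_in / r_m; the TIA converts it to v_out = -r_f * I,
    so the effective (unipolar) weight is r_f / r_m, inverted in sign by the TIA.
    """
    i = v_in / r_m
    return -r_f * i

# Example: Rm = 10 kOhm, Rf = 5 kOhm -> weight magnitude 0.5
print(synapse_tia(0.2, 10e3, 5e3))  # -0.1 V
```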
With this approach, only unipolar weights can be implemented, since the resistance cannot be negative. For bipolar weights, it is necessary to use circuits with two or, for example, four [75] memristors (Figure 4B–D). In the circuit variants presented in Figure 4, the weights of the neuron synapses take bipolar values due to the difference in the resistances of several memristive devices. In the circuit in Figure 4B, this is achieved, in particular, through the use of an additional differential amplifier, whose output is the difference of the voltages obtained by converting the current in each branch containing a memristor. In the circuit in Figure 4C, this is achieved by duplicating the inputs, one of which is inverted. In the circuit in Figure 4D, the output voltage is taken differentially from two voltage dividers formed by four memristors. The graph of the dependence of weight on resistance is hyperbolic in the first case and linear in the second.
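A numerical sketch of the duplicated-input variant (Figure 4C): the effective weight is proportional to the difference of the two branch conductances, so its sign follows from which memristor is more conductive. The read-out and sign conventions are simplified here for illustration.

```python
def synapse_bipolar(v_in, r_m1, r_m2, r_f):
    """Bipolar synapse with duplicated inputs (Figure 4C): the direct input
    drives one memristor and the inverted input drives the other, so the
    effective weight is r_f * (1/r_m1 - 1/r_m2) and can take either sign."""
    i = v_in / r_m1 - v_in / r_m2   # the two branch currents sum on the column
    return r_f * i                  # amplifier read-out; sign convention simplified

# Equal resistances cancel (w = 0); an imbalance sets the sign of the weight.
print(synapse_bipolar(0.2, 10e3, 10e3, 5e3))   # 0.0
print(synapse_bipolar(0.2, 10e3, 100e3, 5e3))  # positive weight: +0.09 V
print(synapse_bipolar(0.2, 100e3, 10e3, 5e3))  # negative weight: -0.09 V
```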
Using the circuits of the input channels (Figure 3) and synapses (Figure 4), it is possible to create various variants of formal and spiking ANNs for machine vision systems. Figure 5A shows a variant of the formal ANN. The light falls on the photodiodes of the input channels and forms a different amplitude of the input signals Vin depending on the illumination. These signals, without digitization, are fed to the inputs of the neural part implemented in the crossbar array of the memristive devices. The resistances of the memristors in the crossbar array are pre-programmed to perform the work of one of the ANN models for processing visual information. Figure 5B shows a variant of a spiking ANN. The light falls on the photodiodes of the input channels and forms a different frequency of spikes of the input signal Vin, depending on the illumination. These spikes, without digitization, are fed to the inputs of the neural part, eventually changing the conductivity of the memristors according to the STDP rule. In this way, the ANN is trained, and will subsequently perform a specific task of processing visual information.

4. Experiments and Results

Several computer models have been developed to verify the architectures proposed here. The operation of the sensory part of a formal ANN can be demonstrated using a SPICE model (Figure 6). This model contains the following main components: an equivalent photodiode circuit containing a photocurrent source; a memristor model [76]; an initialization and recording signal generator; and switching circuits. The signal generator supplies an initialization signal to the memristor in the form of a rectangular pulse with a given amplitude and duration (init) and a recording signal in the form of a sequence of rectangular pulses with increasing amplitudes (write). For inference, a signal from the sensor is generated at the output of the U3 microcircuit in the form of rectangular pulses with a given duration.
The switching of generator signals is performed by switches S1–S6 by supplying a high level of control signals Vki for initialization, Vkp for recording, and Vkws for processing visual information. The stages of the circuit operation described in Section 3 are switched by control signals Vks, Vksi (high), and Vkw (low) for initialization, and Vks, Vkw (high), and Vksi (low) for recording the ANN model (stage one) and for inference (stage two). The current in the circuit with the memristor is converted by the transimpedance amplifier U5 into voltage to control the recording and transferal of processed data to subsequent layers of the ANN.
The graph in Figure 6 is plotted for the current in the memristor circuit at point TE. When an initialization signal of duration tinit is applied, the memristor is transferred from an LRS to an HRS (RESET). Next, when a write signal is applied during the time twrite, the memristor is programmed to the target resistance RT. This ends stage one. In the next stage, the current source I2 produces photocurrent. To simulate different illumination levels, the current source I2 produces five photocurrent values, from 10 to 100 μA. This current is converted into voltage pulses with different amplitudes and a fixed duration texp, which are then applied to the memristor and cause current to flow in the circuit.
From Figure 6 it can be seen that a brighter illumination level corresponds to a higher current in the circuit with the memristor. This is then converted into voltage pulses with a coefficient Rf and is processed in the following layers. The maximum voltage (corresponding to the maximum current I2) does not change the memristor’s resistance due to the fact that Rf is calculated at the memristor threshold voltage.
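The behavior relied on here, sub-threshold read pulses leaving the state intact while supra-threshold pulses reprogram the device, can be captured by a very simple behavioral model. The sketch below is a generic stand-in, not the SPICE memristor model [76] used in the simulations; all parameters are illustrative.

```python
class ThresholdMemristor:
    """Minimal behavioral memristor: the resistance drifts between R_HRS and
    R_LRS only while |v| exceeds the switching threshold."""
    def __init__(self, r=1e6, r_lrs=1e4, r_hrs=1e6, v_th=0.8, rate=1e11):
        self.r, self.r_lrs, self.r_hrs = r, r_lrs, r_hrs
        self.v_th, self.rate = v_th, rate

    def step(self, v, dt):
        if v > self.v_th:        # positive overdrive -> SET toward the LRS
            self.r = max(self.r_lrs, self.r - self.rate * (v - self.v_th) * dt)
        elif v < -self.v_th:     # negative overdrive -> RESET toward the HRS
            self.r = min(self.r_hrs, self.r + self.rate * (-v - self.v_th) * dt)
        return v / self.r        # read-out current at the current state

m = ThresholdMemristor()
for _ in range(100):
    m.step(1.2, 1e-6)            # 100 us programming pulse train at 1.2 V
print(m.r)                       # resistance has moved to the LRS (10000.0)
print(m.step(0.2, 1e-6))         # a 0.2 V inference read leaves the state intact
```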
The operation of the sensory part of the spiking ANN can be demonstrated using the following SPICE model (Figure 7). In addition to the components of the model in Figure 6, it contains an integrator with a threshold U4 and an amplifier U1. Two additional switches have been added to the circuit: S7 for connecting the amplifier U1, with a high level set on Vkpp, and S8 for supplying pulses in unamplified form, with a high level set on Vkwp. The additional amplifier U1 is needed to regulate the amplitude of the sensory part’s pulses in order to study their effect on the memristor resistance.
The graph in Figure 7 is plotted for the current in the memristor circuit at point TE. The left side of the graph is the same as in the case of a formal ANN. It shows the process of recording the weights in cases where the spiking ANN is obtained via transformation from a formal ANN. An example of the self-learning of this model is discussed below. The right part of the graph (Figure 7) is plotted for the mode of capturing video information via the sensor. From Figure 7 it can be seen that a brighter level of illumination corresponds to a higher pulse frequency (pink color), and vice versa—a lower level of illumination corresponds to a lower pulse frequency (green color).
To demonstrate the operation of the proposed concept of neuromorphic analog computer vision systems, an experimental set-up was made (Figure 8). It consists of an experimental chip (Figure 8A), which includes, in particular, the two necessary types of memristive arrays: a 63 × 1 linear memristive array for investigating the sensory part, and a 32 × 8 1T1R memristive crossbar array for investigating the neural part of the system. The memristive devices were fabricated on the basis of a Au/Ta/ZrO2(Y)/Pt/Ti multilayer structure. Each cell of the crossbar array was equipped with an N-channel MOSFET transistor (Figure 9, right side). The chip was mounted in a standard metal–ceramic package and connected with a light-proof container covering the silicon photodiodes and light sources, which can be turned on using a PC.
Then, two test models of the ANN were created: a formal model and a spiking model. To train the ANNs, a dataset was generated that included small visual patterns of mathematical symbols (“+”, “–”, “/”, “×”, “=”) 3 × 3 pixels in size, where each pixel is specified by a photocurrent value; for bright pixels, the photocurrent was higher than for dark ones (Figure 9, left side). The size of 3 × 3 pixels is a demo example that makes it possible both to map the ANN onto a single 32 × 8 memristive crossbar array, using 90 of its 256 devices (two memristors per synapse × nine inputs × five classes) for the recognition of visual images, and to interact with the memristive devices using our experimental set-up (Figure 8B).
To generate the dataset, the deviation of the photocurrent values for each pixel was set within a 30% margin of the mean value measured using the experimental set-up (100 μA when the light source is turned on, and about 3 μA when it is turned off). The photocurrent values for the light and dark pixels were measured by turning on the light sources in different combinations corresponding to the mathematical symbols. The dataset consisted of 800 training and 200 testing images. The mathematical model of the formal ANN consists of one layer with five neurons (Figure 9, central part) and nine inputs. As a result of supervised learning, each neuron must respond to its own pattern.
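A sketch of this dataset generation is given below. The photocurrent means, the ±30% deviation, and the 800/200 split are taken from the text; the exact 3 × 3 pixel layouts of the symbols are assumed for illustration (the paper shows them in Figure 9), as is the uniform form of the deviation.

```python
import numpy as np

# 3x3 binary templates for the five symbol classes (1 = bright pixel).
# These layouts are assumed here for illustration.
TEMPLATES = {
    "+": [[0, 1, 0], [1, 1, 1], [0, 1, 0]],
    "-": [[0, 0, 0], [1, 1, 1], [0, 0, 0]],
    "/": [[0, 0, 1], [0, 1, 0], [1, 0, 0]],
    "x": [[1, 0, 1], [0, 1, 0], [1, 0, 1]],
    "=": [[1, 1, 1], [0, 0, 0], [1, 1, 1]],
}

I_BRIGHT, I_DARK = 100e-6, 3e-6  # mean photocurrents measured in the set-up (A)

def make_sample(symbol, rng, deviation=0.3):
    """One 9-element photocurrent vector with up to +/-30% deviation per pixel."""
    base = np.where(np.array(TEMPLATES[symbol]).ravel() == 1, I_BRIGHT, I_DARK)
    return base * rng.uniform(1 - deviation, 1 + deviation, size=9)

# 800 training and 200 testing images, as in the paper.
rng = np.random.default_rng(42)
train = [(s, make_sample(s, rng)) for s in TEMPLATES for _ in range(160)]
test = [(s, make_sample(s, rng)) for s in TEMPLATES for _ in range(40)]
print(len(train), len(test))  # 800 200
```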
Memristive devices demonstrate bipolar switching of the anionic type between the high resistance state and the low resistance state. Both states are characterized by nonlinear current–voltage characteristics (Figure 8A, right panel). For the formal ANN, training was carried out on a mathematical model using the backpropagation method. The loss function was the mean squared error. Accuracy was calculated as the ratio of correctly recognized patterns to the total number of patterns. Since the task of recognizing 3 × 3 pixel images is not difficult for an ANN, training resulted in 100% accuracy even with a 30% deviation in the input data. The synapse weights obtained after training were then converted into memristor resistances using Formula (5) for the circuit in Figure 4C:
$$R_{m1} = \frac{R_{m2}}{1 - k \cdot R_{m2}},\quad R_{m2} = R_{HRS},\quad \text{for } w < 0,$$
$$R_{m2} = \frac{R_{m1}}{k \cdot R_{m1} + 1},\quad R_{m1} = R_{LRS},\quad \text{for } w > 0,\qquad k = \frac{w}{R_f} \quad (5)$$
The calculated resistance values were transferred to a computer model of the machine vision system with a deviation δ, thereby simulating the memristor programming error. The parameter δ was set as a percentage of the nominal resistance value; it was then recalculated into a standard deviation according to the three-sigma rule and fed into a pseudo-random number generator with a normal distribution law. After setting the weights, the operation of the network was emulated, and the experiment was repeated 1000 times.
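A sketch of this conversion and error-injection step, implementing Formula (5) directly; the LRS/HRS bounds and the feedback resistance are assumed illustrative values, not device parameters from the paper.

```python
import numpy as np

R_LRS, R_HRS = 1e4, 1e6   # assumed LRS/HRS bounds (ohms); illustrative values
R_F = 1e4                 # assumed feedback resistance of the read-out amplifier

def weight_to_resistances(w, r_f=R_F):
    """Convert a trained weight into a memristor pair per Formula (5)."""
    k = w / r_f
    if w < 0:
        r_m2 = R_HRS
        r_m1 = r_m2 / (1 - k * r_m2)
    else:
        r_m1 = R_LRS
        r_m2 = r_m1 / (k * r_m1 + 1)
    return r_m1, r_m2

def program_with_error(r_nominal, delta, rng):
    """Gaussian programming error: the +/-delta band is mapped to a standard
    deviation by the three-sigma rule, as described in the text."""
    return rng.normal(r_nominal, delta * r_nominal / 3.0)

rng = np.random.default_rng(0)
r1, r2 = weight_to_resistances(-0.5)
print(r1, r2)                             # ~19607.8 ohms, 1e6 ohms
print(program_with_error(r1, 0.30, rng))  # one programmed value at delta = 30%
```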
The computer model of the hardware implementation of the formal ANN, in turn, had 18 inputs, since it used the scheme for generating bipolar weights with duplicated (differential) inputs (Figure 9). To create the neural part of the ANN, a model of a crossbar of 18 × 5 memristive devices was used, in which the signal from the sensory part is supplied directly to the rows of the crossbar, and the inference result is formed on the columns. N-channel MOSFET transistor models were used for the SPICE simulation. The response of the ANN to the presented visual patterns was registered as a high voltage level on the corresponding column of the crossbar. After transferring this ANN model to the simulation model of the computer vision system, the accuracy of operation (inference) for the five classes of mathematical signs was 100% at δ = ±10%, 99.5% at δ = ±20%, and 99.1% at δ = ±30%. It is clear that error in the memristor resistances does not greatly reduce the recognition accuracy, as has been repeatedly demonstrated in other problems [29].
To test the operation of the spiking ANN (Figure 10), the computer model of the formal ANN was upgraded. The sensory part was replaced with the circuits with an integrator (Figure 7), which act as presynaptic neurons responding to light. Feedback from the outputs to the inputs, containing an inverting amplifier and a switch, was added to the neural part (Figure 10, left side). Feedback in this case is necessary in order to automatically adjust the synaptic weights. The simulation was performed in two stages: at the first stage, each presynaptic neuron was shown one pattern for a specified time of 200 ms. After this, the probability of recognition error was assessed by presenting other patterns to the presynaptic neurons and monitoring the spike frequency on the postsynaptic neuron.
The spiking ANN training process was performed as follows. At the beginning, the weights were initialized to low values by feeding initialization pulses into the ANN. Then, when visual patterns were presented to the neurons, spikes began to be generated in them depending on the illumination level of each channel. Spikes change the resistance of the odd memristors towards low resistance and of the even ones towards high resistance, since the inputs of the ANN are duplicated in accordance with Figure 4C; this is equivalent to an increase in the weights of the synapses. For channels with a high spike frequency, the rate of such change is much higher than for channels with a low frequency, since the total charge of the signal passing through the memristor in the first case is greater. When the threshold charge value on the membrane is reached, the postsynaptic neuron generates a spike, which is amplified in the feedback circuit, inverted, and supplied to all inputs. This leads to the opposite process: the resistance of the odd memristors changes towards high resistance, and that of the even ones towards low resistance, which is equivalent to a decrease in the weight of the synapse. Since the rate of weight increase in high-frequency channels is high, a short-term decrease does not greatly affect the final value; conversely, in low-frequency channels, the influence of the postsynaptic spike is strong and the weight of the synapse decreases (Figure 10, right side). The inference accuracy after the ANN’s self-tuning was, as in the previous case, 100%.
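A qualitative sketch of this self-learning loop is given below: presynaptic spikes potentiate their own synapse, while each postsynaptic spike, fed back inverted to all inputs, depresses all synapses by a uniform step. All constants and the Poisson-like spike model are our assumptions, chosen only to reproduce the described behavior, not values from the circuit model.

```python
import numpy as np

def self_learn(rates, w, threshold=1.0, dw_pre=0.02, dw_post=0.015,
               dt=1e-4, t_total=0.2):
    """rates: presynaptic spike rates (Hz), proportional to channel illumination."""
    rng = np.random.default_rng(1)
    charge = 0.0
    for _ in range(int(t_total / dt)):
        pre = rng.random(len(rates)) < rates * dt  # Poisson-like spike arrivals
        w = w + dw_pre * pre                       # per-channel potentiation
        charge += float(np.sum(w * pre))           # membrane accumulates charge
        if charge >= threshold:                    # postsynaptic spike + feedback
            w = np.clip(w - dw_post, 0.0, None)    # uniform depression
            charge = 0.0
    return w

# A bright (high-rate) and a dark (low-rate) channel presented for 200 ms:
# the bright channel's weight wins; the dark channel's weight is suppressed.
print(self_learn(np.array([5000.0, 200.0]), w=np.zeros(2)))
```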

5. Discussion

As a result of our modeling, it was demonstrated using simple examples that connecting the sensory part to memristor-based neurons without analog-to-digital and digital-to-analog conversions makes it possible to implement analog processing of visual information in hardware using formal and spiking ANN models. This architecture can be scaled to the size of the matrices in modern photo and video recording devices and used as a hardware accelerator for the ANN models currently used to work with images; it can also serve as a platform for the further development of this direction in accordance with the developed roadmap (Figure 11).
The operation of a formal ANN was demonstrated on one dense layer. Such layers can be combined to form multilayer feed-forward ANNs. Since the main mathematical operation in most formal ANN architectures (convolutional ANNs, LSTMs, etc.) is matrix multiplication, the results of the demonstration can be extended to them: one part of the crossbars will perform the functions of memristor-based neurons, and another part can, for example, perform the functions of memristor-based convolution kernels [61]. From a mathematical point of view, this is equivalent.
To demonstrate the operation of a spiking ANN, only one special case of self-learning was considered, inspired by article [58]. Currently, there are other approaches to training spiking ANNs [37,77,78], and in many of them, as in the example considered above, information is encoded in the spike frequency. This means that the developed concept can be improved further by developing new designs of analog neuromorphic computer vision systems based on memristive devices.
In any case, the proposed architecture is at an initial stage of development and is associated with a large number of related tasks. One subject of further research could be replacing the “photodiode–memristor” circuit with a single device: a memristor that changes its resistance depending on the light intensity. Currently, such devices are being developed on the basis of MIS (metal–insulator–semiconductor) structures, for example, based on ZrO2(Y) films with self-organized Ge nanoislands [79]. There are also tasks associated with integrating the developed class of systems into the existing infrastructure, which will become possible with the improvement of component manufacturing technologies and the development of models and algorithms. As a result of the generalization and systematization of these tasks, a roadmap for the development of this area was formed, as presented in Figure 11.

6. Conclusions

A potential advantage of the discussed concept for creating a machine vision system relative to digital hardware implementation is that all visual information processing is performed at the hardware level. As a result, there is no need to execute program code that implements ANN models and algorithms, and there is no need to reuse the data bus and access the computer memory to load model parameters and write results. This architecture requires significantly fewer electronic components and consumes less power.
The advantage of the presented architecture relative to existing hardware implementations of ANNs based on memristors is the significant reduction in the number of DACs and ADCs used in the process of signal processing via arrays of memristive devices, as well as the possibility of a transition from formal ANN architectures, such as perceptrons and convolutional ANNs, to neuromorphic architectures, in which information is processed in a similar way as in BNNs.
The scope of application of the considered class of devices is associated with the creation of systems for which low power consumption, small dimensions, and a high speed of operation when performing intellectual tasks (e.g., recognition, classification) are important. All on-board or wearable computing systems, such as mobile devices, unmanned vehicles and aircraft, ammunition equipment, etc., fall under these requirements. In addition, memristors are highly resistant to ionizing and defect-forming radiation, which allows them to be used in areas where radiation-resistant electronics are required.

Author Contributions

Conceptualization, S.S. and A.M.; funding acquisition, A.K. and A.M.; investigation, I.B., S.S. and E.G.; methodology, S.S. and E.G.; project administration, A.K. and A.M.; software, I.B.; hardware, E.G. and A.M.; supervision, A.K. and A.M.; visualization, S.S. and I.B.; writing—original draft, S.S.; writing—review and editing, E.G., A.K. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The research of neuromorphic systems was carried out within the state assignment in the field of scientific activity of the Ministry of Science and Higher Education of the Russian Federation (theme FZUN-2020-0013, state assignment VlSU). The study was carried out using equipment from the interregional multispecialty and interdisciplinary center for the collective usage of promising and competitive technologies in the areas of development and application in industry/mechanical engineering of domestic achievements in the field of nanotechnology (agreement no. 075-15-2021-692 of 5 August 2021). The memristive devices and integrated circuits were fabricated and studied at the facilities of a laboratory of memristor nanoelectronics (state assignment no. FSWR-2022-0009 for the creation of new laboratories for the electronics industry at the NNSU).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

The kind support of Ivan Antonov, Alexey Belov, and Davud Guseinov during the work with memristive devices is acknowledged. The authors are also grateful to Leonid Korolev and Sergey Danilin for their help with the experimental set-up.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rosenblatt, F. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychol. Rev. 1958, 65, 386–408.
2. Mikhaylov, A.N.; Gryaznov, E.G.; Koryazhkina, M.N.; Bordanov, I.A.; Shchanikov, S.A.; Telminov, O.A.; Kazantsev, V.B. Neuromorphic Computing Based on CMOS-Integrated Memristive Arrays: Current State and Perspectives. Supercomput. Front. Innov. 2023, 10, 77–103.
3. Amirsoleimani, A.; Alibart, F.; Yon, V.; Xu, J.; Pazhouhandeh, M.; Ecoffey, S.; Beilliard, Y.; Genov, R.; Drouin, D. In-Memory Vector-Matrix Multiplication in Monolithic Complementary Metal–Oxide–Semiconductor-Memristor Integrated Circuits: Design Choices, Challenges, and Perspectives. Adv. Intell. Syst. 2020, 2, 2000115.
4. Chua, L. Memristor-The Missing Circuit Element. IEEE Trans. Circuit Theory 1971, 18, 507–519.
5. Chua, L.O.; Kang, S.M. Memristive Devices and Systems. Proc. IEEE 1976, 64, 209–223.
6. Strukov, D.B.; Snider, G.S.; Stewart, D.R.; Williams, R.S. The Missing Memristor Found. Nature 2008, 453, 80–83.
7. Shen, Z.; Zhao, C.; Qi, Y.; Xu, W.; Liu, Y.; Mitrovic, I.Z.; Yang, L.; Zhao, C. Advances of RRAM Devices: Resistive Switching Mechanisms, Materials and Bionic Synaptic Application. Nanomaterials 2020, 10, 1437.
8. Li, C.; Graves, C.E.; Sheng, X.; Miller, D.; Foltin, M.; Pedretti, G.; Strachan, J.P. Analog Content-Addressable Memories with Memristors. Nat. Commun. 2020, 11, 1638.
9. Ielmini, D.; Pedretti, G. Device and Circuit Architectures for In-Memory Computing. Adv. Intell. Syst. 2020, 2, 2000040.
10. Mehonic, A.; Sebastian, A.; Rajendran, B.; Simeone, O.; Vasilaki, E.; Kenyon, A. Memristors—From In-Memory Computing, Deep Learning Acceleration, and Spiking Neural Networks to the Future of Neuromorphic and Bio-Inspired Computing. Adv. Intell. Syst. 2020, 2, 2000085.
11. Matsukatova, A.; Prudnikov, N.; Kulagin, V.; Battistoni, S.; Minnekhanov, A.; Trofimov, A.; Nesmelov, A.; Zavyalov, S.; Malakhova, Y.; Parmeggiani, M.; et al. Combination of Organic-Based Reservoir Computing and Spiking Neuromorphic Systems for a Robust and Efficient Pattern Classification. Adv. Intell. Syst. 2023, 5, 2200407.
12. Zhang, Y.; Wang, Z.; Zhu, J.; Yang, Y.; Rao, M.; Song, W.; Zhuo, Y.; Zhang, X.; Cui, M.; Shen, L.; et al. Brain-Inspired Computing with Memristors: Challenges in Devices, Circuits, and Systems. Appl. Phys. Rev. 2020, 7, 011308.
13. Bayat, F.M.; Prezioso, M.; Chakrabarti, B.; Nili, H.; Kataeva, I.; Strukov, D. Implementation of Multilayer Perceptron Network with Highly Uniform Passive Memristive Crossbar Circuits. Nat. Commun. 2018, 9, 2331.
14. Wang, Q.; Wang, X.; Lee, S.H.; Meng, F.-H.; Lu, W.D. A Deep Neural Network Accelerator Based on Tiled RRAM Architecture. In Proceedings of the 2019 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 7–11 December 2019; pp. 14.4.1–14.4.4.
15. Moon, J.; Ma, W.; Shin, J.H.; Cai, F.; Du, C.; Lee, S.H.; Lu, W.D. Temporal Data Classification and Forecasting Using a Memristor-Based Reservoir Computing System. Nat. Electron. 2019, 2, 480–487.
16. Yao, P.; Wu, H.; Gao, B.; Tang, J.; Zhang, Q.; Zhang, W.; Yang, J.J.; Qian, H. Fully Hardware-Implemented Memristor Convolutional Neural Network. Nature 2020, 577, 641–646.
17. Li, X.; Tang, J.; Zhang, Q.; Gao, B.; Yang, J.J.; Song, S.; Wu, W.; Zhang, W.; Yao, P.; Deng, N.; et al. Power-Efficient Neural Network with Artificial Dendrites. Nat. Nanotechnol. 2020, 15, 776–782.
18. Baek, S.; Eshraghian, J.K.; Thio, W.; Sandamirskaya, Y.; Iu, H.H.C.; Lu, W.D. A Real-Time Retinomorphic Simulator Using a Conductance-Based Discrete Neuronal Network. In Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020; pp. 79–83.
19. Baek, S.; Eshraghian, J.K.; Thio, W.; Sandamirskaya, Y.; Iu, H.H.C.; Lu, W.D. Live Demonstration: Video-to-Spike Conversion Using a Real-Time Retina Cell Network Simulator. In Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020; p. 131.
20. Zhou, Y.; Wu, H.; Gao, B.; Wu, W.; Xi, Y.; Yao, P.; Zhang, S.; Zhang, Q.; Qian, H. Associative Memory for Image Recovery with a High-Performance Memristor Array. Adv. Funct. Mater. 2019, 29, 1900155.
21. Lin, P.; Li, C.; Wang, Z.; Li, Y.; Jiang, H.; Song, W.; Rao, M.; Zhuo, Y.; Upadhyay, N.; Barnell, M.; et al. Three-Dimensional Memristor Circuits as Complex Neural Networks. Nat. Electron. 2020, 3, 225–232.
22. Li, C.; Hu, M.; Li, Y.; Jiang, H.; Ge, N.; Montgomery, E.; Zhang, J.; Song, W.; Dávila, N.; Graves, C.E.; et al. Analogue Signal and Image Processing with Large Memristor Crossbars. Nat. Electron. 2018, 1, 52–59.
23. Li, C.; Belkin, D.; Li, Y.; Yan, P.; Hu, M.; Ge, N.; Jiang, H.; Montgomery, E.; Lin, P.; Wang, Z.; et al. Efficient and Self-Adaptive in-Situ Learning in Multilayer Memristor Neural Networks. Nat. Commun. 2018, 9, 2385.
24. Qin, Y.; Bao, H.; Wang, F.; Chen, J.; Li, Y.; Miao, X.S. Recent Progress on Memristive Convolutional Neural Networks for Edge Intelligence. Adv. Intell. Syst. 2020, 2, 2000114.
25. Mikhaylov, A.N.; Shchanikov, S.A.; Demin, V.A.; Makarov, V.A.; Kazantsev, V.B. Neuroelectronics: Towards Symbiosis of Neuronal Systems and Emerging Electronics. Front. Neurosci. 2023, 17, 1227798.
26. Mikhaylov, A.; Pimashkin, A.; Pigareva, Y.; Gerasimova, S.; Gryaznov, E.; Shchanikov, S.; Zuev, A.; Talanov, M.; Lavrov, I.; Demin, V.; et al. Neurohybrid Memristive CMOS-Integrated Systems for Biosensors and Neuroprosthetics. Front. Neurosci. 2020, 14, 358.
27. Lee, S.H.; Zhu, X.; Lu, W.D. Nanoscale Resistive Switching Devices for Memory and Computing Applications. Nano Res. 2020, 13, 1228–1243.
28. Xia, Q.; Yang, J.J. Memristive Crossbar Arrays for Brain-Inspired Computing. Nat. Mater. 2019, 18, 309–323.
29. Wan, W.; Kubendran, R.; Schaefer, C.; Eryilmaz, S.B.; Zhang, W.; Wu, D.; Deiss, S.; Raina, P.; Qian, H.; Gao, B.; et al. A Compute-in-Memory Chip Based on Resistive Random-Access Memory. Nature 2022, 608, 504–512.
30. Chen, W.-H.; Dou, C.; Li, K.-X.; Lin, W.-Y.; Li, P.-Y.; Huang, J.-H.; Wang, J.-H.; Wei, W.-C.; Xue, C.-X.; Chiu, Y.-C.; et al. CMOS-Integrated Memristive Non-Volatile Computing-in-Memory for AI Edge Processors. Nat. Electron. 2019, 2, 420–428.
31. Xue, C.-X.; Chiu, Y.-C.; Liu, T.-W.; Huang, T.-Y.; Liu, J.-S.; Chang, T.-W.; Kao, H.-Y.; Wang, J.-H.; Wei, S.-Y.; Lee, C.-Y.; et al. A CMOS-Integrated Compute-in-Memory Macro Based on Resistive Random-Access Memory for AI Edge Devices. Nat. Electron. 2021, 4, 81–90.
32. Im, I.H.; Kim, S.J.; Jang, H.W. Memristive Devices for New Computing Paradigms. Adv. Intell. Syst. 2020, 2, 2000105.
33. Analog Content Addressable Memory (CAM) Employing Analog Nonvolatile Storage. PubChem. Patent US-6985372-B1, 17 April 2003. Available online: https://pubchem.ncbi.nlm.nih.gov/patent/US-6985372-B1 (accessed on 4 December 2023).
34. Blyth, T.; Khan, S.; Simko, R. A Non-Volatile Analog Storage Device Using EEPROM Technology. In Proceedings of the 1991 IEEE International Solid-State Circuits Conference. Digest of Technical Papers, San Francisco, CA, USA, 13–15 February 1991; pp. 192–315.
35. Klass, P.J. Fiber Optic Device Recognizes Signals. Aviat. Week Space Technol. 1962, 77, 94–101.
36. Widrow, B. An Adaptive “ADALINE” Neuron Using Chemical “Memistor”. Tech. Rep. 1960, 1553.
37. Makarov, V.A.; Lobov, S.A.; Shchanikov, S.; Mikhaylov, A.; Kazantsev, V.B. Toward Reflective Spiking Neural Networks Exploiting Memristive Devices. Front. Comput. Neurosci. 2022, 16, 859874.
38. Kyuma, K.; Lange, E.; Ohta, J.; Hermanns, A.; Banish, B.; Oita, M. Artificial Retinas—Fast, Versatile Image Processors. Nature 1994, 372, 197.
39. Mehonic, A.; Gerard, T.; Kenyon, A.J. Light-Activated Resistance Switching in SiOx RRAM Devices. Appl. Phys. Lett. 2017, 111, 233502.
40. Jang, H.; Liu, C.; Hinton, H.; Lee, M.; Kim, H.; Seol, M.; Shin, H.; Park, S.; Ham, D. An Atomically Thin Optoelectronic Machine Vision Processor. Adv. Mater. 2020, 32, 2002431.
41. Hu, W.; Xiao, F.; Li, T.; Cai, B.; Panin, G.; Wang, J.; Jiang, X.; Xu, H.; Dong, Y.; Song, B.; et al. 2D Materials-Based Photo-Memristors with Tunable Non-Volatile Responsivities for Neuromorphic Vision Processing. Research Square 2022.
42. Samyshkin, V.; Lelekova, A.; Osipov, A.; Bukharov, D.; Skryabin, I.; Arakelian, S.; Kucherik, A.; Kutrovskaya, S. Photosensitive Free-Standing Ultra-Thin Carbyne–Gold Films. Opt. Quant. Electron. 2019, 51, 394.
43. Vasileiadis, N.; Ntinas, V.; Sirakoulis, G.C.; Dimitrakis, P. In-Memory-Computing Realization with a Photodiode/Memristor Based Vision Sensor. Materials 2021, 14, 5223.
44. Chen, S.; Lou, Z.; Chen, D.; Shen, G. An Artificial Flexible Visual Memory System Based on an UV-Motivated Memristor. Adv. Mater. 2018, 30, 1705400.
45. Eshraghian, J.K.; Cho, K.; Zheng, C.; Nam, M.; Iu, H.H.-C.; Lei, W.; Eshraghian, K. Neuromorphic Vision Hybrid RRAM-CMOS Architecture. IEEE Trans. Very Large Scale Integr. VLSI Syst. 2018, 26, 2816–2829.
46. Choi, C.; Kim, H.; Kang, J.-H.; Song, M.-K.; Yeon, H.; Chang, C.S.; Suh, J.M.; Shin, J.; Lu, K.; Park, B.-I.; et al. Reconfigurable Heterogeneous Integration Using Stackable Chips with Embedded Artificial Intelligence. Nat. Electron. 2022, 5, 386–393.
47. Galushkin, A.I. Neural Networks Theory; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2007; ISBN 978-3-540-48124-9.
48. Nicholls, J.G.; Martin, A.R.; Wallace, B.G.; Fuchs, P.A. From Neuron to Brain; Sinauer Associates: Sunderland, MA, USA, 2001; ISBN 978-0-87893-439-3.
49. Zhang, W.; Gao, B.; Tang, J.; Yao, P.; Yu, S.; Chang, M.-F.; Yoo, H.-J.; Qian, H.; Wu, H. Neuro-Inspired Computing Chips. Nat. Electron. 2020, 3, 371–382.
50. Long, L.; Fang, G. A Review of Biologically Plausible Neuron Models for Spiking Neural Networks. AIAA Infotech Aerosp. 2010, 2010, 3540.
51. Hodgkin, A.L.; Huxley, A.F. A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve. J. Physiol. 1952, 117, 500–544.
52. Fitzhugh, R. Impulses and Physiological States in Theoretical Models of Nerve Membrane. Biophys. J. 1961, 1, 445–466.
53. Izhikevich, E.M. Simple Model of Spiking Neurons. IEEE Trans. Neural Netw. 2003, 14, 1569–1572.
54. Segee, B. Methods in Neuronal Modeling: From Ions to Networks, 2nd Edition. Comput. Sci. Eng. 1999, 1, 81.
55. Bower, J.; Beeman, D. The Book of GENESIS—Exploring Realistic Neural Models with the GEneral NEural SImulation System, 2nd ed.; Springer: New York, NY, USA, 1994.
56. Abbott, L.F. Lapicque’s Introduction of the Integrate-and-Fire Model Neuron (1907). Brain Res. Bull. 1999, 50, 303–304.
57. Li, Y.; Su, K.; Chen, H.; Zou, X.; Wang, C.; Man, H.; Liu, K.; Xi, X.; Li, T. Research Progress of Neural Synapses Based on Memristors. Electronics 2023, 12, 3298.
58. Surazhevsky, I.A.; Demin, V.A.; Ilyasov, A.I.; Emelyanov, A.V.; Nikiruy, K.E.; Rylkov, V.V.; Shchanikov, S.A.; Bordanov, I.A.; Gerasimova, S.A.; Guseinov, D.V.; et al. Noise-Assisted Persistence and Recovery of Memory State in a Memristive Spiking Neuromorphic Network. Chaos Solitons Fractals 2021, 146, 110890.
59. Huang, J.; Serb, A.; Stathopoulos, S.; Prodromakis, T. Text Classification in Memristor-Based Spiking Neural Networks. Neuromorphic Comput. Eng. 2023, 3, 014003.
  60. Guo, Y.; Wu, H.; Gao, B.; Qian, H. Unsupervised Learning on Resistive Memory Array Based Spiking Neural Networks. Front. Neurosci. 2019, 13, 812. [Google Scholar] [CrossRef] [PubMed]
  61. Prezioso, M.; Mahmoodi, M.R.; Bayat, F.M.; Nili, H.; Kim, H.; Vincent, A.; Strukov, D.B. Spike-Timing-Dependent Plasticity Learning of Coincidence Detection with Passively Integrated Memristive Circuits. Nat. Commun. 2018, 9, 5311. [Google Scholar] [CrossRef]
  62. Milo, V.; Pedretti, G.; Carboni, R.; Calderoni, A.; Ramaswamy, N.; Ambrogio, S.; Ielmini, D. Demonstration of Hybrid CMOS/RRAM Neural Networks with Spike Time/Rate-Dependent Plasticity. In Proceedings of the 2016 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 3–7 December 2016; pp. 16.8.1–16.8.4. [Google Scholar]
  63. Cheng, R.; Goteti, U.S.; Hamilton, M. Spiking Neuron Circuits Using Superconducting Quantum Phase-Slip Junctions. J. Appl. Phys. 2018, 124, 152126. [Google Scholar] [CrossRef]
  64. Wu, X.; Saxena, V.; Zhu, K. A CMOS Spiking Neuron for Dense Memristor-Synapse Connectivity for Brain-Inspired Computing. In Proceedings of the 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–17 July 2015; pp. 1–6. [Google Scholar]
  65. Nowshin, F.; Yi, Y. Memristor-Based Deep Spiking Neural Network with a Computing-In-Memory Architecture. In Proceedings of the 2022 23rd International Symposium on Quality Electronic Design (ISQED), Santa Clara, CA, USA, 6–7 April 2022; pp. 1–6. [Google Scholar]
  66. Qiu, R.; Dong, Y.; Jiang, X.; Wang, G. Two-Neuron Based Memristive Hopfield Neural Network with Synaptic Crosstalk. Electronics 2022, 11, 3034. [Google Scholar] [CrossRef]
  67. Sun, Z.; Kvatinsky, S.; Si, X.; Mehonic, A.; Cai, Y.; Huang, R. A Full Spectrum of Computing-in-Memory Technologies. Nat. Electron. 2023, 6, 823–835. [Google Scholar] [CrossRef]
  68. Vlasov, D.; Minnekhanov, A.; Rybka, R.; Davydov, Y.; Sboev, A.; Serenko, A.; Ilyasov, A.; Demin, V. Memristor-Based Spiking Neural Network with Online Reinforcement Learning. Neural Netw. 2023, 166, 512–523. [Google Scholar] [CrossRef] [PubMed]
  69. Ismail, M.; Rasheed, M.; Mahata, C.; Kang, M.; Kim, S. Mimicking Biological Synapses with A-HfSiOx-Based Memristor: Implications for Artificial Intelligence and Memory Applications. Nano Converg. 2023, 10, 33. [Google Scholar] [CrossRef]
  70. Shchanikov, S.; Zuev, A.; Bordanov, I.; Danilin, S.; Lukoyanov, V.; Korolev, D.; Belov, A.; Pigareva, Y.; Gladkov, A.; Pimashkin, A.; et al. Designing a Bidirectional, Adaptive Neural Interface Incorporating Machine Learning Capabilities and Memristor-Enhanced Hardware. Chaos Solitons Fractals 2021, 142, 110504. [Google Scholar] [CrossRef]
  71. Yakopcic, C.; Alom, M.Z.; Taha, T.M. Extremely Parallel Memristor Crossbar Architecture for Convolutional Neural Network Implementation. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 1696–1703. [Google Scholar]
  72. Li, C.; Wang, Z.; Rao, M.; Belkin, D.; Song, W.; Jiang, H.; Yan, P.; Li, Y.; Lin, P.; Hu, M.; et al. Long Short-Term Memory Networks in Memristor Crossbar Arrays. Nat. Mach. Intell. 2019, 1, 49–57. [Google Scholar] [CrossRef]
  73. Zhevnenko, D.; Meshchaninov, F.; Kozhevnikov, V.; Shamin, E.; Belov, A.; Gerasimova, S.; Guseinov, D.; Mikhaylov, A.; Gornev, E. Simulation of Memristor Switching Time Series in Response to Spike-like Signal. Chaos Solitons Fractals 2021, 142, 110382. [Google Scholar] [CrossRef]
  74. Nikiruy, K.E.; Emelyanov, A.V.; Demin, V.A.; Sitnikov, A.V.; Minnekhanov, A.A.; Rylkov, V.V.; Kashkarov, P.K.; Kovalchuk, M.V. Dopamine-like STDP Modulation in Nanocomposite Memristors. AIP Adv. 2019, 9, 065116. [Google Scholar] [CrossRef]
  75. Adhikari, S.P.; Yang, C.; Kim, H.; Chua, L.O. Memristor Bridge Synapse-Based Neural Network and Its Learning. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1426–1435. [Google Scholar] [CrossRef] [PubMed]
  76. Biolek, Z.; Biolek, D.; Biolkova, V. SPICE Model of Memristor with Nonlinear Dopant Drift. Radioengineering 2009, 18, 211–244. [Google Scholar]
  77. Stepasyuk, V.Y.; Makarov, V.A.; Lobov, S.A.; Kazantsev, V.B. Synaptic Scaling as an Essential Component of Hebbian Learning. In Proceedings of the 2022 6th Scientific School Dynamics of Complex Networks and their Applications (DCNA), Kaliningrad, Russia, 14–16 September 2022; pp. 270–273. [Google Scholar]
  78. Stasenko, S.; Mikhaylov, A.; Kazantsev, V. Control of Network Bursting in a Model Spiking Network Supplied with Memristor—Implemented Plasticity. Mathematics 2023, 11, 3888. [Google Scholar] [CrossRef]
  79. Tikhov, S.V.; Gorshkov, O.N.; Koryazhkina, M.N.; Antonov, I.N.; Kasatkin, A.P. Light-Induced Resistive Switching in Silicon-Based Metal–Insulator–Semiconductor Structures. Tech. Phys. Lett. 2016, 42, 536–538. [Google Scholar] [CrossRef]
Figure 1. The main features of neuromorphic analog machine vision systems, combining “in-memory computing”, “in-sensor computing”, and neuromorphic architectures. (A) In contrast to existing digital computer vision systems, “in-memory computing” makes it possible to process visual information entirely within the hardware: the ANN models run completely in analog form on computers based on memristive devices. (B) In contrast to existing general-purpose memristor-based computers, neuromorphic analog machine vision systems perform no analog-to-digital or digital-to-analog conversions while capturing visual information via photosensors. For this purpose, “in-sensor computing” devices can be used in the sensory part of the system to capture visual information in analog form, which is then fed to a memristor-based ANN.
Figure 2. Spiking ANNs based on memristive devices. (A) The common architecture includes presynaptic and postsynaptic neurons connected by artificial synapses implemented with memristors. Presynaptic neurons generate spikes encoding the input information. The spikes pass through the memristive synapses and locally change their resistances in accordance with the STDP (spike-timing-dependent plasticity) rule, providing self-learning of the whole ANN. (B) The dependence of the change ΔW in synaptic conductance on the interval Δt between a presynaptic and a postsynaptic spike, for different current synaptic conductance values W. (C) A spiking neuron receives sequences of spikes at its inputs and, under certain conditions, generates a spike at its output. For example, in the LIF (leaky integrate-and-fire) model, each input spike contributes to the neuron's state variable (its membrane potential), which decays over time; if enough spikes arrive within a certain time window, the membrane potential exceeds a threshold and the neuron generates an output spike [2].
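As a minimal illustration of the LIF dynamics and the STDP window described in this caption, the following Python sketch simulates one LIF neuron and a pairwise STDP rule. All parameter values are illustrative assumptions, not values from the article.

```python
import numpy as np

# Minimal LIF neuron (Euler integration). All constants are illustrative.
dt, tau_m, v_th, v_reset = 1e-3, 20e-3, 1.0, 0.0

def lif_run(input_spikes, weights):
    """input_spikes: (T, N) binary array; weights: (N,) synaptic weights."""
    v, out_spikes = 0.0, []
    for t in range(input_spikes.shape[0]):
        v += -v * dt / tau_m + input_spikes[t] @ weights  # leak + weighted input
        if v >= v_th:                                     # threshold crossed
            out_spikes.append(t)
            v = v_reset                                   # reset after firing
    return out_spikes

# Pairwise STDP window, as in panel (B): dW depends on dt = t_post - t_pre.
A_plus, A_minus, tau_plus, tau_minus = 0.01, 0.012, 20e-3, 20e-3

def stdp_dw(delta_t):
    if delta_t > 0:  # pre fires before post -> potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)  # post before pre -> depression
```

In hardware, a function like stdp_dw corresponds to the resistance change induced in a memristive synapse by overlapping pre- and postsynaptic pulses.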
Figure 3. The electrical circuits of the sensory part (input channels) for capturing and encoding visual information in analog form and transmitting it to an ANN without digitization. (A) An input channel for formal ANNs. It consists of a photodiode PD, a load resistor Rload (converting the photocurrent iph into a voltage), an operational amplifier U1, and a bias voltage source Vbias. Reverse biasing operates the photodiode in photoconductive mode, which provides a wider bandwidth, higher sensitivity, and improved linearity (useful for sensitivity control under different weather conditions, times of day, etc.) but also increases noise and dark current. The SPDT and SPST switches select the operating mode of the circuit: the “write mode”, for recording the ANN weights (by changing the memristor's resistance from Rinit to the target RT), and the “inference mode”. Visual information in this case is encoded by the voltage amplitude Vi in each input channel. (B) An input channel for spiking ANNs. It consists of the same elements as the channel in (A), plus an integrator with a threshold U2. The integrator accumulates charge and generates pulses of the same amplitude but different frequencies, like an integrate-and-fire (I&F) neuron. The multiplexer (MUX) U3 and the SPDT switch select the operating modes of the circuit: recording the ANN weights, self-learning (by changing the memristor's resistance from Rinit to Rinf), and inference. Visual information is encoded by the signal frequency fi.
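Numerically, the amplitude encoding of channel (A) reduces to converting photocurrent into a voltage. A sketch under assumed component values (the Rload and gain below are not taken from the article):

```python
# Channel (A): amplitude encoding, V_i ~ gain * i_ph * R_load.
# R_LOAD and GAIN are assumed values for illustration only.
R_LOAD = 10e3  # load resistor, ohms
GAIN = 2.0     # op-amp stage gain

def encode_amplitude(i_ph):
    """Map photocurrent i_ph (A) to the channel output voltage V_i (V)."""
    return GAIN * i_ph * R_LOAD

print(encode_amplitude(10e-6))   # dim pixel:    0.2 V
print(encode_amplitude(100e-6))  # bright pixel: 2.0 V
```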
Figure 4. The most common options for implementing a memristor-based ANN synapse for neuromorphic analog computer vision systems. (A) A unipolar weight is formed with one memristor and can be calculated in different ways: for Equations (1) and (2), the weight-versus-resistance plot is a hyperbola; for Equation (3), it is a straight line from 0 to 1. (B) A bipolar weight is formed by a pair of memristors; the sign is obtained from the differential output. (C) A similar circuit in which the sign is obtained from the differential input: one input x corresponds to the two inputs Vin and −Vin. The weight plot for Equations (4) and (5) is a hyperbola. (D) A bipolar weight formed by the memristor bridge proposed in [75]. The weight plot for Equation (6) is a straight line ranging from 0 to 1. The linear relationship between resistance and weight simplifies the mathematical calculations but requires a much larger number of memristors, which leads to additional resource costs.
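Equations (1)–(6) referenced in this caption appear earlier in the article; the sketch below reproduces only the qualitative behaviors named here (a hyperbolic mapping, a linear mapping, and a signed differential weight) with generic, assumed functional forms and resistance windows.

```python
def weight_hyperbolic(R, R_ref=1e4):
    # Conductance-proportional weight, w ~ 1/R: a hyperbola in R
    # (the qualitative shape attributed to Equations (1), (2), (4), (5)).
    return R_ref / R

def weight_linear(R, R_on=1e3, R_off=1e5):
    # Straight line from 0 (at R = R_off) to 1 (at R = R_on)
    # (the qualitative shape attributed to Equations (3) and (6)).
    return (R_off - R) / (R_off - R_on)

def weight_differential(R_plus, R_minus):
    # Panels (B)/(C): signed weight from a memristor pair, w ~ G+ - G-.
    return 1.0 / R_plus - 1.0 / R_minus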
Figure 5. Architectures of neuromorphic analog machine vision systems based on memristive devices. (A) For formal ANNs, the weights are mapped onto a crossbar array of memristive devices; here, the bipolar weights are obtained through a differential input. The input sensory circuits are connected directly to the inputs of the neurons without digitization. Visual information is encoded by the voltage amplitude, which depends on illumination. (B) For spiking ANNs, presynaptic neurons generate spikes whose frequency depends on illumination. The synaptic weights change during the learning process. A postsynaptic neuron generates spikes when the charge on its membrane exceeds the threshold “Th”. Thus, the entire analog machine vision system is a spiking ANN without analog-to-digital or digital-to-analog converters.
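The analog inference in panel (A) amounts to a matrix–vector multiplication performed by Kirchhoff current summation in the crossbar. A sketch with assumed array dimensions and conductance ranges (the 63-input, 8-output sizing mirrors the experimental chip described later, but the conductance values are made up):

```python
import numpy as np

# Crossbar inference: output currents I = G @ V, with differential input
# pairs (+V, -V) carrying the weight sign. Sizes/ranges are assumptions.
N_IN, N_OUT = 63, 8
G = np.random.uniform(1e-6, 1e-3, size=(N_OUT, 2 * N_IN))  # conductances, S

def crossbar_forward(v_in):
    """v_in: (N_IN,) sensor voltages from the input channels."""
    v = np.empty(2 * N_IN)
    v[0::2], v[1::2] = v_in, -v_in  # interleave +V / -V differential pairs
    return G @ v                    # Kirchhoff summation of column currents
```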
Figure 6. The SPICE model of a formal ANN circuit and an illustration of its operation. The initialization and programming signal generators, as well as the equivalent photodiode circuit, are on the left; the memristor model and the OpAmp-based current-to-voltage converter are on the right. The current plot is shown for the TE point of the memristor. The plot shows the two operating stages of the sensory part: the “write mode”, for recording the weight of an ANN synapse, and the “inference mode”, for multiplying the signal proportional to illumination by the weight of the ANN synapse. Different colors correspond to different illumination levels, i.e., photocurrents from 10 μA (pink) to 100 μA (green).
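The two-stage operation seen in the plot can be mimicked by a toy behavioral model: a stand-in for the SPICE memristor model, with made-up programming dynamics and component values.

```python
# Toy behavioral memristor: resistance drifts toward a target during the
# "write mode" and is read out by Ohm's law during the "inference mode".
class ToyMemristor:
    def __init__(self, r_init=1e5):
        self.r = r_init                      # start at R_init

    def write(self, r_target, pulses=100, rate=0.05):
        for _ in range(pulses):              # programming pulse train
            self.r += rate * (r_target - self.r)

    def read_current(self, v):
        return v / self.r                    # inference: I = V / R

m = ToyMemristor()
m.write(r_target=2e4)                        # program the synaptic weight
print(m.read_current(0.2))                   # 0.2 V input -> roughly 10 uA
```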
Figure 7. The SPICE model of a spiking ANN circuit and an illustration of its operation. In comparison with the previous model (Figure 6), an integrator with a threshold U4 has been added, which converts illumination into pulse frequency. An additional amplifier U1 and two additional switches, S1 and S7, amplify the pulses when the signal is used to potentiate or depress the weights of the first layer of the spiking ANN. The current plot shows that the frequency at which the pulses are fed to the memristor at the TE point varies: it is high for intense illumination (pink) and low for weak illumination (green).
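For an ideal integrator with a threshold, the illumination-to-frequency conversion follows from the time needed to charge the integration capacitor to the threshold: t = C·Vth/iph, so f = iph/(C·Vth). The capacitor and threshold values below are assumptions for illustration.

```python
# Ideal integrate-and-fire front end: time to threshold t = C * V_th / i_ph,
# hence pulse frequency f = i_ph / (C * V_th). C and V_TH are assumed values.
C, V_TH = 100e-12, 1.0  # 100 pF integrator, 1 V threshold

def spike_frequency(i_ph):
    return i_ph / (C * V_TH)  # pulses per second

print(spike_frequency(10e-6))   # weak light:   1e5 pulses/s
print(spike_frequency(100e-6))  # strong light: 1e6 pulses/s
```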
Figure 8. Experimental set-up for measuring the data used in the simulation. (A) The chip consists of a linear 63 × 1 memristive array for the sensory part and a 32 × 8 memristive crossbar array for the ANN neurons; the I–V curves of the memristive devices are shown on the right. (B) The set-up connects several parts: 1, a PC; 2, 3, and 6, PCBs with the necessary electronics; 4, the memristive chip; 5, a light-proof container covering the light sources (LEDs) and the photodiodes (PDs).
Figure 9. Computer modeling of a machine vision system with a formal ANN model. The figure shows examples of visual patterns representing mathematical symbols. These images are fed to the input of a single-layer formal ANN; each pixel of an image corresponds to a particular signal amplitude. After training and transferring the synapse weights to the memristor resistances in the crossbar, each ANN neuron responds to its corresponding class of visual pattern.
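A minimal sketch of the workflow this caption describes: train a single-layer network off-chip, then transfer the weights to crossbar conductances. The differential-pair mapping, the conductance window, and the random stand-in data below are all assumptions, not the article's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((40, 63))                  # stand-in for flattened patterns
Y = np.eye(8)[rng.integers(0, 8, 40)]     # one-hot labels, 8 output neurons
W = np.zeros((8, 63))

for _ in range(200):                      # simple delta-rule training
    out = W @ X.T                         # linear neuron outputs
    W += 0.01 * (Y.T - out) @ X / len(X)  # gradient step on squared error

# Transfer: split each signed weight onto a differential conductance pair.
G_MIN, G_MAX = 1e-6, 1e-3                 # assumed programmable window, S
scale = (G_MAX - G_MIN) / np.abs(W).max()
G_plus = np.where(W > 0, G_MIN + W * scale, G_MIN)
G_minus = np.where(W < 0, G_MIN - W * scale, G_MIN)
```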
Figure 10. Computer modeling of a machine vision system with a spiking ANN model. The architecture of this ANN is the same as that of the formal ANN, but the design additionally contains integrators with a threshold and feedback connections from the outputs to the inputs. The feedback is active only during training. During training, the weights receiving a high-frequency signal tend to increase, while the weights receiving a low-frequency signal tend to decrease. Thus, the ANN neurons learn to recognize the visual patterns.
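The frequency-dependent weight drift described here can be summarized by a rate-based update of the following form; this is an illustrative rule with assumed constants, not the exact plasticity model used in the article.

```python
import numpy as np

def train_step(W, rates, lr=0.01, rate_ref=0.5):
    """W: (n_out, n_in) weights in [0, 1]; rates: (n_in,) input pulse
    frequencies normalized to [0, 1]. Inputs above rate_ref potentiate
    their weights; inputs below it depress them."""
    post = W @ rates / rates.size                # postsynaptic activity (a.u.)
    dW = lr * np.outer(post, rates - rate_ref)   # Hebbian, rate-thresholded
    return np.clip(W + dW, 0.0, 1.0)             # stay in the valid window
```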
Figure 11. Roadmap for further development in this area (DVS: dynamic vision sensor; IR: infrared).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
