* These results are theoretical, based on the RC time constant.
** This result is expected; a 2-1 FFNN is required to solve XOR.
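To illustrate the footnote above, the following is a minimal sketch (using plain NumPy as a software stand-in, not the analog hardware described in this work) showing that a 2-1 feed-forward network, i.e. two hidden neurons and one output, can learn XOR, which a single-layer perceptron cannot.

import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 2 hidden units -> 1 output (a 2-1 FFNN).
W1 = rng.normal(size=(2, 2))
b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1))
b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient descent on mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]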
Conclusion
Realization of an analog feed-forward neural network
Hardware-accelerated machine learning driven by software
Inherent scalability