Purely AI News: For AI professionals in a hurry
July 18, 2020
New event-based learning algorithm 'e-prop', inspired by the human brain, is more efficient than conventional Deep Learning
Computer scientists Robert Legenstein (left) and Wolfgang Maass (right) are working on new AI systems that are inspired by the functioning of the human brain and are more energy-efficient. Picture credit: Lunghammer / TU Graz
One of the biggest obstacles to the widespread use of Artificial Intelligence (AI), particularly in mobile applications, is the high energy consumption of learning in artificial neural networks. One reason the brain handles this better is the efficient transmission of information between its neurons: they send brief electrical impulses (spikes) to other neurons, but only as often as absolutely necessary, which conserves energy.

This principle has been adopted in the new machine learning algorithm e-prop (short for e-propagation), developed by a working group led by the two computer scientists Wolfgang Maass and Robert Legenstein of TU Graz. In the algorithm, spikes are fired only when they are needed to transmit information within the network. For such sparsely active networks, learning is a particular challenge, because longer observations are required to determine which neuron connections improve network performance.

E-prop addresses this issue with a decentralized method adapted from the brain, in which each neuron records when its connections are used in a so-called e-trace (eligibility trace). The approach is roughly as effective as the best and most elaborate learning methods known so far. Details have now been published in Nature Communications.
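
To make the e-trace idea concrete, here is a minimal Python sketch of a leaky integrate-and-fire layer in which every synapse keeps a local eligibility trace built from filtered presynaptic spikes and a postsynaptic surrogate derivative. The class name, constants and the exact surrogate derivative are illustrative assumptions, not the formulation from the Nature Communications paper.

```python
import numpy as np

# Illustrative leaky integrate-and-fire (LIF) layer in which every synapse
# keeps a local eligibility trace (e-trace). All names, constants and the
# surrogate derivative are simplifying assumptions for this sketch.
class LIFLayerWithETraces:
    def __init__(self, n_in, n_out, tau_mem=20.0, dt=1.0, threshold=1.0):
        self.w = 0.1 * np.random.randn(n_in, n_out)   # synaptic weights
        self.alpha = np.exp(-dt / tau_mem)            # membrane / trace decay factor
        self.threshold = threshold
        self.v = np.zeros(n_out)                      # membrane potentials
        self.z_in_filtered = np.zeros(n_in)           # low-pass filtered presynaptic spikes
        self.e_trace = np.zeros((n_in, n_out))        # per-synapse eligibility traces

    def step(self, spikes_in):
        """Advance one time step; spikes_in is a 0/1 vector of length n_in."""
        # Leaky integration of incoming spikes.
        self.v = self.alpha * self.v + spikes_in @ self.w
        spikes_out = (self.v > self.threshold).astype(float)
        self.v -= spikes_out * self.threshold         # soft reset of neurons that fired

        # Surrogate ("pseudo") derivative of the non-differentiable spike function.
        psi = np.maximum(0.0, 1.0 - np.abs(self.v - self.threshold))

        # Each synapse records when it was used: filtered presynaptic activity
        # combined with how sensitive the postsynaptic neuron currently is.
        self.z_in_filtered = self.alpha * self.z_in_filtered + spikes_in
        self.e_trace = np.outer(self.z_in_filtered, psi)
        return spikes_out
```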

Many of the machine learning techniques currently in use process all network operations centrally and offline in order to track, every few steps, how the connections were used during the calculations. This requires constant data transfer between memory and processors, which is one of the main reasons for the excessive energy consumption of current AI implementations. E-prop, by contrast, works entirely online and does not even require separate memory during operation, which makes learning far more energy-efficient.
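
Assuming a layer like the one sketched above, an online update could look roughly as follows: at every time step, the per-neuron output error serves as a learning signal that is multiplied with the locally stored e-trace, so no history of past activations has to be kept, unlike backpropagation through time. Function and argument names here are assumptions made for this sketch.

```python
# Illustrative online training loop in the spirit of the article: the weight
# update at each step is the product of a per-neuron learning signal (here
# simply the output error) and the locally stored e-trace, so nothing from
# earlier time steps has to be kept in memory.
def train_online(layer, spike_stream, target_stream, lr=1e-3):
    for spikes_in, target in zip(spike_stream, target_stream):
        spikes_out = layer.step(spikes_in)       # forward step also refreshes the e-traces
        error = spikes_out - target              # learning signal, shape (n_out,)
        layer.w -= lr * error * layer.e_trace    # local update, broadcasts over the input dimension
        # No stored activation history: memory use stays constant over time.
```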

August 2, 2020

Sample Factory, a new training framework for Reinforcement Learning, slashes the compute required for state-of-the-art results

July 31, 2020

Intel teams up with researchers from MIT and Georgia Tech on a code improvement recommendation system, developing "An End-to-End Neural Code Similarity System"

July 25, 2020

Google's TensorFlow Lite framework for deep learning is now more than 2x faster on average, thanks to operator fusion and optimizations for additional CPU instruction sets

July 23, 2020

Fawkes: An AI system that puts an 'invisibility cloak' on images so that facial recognition algorithms cannot reveal people's identities without permission

July 22, 2020

Researchers from Austria propose an AI system that reads sheet music from raw images and accurately aligns it to a given audio recording

July 21, 2020

WordCraft: A Reinforcement Learning environment for training agents with common-sense knowledge

July 20, 2020

A designer who worked on over 20 commercial projects for a year turns out to be an AI built by the Russian design firm Art. Lebedev Studio

July 19, 2020

Microsoft is developing AI to improve camera-in-display technology for natural perspectives and clearer visuals in video calls

July 18, 2020

Microsoft and Zhejiang Univ. researchers create an AI model that can sing in multiple languages, including Chinese and English

July 17, 2020

Scientists from the University of California address the false-negative problem of MRI Reconstruction Networks using adversarial techniques

July 16, 2020

A new method for exposing DeepFakes uses frequency analysis, a classical signal processing technique

July 16, 2020

New AI model by Facebook researchers can recognize five different voices speaking simultaneously, pushing the state of the art forward

July 15, 2020

Researchers from Columbia Univ. and DeepMind propose a new framework for Taylor Expansion Policy Optimization (TayPO)

July 14, 2020

Federated Learning is finally here; Presagen's new algorithm creates higher-performing AI than traditional centralized learning

July 14, 2020

Fujitsu designs a new Deep Learning-based method for dimensionality reduction inspired by compression technology

July 12, 2020

Databricks donates its immensely popular MLflow framework to the Linux Foundation

July 12, 2020

Microsoft Research uses a new deep learning-based approach to restore old photos suffering from severe degradation

July 12, 2020

Amazon launches a new AI-based automatic code review service named CodeGuru

July 12, 2020

IBM launches a new Deep Learning project: the Verifiably Safe Reinforcement Learning (VSRL) framework

July 12, 2020

DevOps for ML gets an upgrade with a new open-source CI/CD library, "Continuous Machine Learning (CML)"

July 12, 2020

DeepMind's newly open-sourced Reinforcement Learning library, dm_control, packs a simple interface to common RL utilities

July 12, 2020

Learning to learn: Google's AutoML-Zero learns to evolve new ML algorithms from scratch
