Purely AI News: For AI professionals in a hurry
Aug. 2, 2020
Sample Factory, a new training framework for Reinforcement Learning, slashes the compute required for state-of-the-art results
An RL agent trained with Sample Factory plays the videogame Doom using the same controls available to humans. Source: https://youtu.be/i-mqY7xFuG0
The massive computing resources needed to train state-of-the-art artificial intelligence systems mean that well-heeled tech companies are leaving academic teams in the dust. A 2018 OpenAI study underscored the problem, finding that the computing power used to train the most powerful AI models is growing extraordinarily fast, doubling every 3.4 months. Now a University of Southern California team, working with Intel Labs, has built a way to train deep reinforcement learning (RL) algorithms on hardware commonly available to academic researchers. The new method may sharply reduce computational demands, providing underfunded academic researchers with an economical way to train models.
In a paper presented at this year's International Conference on Machine Learning (ICML), the team explains how it used a single high-end computer to train a model that achieves state-of-the-art results on the first-person shooter videogame Doom.
Instead of running the three major computation jobs in lockstep (simulating the environment, choosing the next action with the current policy, and using the results to update the model), the system, called Sample Factory, splits them into separate components and allocates resources to each as required. “From my experience, a lot of researchers don’t have access to cutting-edge, fancy hardware,” says lead author Aleksei Petrenko. “We realized that just by rethinking in terms of maximizing the hardware utilization you can actually approach the performance you will usually squeeze out of a big cluster even on a single workstation.”
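The decoupling described above can be sketched in a few lines. This is a hypothetical toy, not the Sample Factory API: a rollout worker simulates the environment, a policy worker picks actions (standing in for GPU inference), and a learner consumes finished transitions. Queues connect the stages so each can run at its own pace instead of waiting for the others.

```python
# Minimal sketch of a decoupled RL pipeline (hypothetical names, not
# the Sample Factory API). Queues let each stage run independently.
import queue
import threading

obs_q = queue.Queue()   # rollout worker -> policy worker
act_q = queue.Queue()   # policy worker -> rollout worker
traj_q = queue.Queue()  # rollout worker -> learner

def rollout_worker(steps):
    obs = 0.0                      # stand-in for an environment observation
    for _ in range(steps):
        obs_q.put(obs)
        action = act_q.get()       # wait for the policy's chosen action
        obs = obs + action         # stand-in for an environment step
        traj_q.put((obs, action))  # ship the transition to the learner
    traj_q.put(None)               # signal end of the rollout

def policy_worker(steps):
    for _ in range(steps):
        obs_q.get()
        act_q.put(1.0)             # stand-in for a batched GPU forward pass

def learner():
    count = 0
    while True:
        item = traj_q.get()
        if item is None:
            break
        count += 1                 # stand-in for a gradient update
    return count

threads = [threading.Thread(target=rollout_worker, args=(100,)),
           threading.Thread(target=policy_worker, args=(100,))]
for t in threads:
    t.start()
processed = learner()
for t in threads:
    t.join()
print(processed)  # transitions consumed by the learner -> 100
```

In the real system each stage is replicated many times (many environment instances, batched policy inference on the GPU), which is what drives the throughput numbers reported below; this sketch only illustrates the separation of concerns.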
They also tackle DeepMind Lab, a suite of 30 unique 3D challenges for RL benchmarking, using a fraction of the normal computing power!
The researchers processed approximately 140,000 frames per second (twice the throughput of the next best approach) on Atari videogames and Doom, using a single computer equipped with a 36-core CPU and a single GPU. The proposed architecture combines a highly efficient, asynchronous, GPU-based sampler with off-policy correction techniques, allowing it to exceed 100,000 environment frames per second on non-trivial 3D control tasks without sacrificing sample efficiency. The researchers also show that Sample Factory can be extended to support self-play and population-based training, and they apply those techniques to train highly capable agents for a multiplayer first-person shooter.
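Off-policy correction is needed because the asynchronous learner trains on trajectories collected by a slightly stale copy of the policy. One standard remedy, used across asynchronous RL systems, is to weight each step by a truncated importance ratio between the current and behavior policies. The function below is an illustrative sketch of that idea (hypothetical helper, not code from the paper):

```python
# Illustrative sketch of truncated importance weighting for off-policy
# correction (hypothetical helper, not the paper's implementation).
# Each step's weight is pi_current(a|s) / pi_behavior(a|s), clipped at
# rho_max so a single stale step cannot dominate the update.
def clipped_importance_weights(pi_current, pi_behavior, rho_max=1.0):
    """Return per-step importance ratios truncated at rho_max."""
    return [min(rho_max, p / b) for p, b in zip(pi_current, pi_behavior)]

weights = clipped_importance_weights([0.5, 0.9, 0.2], [0.5, 0.3, 0.4])
print(weights)  # → [1.0, 1.0, 0.5]
```

Clipping trades a little bias for much lower variance, which is what lets the learner keep pace with a fast asynchronous sampler without destabilizing training.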

The paper notes that the researchers chose the most challenging scenario in first-person shooter games: a duel. Unlike a multiplayer deathmatch, which tends to be chaotic, duel mode requires strategic reasoning, positioning, and spatial awareness. Although agents trained with Sample Factory convincingly defeat scripted in-game bots of the highest difficulty, they are not yet at the level of expert human players. One advantage human players have in a duel is the ability to perceive sound: an expert can hear the sounds produced by the opponent (ammo pickups, shots fired, etc.) and integrate these signals to determine the opponent’s position. While other recent work has beaten human players in certain parts of such games, defeating expert competitors in a full game, as played by humans, would require fusing information from multiple sensory systems.

The code for Sample Factory is available on GitHub.

July 31, 2020

Intel partners with researchers from MIT and Georgia Tech on a code improvement recommendation system, described in "An End-to-End Neural Code Similarity System"

July 25, 2020

Google's TensorFlow Lite framework for deep learning is now more than 2x faster on average, thanks to operator fusion and optimizations for additional CPU instruction sets

July 23, 2020

Fawkes: An AI system that puts an 'invisibility cloak' on images so that facial recognition algorithms cannot reveal people's identities without permission

July 22, 2020

Researchers from Austria propose an AI system that reads sheet music from raw images and accurately aligns it to a given audio recording

July 21, 2020

WordCraft: A Reinforcement Learning environment for training agents that use common-sense knowledge

July 20, 2020

A designer who worked on over 20 commercial projects for a year turns out to be an AI built by the Russian design firm Art. Lebedev Studio

July 19, 2020

Microsoft is developing AI to improve camera-in-display technology for natural perspectives and clearer visuals in video calls

July 18, 2020

Microsoft and Zhejiang Univ. researchers create an AI model that can sing in several languages, including Chinese and English

July 18, 2020

New event-based learning algorithm 'E-Prop', inspired by the human brain, is more efficient than conventional Deep Learning

July 17, 2020

Scientists from the University of California address the false-negative problem of MRI Reconstruction Networks using adversarial techniques

July 16, 2020

A new technique of exposing DeepFakes uses the classical signal processing technique of frequency analysis

July 16, 2020

New AI model by Facebook researchers can recognize five different voices speaking simultaneously, pushing the state of the art forward

July 15, 2020

Researchers from Columbia Univ. and DeepMind propose a new framework for Taylor Expansion Policy Optimization (TayPO)

July 14, 2020

Federated Learning is finally here; Presagen's new algorithm creates higher-performing AI than traditional centralized learning

July 14, 2020

Fujitsu designs a new Deep Learning-based method for dimensionality reduction, inspired by compression technology

July 12, 2020

Databricks donates its immensely popular MLflow framework to the Linux Foundation

July 12, 2020

Microsoft Research restores old photos suffering from severe degradation with a new deep learning-based approach

July 12, 2020

Amazon launches a new AI-based automatic code review service named CodeGuru

July 12, 2020

IBM launches new Deep Learning project: Verifiably Safe Reinforcement Learning (VSRL) framework

July 12, 2020

DevOps for ML gets an upgrade with a new open-source CI/CD library, "Continuous Machine Learning (CML)"

July 12, 2020

DeepMind's newly open-sourced Reinforcement Learning library, dm_control, packs a simple interface to common RL utilities

July 12, 2020

Learning to learn: Google's AutoML-Zero learns to evolve new ML algorithms from scratch