By Isaac Madan (email)
Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, you can find all past updates here. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further. If you’re a machine learning practitioner or student, join our Talent Network here to get exposed to awesome ML opportunities.
Detecting Pneumonia: CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning by Rajpurkar et al of Stanford ML Group. We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our model, CheXNet, is a 121-layer convolutional neural network that inputs a chest X-ray image and outputs the probability of pneumonia along with a heatmap localizing the areas of the image most indicative of pneumonia. Original paper here.
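The blurb describes two outputs: a pneumonia probability and a heatmap. As a rough sketch of that output head (not the authors' code — the real model is a 121-layer DenseNet over image tensors, and all names here are illustrative), the probability comes from global average pooling plus a logistic classifier, and the heatmap is a class-activation map that reuses the same classifier weights spatially:

```python
import math

def chexnet_head(feature_maps, weights, bias):
    """Toy sketch of a CheXNet-style output head.

    feature_maps: list of C spatial maps, each an HxW grid (list of lists),
                  standing in for the final convolutional block's output.
    weights, bias: a single-logit classifier applied after global
                   average pooling.
    Returns (pneumonia_probability, heatmap), where the heatmap is a
    class-activation map: a weighted sum of the feature maps.
    """
    H, W = len(feature_maps[0]), len(feature_maps[0][0])
    # Global average pooling per channel, then a linear classifier.
    pooled = [sum(sum(row) for row in fm) / (H * W) for fm in feature_maps]
    logit = sum(w * p for w, p in zip(weights, pooled)) + bias
    prob = 1.0 / (1.0 + math.exp(-logit))
    # Heatmap: the same classifier weights applied at each spatial position.
    heatmap = [[sum(w * fm[i][j] for w, fm in zip(weights, feature_maps))
                for j in range(W)] for i in range(H)]
    return prob, heatmap
```

Spatial positions whose features align with the positive-class weights light up in the heatmap, which is what localizes the image regions "most indicative of pneumonia."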
Detecting cracks in nuclear reactors: NB-CNN: Deep Learning-based Crack Detection Using Convolutional Neural Network and Naïve Bayes Data Fusion by Chen et al of Purdue. A system under development at Purdue University uses artificial intelligence to detect cracks captured in videos of nuclear reactors and represents a future inspection technology to help reduce accidents and maintenance costs.
Self-learning robots: A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up. Embodied Intelligence will specialize in complex algorithms that allow machines to learn tasks on their own. Using these methods, existing robots could learn to, for example, install car parts that aren’t quite like the parts they have installed in the past, sort through a bucket of random holiday gifts as they arrive at a warehouse, or perform other tasks that machines traditionally could not. Founded by UC Berkeley professor Pieter Abbeel, former OpenAI researchers Peter Chen and Rocky Duan, and former Microsoft researcher Tianhao Zhang.

Face detection: An On-device Deep Neural Network for Face Detection by Computer Vision Machine Learning Team at Apple. Apple started using deep learning for face detection in iOS 10. With the release of the Vision framework, developers can now use this technology and many other computer vision algorithms in their apps. We faced significant challenges in developing the framework so that we could preserve user privacy and run efficiently on-device. This article discusses these challenges and describes the face detection algorithm.
Palliative care: Improving Palliative Care with Deep Learning by Avati et al of Stanford ML / BMI. Using a deep neural network to identify patients who are likely to benefit from palliative care services and bring them to the attention of palliative care professionals at a hospital for better outreach.
Coding / algorithm design: DLPaper2Code: Auto-generation of Code from Deep Learning Research Papers by Sethi et al. With an abundance of research papers in deep learning, reproducibility or adoption of the existing works becomes a challenge. We propose a novel extensible approach, DLPaper2Code, to extract and understand deep learning design flow diagrams and tables available in a research paper and convert them to an abstract computational graph. The extracted computational graph is then converted into execution ready source code in both Keras and Caffe, in real-time.
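The last stage of the pipeline described above — turning an abstract computational graph into framework source code — can be sketched as a simple template emitter. This is a hypothetical simplification, not the paper's system; the layer schema and function names are invented for illustration:

```python
def graph_to_keras(layers):
    """Toy sketch of code generation from an abstract computational graph.

    layers: an ordered list of layer descriptions, e.g. as extracted from
    a paper's flow diagram. Emits Keras-style source code as a string.
    """
    emitters = {
        "dense": lambda p: f"Dense({p['units']}, activation='{p['act']}')",
        "conv2d": lambda p: f"Conv2D({p['filters']}, {p['kernel']}, "
                            f"activation='{p['act']}')",
    }
    lines = ["model = Sequential()"]
    for layer in layers:
        lines.append(f"model.add({emitters[layer['type']](layer)})")
    return "\n".join(lines)
```

A second emitter table targeting Caffe prototxt instead of Keras would give the paper's dual-backend behavior from the same extracted graph.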
Call for an International Ban on the Weaponization of Artificial Intelligence by Ian Kerr, Geoff Hinton, Richard S. Sutton, Doina Precup and Yoshua Bengio. Open letter by leading AI researchers asking the Canadian government to urgently address the challenge of lethal autonomous weapons (often called “killer robots”) and to take a leading position against Autonomous Weapon Systems on the international stage at the upcoming UN meetings in Geneva.
Announcing TensorFlow Lite by Google TensorFlow team. TensorFlow’s lightweight solution for mobile and embedded devices. Enables low-latency inference of on-device machine learning models. Lightweight, cross-platform, and fast.
SLING: A Natural Language Frame Semantic Parser by Michael Ringgaard and Rahul Gupta of Google.
Introducing TensorFlow Feature Columns by Google TensorFlow team. We’re devoting this article to feature columns — a data structure describing the features that an Estimator requires for training and inference. As you’ll see, feature columns are very rich, enabling you to represent a diverse range of data.
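The core idea — a declarative description of how raw features become model inputs — can be illustrated with a pure-Python analog. These helpers are stand-ins, not the real API (in TensorFlow the equivalents are `tf.feature_column.numeric_column` and `categorical_column_with_vocabulary_list` wrapped in an `indicator_column`):

```python
def numeric_column(key):
    """Pass a raw numeric value through unchanged."""
    return lambda example: [float(example[key])]

def one_hot_column(key, vocabulary):
    """One-hot encode a categorical string against a fixed vocabulary."""
    index = {v: i for i, v in enumerate(vocabulary)}
    def transform(example):
        vec = [0.0] * len(vocabulary)
        vec[index[example[key]]] = 1.0
        return vec
    return transform

def to_input_vector(example, columns):
    """Concatenate each column's representation, as a model's input
    layer would when handed a list of feature columns."""
    out = []
    for col in columns:
        out.extend(col(example))
    return out
```

The point of the abstraction is that the model only ever sees the concatenated numeric vector; the columns carry the knowledge of how each raw field maps into it.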
One Network to Solve Them All — Solving Linear Inverse Problems using Deep Projection Models by Chang et al of CMU. We propose a general framework to train a single deep neural network that solves arbitrary linear inverse problems.
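The framework's key move is to learn a single projector onto the set of natural signals and then reuse it inside a classical solver for any linear inverse problem y = Ax. A minimal sketch of that solver loop, with a hand-written stand-in where the learned network would go (all names are illustrative, and the paper itself uses an ADMM-style scheme rather than this plain projected gradient):

```python
def solve_inverse(A, y, project, steps=200, lr=0.1):
    """Projected gradient descent on ||Ax - y||^2.

    A: measurement matrix (list of rows), y: observations.
    project: a projector onto the signal set -- in the paper, a single
    trained deep network shared across all inverse problems.
    """
    n = len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # Residual r = Ax - y, gradient g = 2 A^T r.
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i]
             for i in range(len(A))]
        g = [2 * sum(A[i][j] * r[i] for i in range(len(A)))
             for j in range(n)]
        x = [xj - lr * gj for xj, gj in zip(x, g)]
        x = project(x)  # pull the iterate back onto the signal set
    return x
```

Because only `A` changes between denoising, inpainting, or compressive sensing, one trained projector serves them all.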
Software 2.0 by Andrej Karpathy of Tesla. The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc. In contrast, Software 2.0 is written in neural network weights.
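The contrast can be made concrete with a toy example (mine, not Karpathy's): the same behavior written once as explicit code and once as weights found by optimization over a dataset:

```python
import math

# Software 1.0: the behavior is specified explicitly in code.
def or_v1(a, b):
    return a or b

# Software 2.0: the behavior is specified by data, and a search
# (here, logistic-regression gradient descent) finds the weights.
def train_or(steps=2000, lr=0.5):
    w1 = w2 = b = 0.0
    data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
    for _ in range(steps):
        for a, x2, y in data:
            p = 1 / (1 + math.exp(-(w1 * a + w2 * x2 + b)))
            grad = p - y  # dLoss/dlogit for log loss
            w1 -= lr * grad * a
            w2 -= lr * grad * x2
            b -= lr * grad
    return lambda a, x2: 1 / (1 + math.exp(-(w1 * a + w2 * x2 + b))) > 0.5
```

The "program" returned by `train_or` lives entirely in three floats; changing the dataset, not the source, changes what it computes.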
Considering TensorFlow for the Enterprise by Sean Murphy and Allen Leis of O’Reilly. Introduces deep learning from an enterprise perspective and offers an overview of the TensorFlow library and ecosystem. If your company is adopting deep learning, this report will help you navigate the initial decisions you must make — from choosing a deep learning framework to integrating deep learning with the other data analysis systems already in place — to ensure you’re building a system capable of handling your specific business needs.
Understanding Hinton’s Capsule Networks. Part I: Intuition and Part II: How Capsules Work by Max Pechyonkin. A few weeks ago, Geoffrey Hinton and his team published two papers that introduced a completely new type of neural network based on so-called capsules. In addition, the team published an algorithm, called dynamic routing between capsules, that makes it possible to train such a network. In this post, I will explain why this new architecture is so important, as well as the intuition behind it. In the following posts I will dive into technical details.
Capsule Networks (CapsNets) — Tutorial by Aurélien Géron. Explanation of CapsNets, a hot new architecture for neural networks, invented by Geoffrey Hinton, one of the godfathers of deep learning.
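The routing-by-agreement algorithm at the heart of both write-ups above can be sketched in a few lines. This is a minimal illustration of the mechanism (scalar coupling logits, no learned transformation matrices), not a faithful CapsNet implementation:

```python
import math

def squash(v):
    """Capsule nonlinearity: keep the direction, squash length into [0, 1)."""
    norm2 = sum(x * x for x in v)
    scale = norm2 / ((1 + norm2) * math.sqrt(norm2)) if norm2 else 0.0
    return [scale * x for x in v]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def route(predictions, iterations=3):
    """Dynamic routing between a lower and an upper capsule layer.

    predictions[i][j] is lower capsule i's prediction vector for
    upper capsule j.
    """
    n_lower, n_upper = len(predictions), len(predictions[0])
    dim = len(predictions[0][0])
    logits = [[0.0] * n_upper for _ in range(n_lower)]
    for _ in range(iterations):
        coupling = [softmax(row) for row in logits]
        outputs = []
        for j in range(n_upper):
            s = [sum(coupling[i][j] * predictions[i][j][k]
                     for i in range(n_lower)) for k in range(dim)]
            outputs.append(squash(s))
        # Strengthen a coupling when prediction and output agree.
        for i in range(n_lower):
            for j in range(n_upper):
                logits[i][j] += sum(p * o for p, o in
                                    zip(predictions[i][j], outputs[j]))
    return outputs
```

Lower capsules that agree reinforce one upper capsule into a long output vector; capsules whose predictions cancel leave their target near zero, which is how agreement replaces max pooling.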
Feature Visualization by Google researchers. Explanation of feature visualization — how neural networks build up their understanding of images. There is a growing sense that neural networks need to be interpretable to humans. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them.
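The basic recipe behind feature visualization is gradient ascent on the input: start from some image and repeatedly nudge it toward whatever most excites a chosen unit. A toy sketch of that loop (using a slow finite-difference gradient so it stays self-contained; real visualizations use backprop plus the regularization tricks the article discusses):

```python
def visualize(activation, x0, steps=100, lr=0.1, eps=1e-5):
    """Gradient ascent on the input to maximize a unit's activation.

    activation: a function from an input vector to a scalar activation.
    x0: the starting input (e.g. noise, in real feature visualization).
    """
    x = list(x0)
    for _ in range(steps):
        for j in range(len(x)):
            # Central finite-difference estimate of d(activation)/dx_j.
            x_hi = list(x); x_hi[j] += eps
            x_lo = list(x); x_lo[j] -= eps
            g = (activation(x_hi) - activation(x_lo)) / (2 * eps)
            x[j] += lr * g
    return x
```

The "number of details" the article refers to is mostly about regularizing this ascent so the optimized input looks natural rather than adversarial.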
ICML 2017: A Review of Deep Learning Papers, Talks, and Tutorials by Satrajit Chatterjee of Two Sigma. A senior Two Sigma researcher provides an overview of some of the most interesting Deep Learning research from ICML 2017.
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto. Complete draft of 2nd edition. A deep, technical and extensive dive into reinforcement learning.
Power & Limits of Deep Learning by Yann LeCun, Director of AI Research at Facebook. LeCun’s talk at AI & the Future of Work conference. 36 min video.
Understanding LSTM and its diagrams by Shi Yan. Great diagrams explaining how an LSTM works.
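The diagrams in that post map directly onto the four gate equations. A minimal scalar version of one LSTM time step (simplified to scalar states so each gate is one line; real cells are vector-valued with matrix weights):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step, following the standard cell diagram.

    W maps each gate name to its (input weight, recurrent weight, bias).
    """
    def gate(name, act):
        w_x, w_h, b = W[name]
        return act(w_x * x + w_h * h_prev + b)
    f = gate("forget", sigmoid)       # how much old cell state to keep
    i = gate("input", sigmoid)        # how much new candidate to write
    g = gate("candidate", math.tanh)  # the proposed new content
    o = gate("output", sigmoid)       # how much cell state to expose
    c = f * c_prev + i * g            # new cell state
    h = o * math.tanh(c)              # new hidden state
    return h, c
```

Saturating the forget gate open and the input gate shut makes the cell state pass through unchanged, which is the "memory conveyor belt" the diagrams emphasize.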
By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or there are resources I should share in a future newsletter, I’d love to hear from you. If you’re a machine learning practitioner or student, join our Talent Network here to get exposed to awesome ML opportunities.
Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.