“Google’s Hand-Fed AI Now Gives Answers, Not Just Search Results” (Wired, Nov 29):
“These ‘sentence compression algorithms’ just went live on the desktop incarnation of the search engine. They handle a task that’s pretty simple for humans but has traditionally been quite difficult for machines. They show how deep learning is advancing the art of natural language understanding, the ability to understand and respond to natural human speech. ‘You need to use neural networks—or at least that is the only way we have found to do it,’ Google research product manager David Orr says of the company’s sentence compression work. ‘We have to use all of the most advanced technology we have.’” https://www.wired.com/2016/11/
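The article does not describe the model itself, but Google's published work on this task from the same period (Filippova et al., "Sentence Compression by Deletion with LSTMs," EMNLP 2015) frames compression as labeling each token keep-or-delete. The sketch below is a minimal, hypothetical Keras illustration of that formulation; the vocabulary, layer sizes, and toy data are assumptions, not the production system.

```python
# Hypothetical sketch: sentence compression as per-token keep/delete labeling,
# in the spirit of Filippova et al. (EMNLP 2015). Not Google's production code.
import numpy as np
from tensorflow import keras

VOCAB = 10_000   # assumed vocabulary size
MAXLEN = 40      # assumed max sentence length (padded)

model = keras.Sequential([
    keras.layers.Embedding(VOCAB, 128),  # token ids -> dense vectors
    keras.layers.Bidirectional(keras.layers.LSTM(256, return_sequences=True)),
    # One sigmoid per token: probability that the token is kept in the compression.
    keras.layers.TimeDistributed(keras.layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy usage: x holds padded token ids, y holds 0/1 keep labels per token.
x = np.random.randint(1, VOCAB, size=(8, MAXLEN))
y = np.random.randint(0, 2, size=(8, MAXLEN, 1)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
keep_probs = model.predict(x[:1])  # tokens above a threshold form the compression
```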
"Deep Learning for Detection of Diabetic Eye Disease" (Google Research Blog November 29)
"Diabetic retinopathy (DR) is the fastest growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide. If caught early, the disease can be treated; if not, it can lead to irreversible blindness. Unfortunately, medical specialists capable of detecting the disease are not available in many parts of the world where diabetes is prevalent. Working closely with doctors both in India and the US, we created a development dataset of 128,000 images which were each evaluated by 3-7 ophthalmologists from a panel of 54 ophthalmologists. This dataset was used to train a deep neural network to detect referable diabetic retinopathy. We then tested the algorithm’s performance on two separate clinical validation sets totalling ~12,000 images, with the majority decision of a panel 7 or 8 U.S. board-certified ophthalmologists serving as the reference standard. The results show that our algorithm’s performance is on-par with that of ophthalmologists. For example, on the validation set described in Figure 2, the algorithm has a F-score (combined sensitivity and specificity metric, with max=1) of 0.95, which is slightly better than the median F-score of the 8 ophthalmologists we consulted (measured at 0.91)." https://research.googleblog.com/2016/11/deep-learning-for-detection-of-diabetic.html?m=1
"9 Ways that Google Cloud Machine Learning can help businesses" (Google Blog, November 22) "Maximize job recruitment with Google Cloud Jobs API, Analyze images faster with Vision API and for 80% less cost, Analyze long form docs with Cloud Translation API, Explore, Natural Language Processing translates questions into useful formulas and offers up instant answers in Google Sheets, Graphical Processing Units (GPUs) are great for medical analysis, financial calculations, seismic/subsurface exploration, machine learning, video rendering, transcoding, scientific simulations and more., Quick Access to Google Drive on Android devices to easily and instantly access files, Smart Scheduling in G Suite can now schedule a time and book rooms with machine assistance that includes room suggestions based on previous bookings and time suggestions that account for conflicts easiest to resolve, such as recurring 1:1 meetings., Explore in Google Docs taps into Google’s search engine and machine intelligence to add suggestions based on content within documents. It recommends related topics, images and more for web and mobile docs creation., Format presentations faster: Explore in Google Slides adds ease and speed to creating the most presentable presentations, with design suggestions based on slide content." https://blog.google/topics/google-cloud/9-new-ways-google-cloud-machine-learning-can-help-businesses/
“This Algorithm May Well Save Your Eyesight” (Fortune, Nov 29): “How transformative can it be when you teach a computer to read images? Well, we’re getting an early glimpse of that this morning with the release of a JAMA paper by a team of Google researchers who trained a deep convolutional neural network to read photomicroscopic images of the backs of human eyes. Varun Gulshan, Lily Peng, and colleagues used a deep learning algorithm to study 128,175 retinal images drawn from patients in the U.S. and India that were later reviewed for diabetic retinopathy (DR) by a group of 54 U.S.-licensed ophthalmologists. DR is a condition in which the tiny blood vessels in the light-sensitive tissue that lines the back of the eye (the retina) deteriorate. Chronic high blood sugar can damage the vessels, causing them to bleed or leak fluid, which distorts vision and can lead to blindness—a risk of profound concern to 415 million people with diabetes around the world.” http://fortune.com/2016/11/29/
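The underlying JAMA paper (Gulshan et al., 2016) describes a deep convolutional network in the Inception-v3 family trained on graded fundus photographs. As a rough, hypothetical sketch of that style of setup (not the authors' code; the image size, optimizer, and binary framing are illustrative assumptions):

```python
# Hypothetical sketch: binary "referable diabetic retinopathy" classifier
# built on Inception-v3 features. Illustrative only, not the JAMA authors' code.
from tensorflow import keras

base = keras.applications.InceptionV3(
    include_top=False, weights="imagenet",   # reuse ImageNet features
    pooling="avg", input_shape=(299, 299, 3),
)
output = keras.layers.Dense(1, activation="sigmoid")(base.output)  # P(referable DR)
model = keras.Model(base.input, output)
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    # Sensitivity (recall) is one of the clinically relevant metrics in the paper.
    metrics=[keras.metrics.Recall(name="sensitivity")],
)
# model.fit(fundus_images, referable_labels, ...) would train on graded photographs.
```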
“Google Assistant now speaks Hindi in Allo messaging app” (VentureBeat, Dec 5): “Google is expanding the linguistic capabilities of the virtual assistant that powers its Allo messaging app with the news that it’s now conversant in Hindi. Google first announced the Google Assistant back in May, serving up a direct competitor to the likes of Microsoft’s Cortana, Apple’s Siri, and Amazon’s Alexa. Google’s incarnation is currently available to anyone via the Allo messaging app, which launched in September, though it is also integrated into the company’s Pixel smartphones and the Google Home wireless speaker. Allo represents Google’s effort to create a smart messaging app that helps you stay in touch with all your friends, while also helping you plan events and find information. It promises to 'keep your conversation going' through its intelligent assistant and offers a 'smart reply' feature that suggests responses to messages based on the context of the conversation.” http://venturebeat.com/2016/
“Google DeepMind and Elon Musk open their AI platforms to researchers” (Engadget, Dec 5): “Artificial intelligence got a big push today as both Google and OpenAI announced plans to open-source their deep learning code. Elon Musk’s OpenAI released Universe, a software platform that ‘lets us train a single [AI] agent on any task a human can complete with a computer.’ At the same time, Google parent Alphabet is putting its entire DeepMind Lab training environment codebase on GitHub, helping anyone train their own AI systems. DeepMind first burrowed into the public consciousness by defeating a world champion at the notoriously difficult game Go. However, to advance deep learning further, Alphabet says that such AI ‘agents’ require highly detailed environments to serve as laboratories for AI research. The company is now open-sourcing that environment, called DeepMind Lab, to any programmers who want to use it. ‘DeepMind Lab is a fully 3D game-like platform tailored for agent-based AI research,’ Alphabet said in a blog post. The agent floats around the environment, levitating and moving via thrusters, with a virtual camera that can track around its ‘body.’” https://www.engadget.com/2016/
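For anyone who wants to try the newly open-sourced environment, the interaction loop with the deepmind_lab Python module (per the github.com/deepmind/lab documentation) looks roughly like the sketch below; the level name and frame settings follow the repo's examples, observation names vary across releases, and the zero action vector is a placeholder for a real agent's policy.

```python
# Rough sketch of the DeepMind Lab Python API from the open-sourced repo
# (github.com/deepmind/lab). Names follow the repo's examples; versions differ.
import numpy as np
import deepmind_lab

env = deepmind_lab.Lab(
    "seekavoid_arena_01",        # example level shipped with the repo
    ["RGB_INTERLEAVED"],         # request raw pixel observations
    config={"width": "84", "height": "84"},
)
env.reset()

for _ in range(100):
    if not env.is_running():
        env.reset()              # episode ended; start a new one
    # A real agent would pick an action; zeros are a "do nothing" placeholder.
    action = np.zeros((7,), dtype=np.intc)
    reward = env.step(action, num_steps=4)         # repeat the action for 4 frames
    frame = env.observations()["RGB_INTERLEAVED"]  # H x W x 3 pixel array
```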
"Google Artificial Intelligence Whiz Describes Our Sci-Fi Future" (Fortune, November 26)
"The Google Brain research team has created over 1,000 so-called deep learning projects that have supercharged many of Google’s products over the past few years like YouTube, translation, and photos. With deep learning, researchers can feed huge amounts of data into software systems called neural nets that learn to recognize patterns within the vast information faster than humans. Human vision is trained mostly by unsupervised learning. You’re a small child and you observe the world, but occasionally you get a supervised signal where someone would say, “That’s a giraffe” or “That’s a car.” And that’s your natural mental model of the world in response to that small amount of supervised data you got. We need to use more of a combination of supervised and unsupervised learning. We’re not really there yet, in terms of how most of our machine learning systems work." http://fortune.com/2016/11/26/google-artificial-intelligence-jeff-dean/