Google I/O 2021: 11 AI innovations to enrich our lives
Google kicked off its annual developer event, Google I/O 2021, on 18 May. Apart from the eagerly awaited Android 12 announcement, Google also showcased several of the AI projects it is working on. Here are 11 of them:
1. LaMDA
Human conversations are surprisingly complex. They’re grounded in concepts we’ve learned throughout our lives, are composed of responses that are both sensible and specific, and unfold in an open-ended manner. LaMDA — short for “Language Model for Dialogue Applications” — is a machine learning model designed for dialogue and built on Transformer, a neural network architecture that Google invented and open-sourced. This early-stage research could unlock more natural ways of interacting with technology and entirely new categories of helpful applications. Learn more about LaMDA.
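LaMDA itself isn’t publicly available, but the basic shape of a Transformer-based dialogue model can be illustrated with an open stand-in. A minimal sketch, assuming the Hugging Face transformers library and using DialoGPT purely as a placeholder for a dialogue-tuned model:

```python
# Minimal multi-turn dialogue loop with a Transformer model.
# LaMDA is not public; "microsoft/DialoGPT-medium" is an open
# stand-in used here only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history = None  # running token history across turns
for turn in ["Hi there!", "What should I pack for a hike?"]:
    # Encode the user's message, ending with the end-of-sequence token.
    new_ids = tokenizer.encode(turn + tokenizer.eos_token, return_tensors="pt")
    input_ids = new_ids if history is None else torch.cat([history, new_ids], dim=-1)
    # Generate a reply conditioned on the whole conversation so far.
    history = model.generate(input_ids, max_length=200,
                             pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[:, input_ids.shape[-1]:][0],
                             skip_special_tokens=True)
    print(f"User: {turn}\nBot:  {reply}")
```

The key idea, shared with LaMDA, is that each reply is conditioned on the entire open-ended conversation history rather than on the last message alone.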
2. MUM
In 2019, Google launched BERT, a Transformer AI model that can better understand the intent behind your Search queries. The Multitask Unified Model (MUM) is 1,000 times more powerful than BERT. It can learn across 75 languages at once (most AI models train on one language at a time), and it can understand information across text, images, video and more. Google is still in the early days of exploring MUM, but the goal is that one day you’ll be able to type a long, information-dense, and natural-sounding query like “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?” and more quickly find the relevant information you need. Learn more about MUM.
3. Project Starline
Imagine looking through a sort of magic window and seeing another person, life-size and in three dimensions. You can talk naturally, gesture and make eye contact. Project Starline is a technology project that combines advances in hardware and software to enable friends, family and co-workers to feel together, even when they’re cities (or countries) apart. To create this experience, Google is applying research in computer vision, machine learning, spatial audio and real-time compression. Google has developed a light field display system that creates a sense of volume and depth without needing additional glasses or headsets. It feels as if the other person is sitting just across from you, right there. Learn more about Project Starline.
4. The world’s first useful, error-corrected quantum computer
Confronting many of the world’s greatest challenges, from climate change to the next pandemic, will require a new kind of computing. A useful, error-corrected quantum computer will allow us to mirror the complexity of nature, enabling us to develop new materials, better batteries, more effective medicines and more. Google’s new Quantum AI campus — home to research offices, a fabrication facility, and Google’s first quantum data centre — will help Google build that computer before the end of the decade. Learn more about our work on the Quantum AI campus.
5. Maps will help reduce hard-braking moments
Soon, Google Maps will use machine learning to reduce your chances of experiencing hard-braking moments — incidents where you slam hard on your brakes, caused by things like sudden traffic jams or confusion about which highway exit to take.
When you get directions in Maps, Google calculates your route based on a lot of factors, like how many lanes a road has or how direct the route is. With this update, Google will also factor in the likelihood of hard-braking. Maps will identify the two fastest route options for you, and then automatically recommend the one with fewer hard-braking moments (as long as your ETA is roughly the same). These changes have the potential to eliminate over 100 million hard-braking events in routes driven with Google Maps each year. Learn more about our updates to Maps.
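As a rough illustration of that decision rule, here is a minimal sketch in Python. The Route fields, the hard-braking score and the ETA tolerance are all invented for the example; Google’s actual routing model and thresholds are not public:

```python
# A toy version of the route choice described above: take the two
# fastest routes, and if their ETAs are roughly the same, recommend
# the one with the lower predicted hard-braking risk.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    eta_minutes: float
    hard_braking_risk: float  # predicted hard-braking events (assumed metric)

def recommend(routes, eta_tolerance_minutes=2.0):
    # The two fastest candidate routes.
    fastest, runner_up = sorted(routes, key=lambda r: r.eta_minutes)[:2]
    # If ETAs are roughly the same, prefer the route with less hard braking.
    if abs(fastest.eta_minutes - runner_up.eta_minutes) <= eta_tolerance_minutes:
        return min(fastest, runner_up, key=lambda r: r.hard_braking_risk)
    return fastest

routes = [Route("Highway 9", 24.0, 3.1),
          Route("Valley Rd", 25.5, 0.8),
          Route("Old Mill Rd", 33.0, 0.5)]
print(recommend(routes).name)  # -> Valley Rd (similar ETA, less braking)
```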
6. Personalised Memories in Google Photos
With Memories, you can already look back on important photos from years past or highlights from the last week. Using machine learning, Photos will soon be able to identify the less-obvious patterns in your photos. Starting later this summer, when Photos finds a set of three or more photos with similarities like shape or colour, it will highlight these little patterns for you in your Memories. For example, Photos might identify a pattern of your family hanging out on the same couch over the years — something you wouldn’t have ever thought to search for but that tells a meaningful story about your daily life. Learn more about our updates to Google Photos.
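One way to picture this kind of pattern mining: embed each photo as a vector and cluster, keeping only groups of three or more. A toy sketch using random stand-in embeddings and scikit-learn’s DBSCAN (a real system would embed photos with a vision model):

```python
# Group photos by visual similarity and keep "patterns" of 3+ photos.
# Embeddings here are random stand-ins for real vision-model features;
# min_samples=3 mirrors "three or more similar photos".
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Fake embeddings: two tight visual "patterns" plus scattered one-offs.
couch = rng.normal(loc=0.0, scale=0.05, size=(4, 64))
beach = rng.normal(loc=1.0, scale=0.05, size=(3, 64))
misc = rng.normal(loc=0.5, scale=1.0, size=(5, 64))
embeddings = np.vstack([couch, beach, misc])

# Clusters need at least 3 members; everything else is labelled -1 (noise).
labels = DBSCAN(eps=0.8, min_samples=3).fit_predict(embeddings)
for cluster in set(labels) - {-1}:
    members = np.flatnonzero(labels == cluster)
    print(f"Pattern {cluster}: photos {members.tolist()}")
```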
7. Cinematic moments
When you’re trying to get the perfect photo, you usually take the same shot two or three (or 20) times. Using neural networks, Google can take two nearly identical images and fill in the gaps by creating new frames in between. This creates vivid, moving images called Cinematic moments.
Producing this effect from scratch would take professional animators hours but with machine learning, Photos can automatically generate these moments and bring them to your Recent Highlights. Best of all, you don’t need a specific phone; Cinematic moments will come to everyone across Android and iOS. Learn more about Cinematic moments in Google Photos.
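For a feel of what “creating new frames in between” involves, here is a rough sketch using classical optical flow with OpenCV. Google Photos uses learned models, and the input file names here are assumed, so treat this purely as an illustration of synthesising in-between frames:

```python
# Synthesise in-between frames from two near-identical shots by
# estimating dense motion (optical flow) and warping part-way along it.
import cv2
import numpy as np

a = cv2.imread("shot1.jpg")  # assumed input file names
b = cv2.imread("shot2.jpg")

gray_a = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
gray_b = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)
# Dense motion field from frame A to frame B.
flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

h, w = gray_a.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
for i, t in enumerate([0.25, 0.5, 0.75]):  # three in-between frames
    # Backward-sample frame A part-way along the motion toward B.
    map_x = (grid_x - flow[..., 0] * t).astype(np.float32)
    map_y = (grid_y - flow[..., 1] * t).astype(np.float32)
    frame = cv2.remap(a, map_x, map_y, cv2.INTER_LINEAR)
    cv2.imwrite(f"between_{i}.jpg", frame)
```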
8. New writing features in Google Workspace
In Google Workspace, assisted writing will suggest more inclusive language when applicable. For example, it may recommend that you use the word “chairperson” instead of “chairman” or “mail carrier” instead of “mailman.” It can also flag stylistic issues such as passive voice and offensive language, which can speed up editing and help make your writing stronger. Learn more about our updates to Workspace.
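Under the hood, suggestions like these can be imagined as pattern-plus-replacement rules over the text. A toy sketch, with an invented word list that is not Google’s actual one:

```python
# Rule-based inclusive-language suggestions: find whole-word matches
# and propose an alternative. The word pairs are illustrative only.
import re

SUGGESTIONS = {
    "chairman": "chairperson",
    "mailman": "mail carrier",
    "policeman": "police officer",
}

def suggest(text: str):
    for word, replacement in SUGGESTIONS.items():
        # Whole-word, case-insensitive matches only.
        for match in re.finditer(rf"\b{word}\b", text, re.IGNORECASE):
            yield match.start(), word, replacement

for pos, word, repl in suggest("The chairman asked the mailman to wait."):
    print(f'offset {pos}: consider "{repl}" instead of "{word}"')
```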
9. Shopping Graph
To help shoppers find what they’re looking for, Google needs to have a deep understanding of all the products that are available, based on information from images, videos, online reviews and even inventory in local stores. Enter the Shopping Graph: Google’s AI-enhanced model tracks products, sellers, brands, reviews, product information and inventory data — as well as how all these attributes relate to one another. With people shopping across Google more than a billion times a day, the Shopping Graph makes those sessions more helpful by connecting people with over 24 billion listings from millions of merchants across the web. Learn how we’re working with merchants to give you more ways to shop.
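Conceptually, that is a knowledge graph: typed entities connected by typed relations. A toy sketch with made-up products and relations (the real Shopping Graph’s schema is not public):

```python
# A tiny shopping-style knowledge graph: entities linked by typed edges.
# All names and relations below are invented for illustration.
from collections import defaultdict

edges = defaultdict(list)  # subject -> [(relation, object), ...]

def add(subject, relation, obj):
    edges[subject].append((relation, obj))

add("TrailRunner 5", "sold_by", "Alpine Outfitters")
add("TrailRunner 5", "brand", "Peak")
add("TrailRunner 5", "has_review", "4.6 stars (1,203 reviews)")
add("Alpine Outfitters", "local_inventory", "12 in stock near you")

def describe(entity, depth=0):
    # Walk outgoing edges to assemble everything known about an entity.
    for relation, obj in edges[entity]:
        print("  " * depth + f"{entity} --{relation}--> {obj}")
        describe(obj, depth + 1)

describe("TrailRunner 5")
```

Linking products, sellers, reviews and inventory as one graph is what lets a single query pull back everything relevant to a listing at once.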
10. Dermatology assist tool
Each year, we see billions of Google Searches related to skin, nail and hair issues, but it can be difficult to describe what you’re seeing on your skin through words alone.
With Google’s CE-marked, AI-powered dermatology assist tool, a web-based application that will be available for early testing in the EU later this year, it’s easier to figure out what might be going on with your skin. Simply use your phone’s camera to take three images of the skin, hair or nail concern from different angles. You’ll then be asked questions about your skin type, how long you’ve had the issue and other symptoms that help the AI narrow down the possibilities. The AI model analyses all of this information and draws on its knowledge of 288 conditions to give you a list of possible conditions that you can then research further. It’s not meant to be a replacement for diagnosis but rather a good place to start. Learn more about our AI-powered dermatology assist tool.
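The final “list of possible conditions” step can be pictured as scoring 288 classes and returning the most likely few. A minimal sketch with a random stand-in for the model’s output (the real tool also conditions on your photos and questionnaire answers):

```python
# Score 288 candidate conditions and return the top-k most likely.
# Logits are random stand-ins; condition names are placeholders.
import numpy as np

rng = np.random.default_rng(42)
CONDITIONS = [f"condition_{i}" for i in range(288)]  # placeholder names

def top_conditions(logits, k=3):
    # Softmax over the 288 class scores, then take the k most likely.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = np.argsort(probs)[::-1][:k]
    return [(CONDITIONS[i], float(probs[i])) for i in best]

logits = rng.normal(size=288)  # stand-in for real model output
for name, p in top_conditions(logits):
    print(f"{name}: {p:.1%}")
```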
11. Improved tuberculosis screening
Tuberculosis (TB) is one of the leading causes of death worldwide, infecting 10 million people per year and disproportionately affecting people in low-to-middle-income countries. It’s also tough to diagnose early because its symptoms are similar to those of other respiratory diseases. Chest X-rays help with diagnosis, but experts aren’t always available to read the results. That’s why the World Health Organization (WHO) recently recommended using technology to help with screening and triaging for TB. Researchers at Google are exploring how AI can be used to identify potential TB patients for follow-up testing, hoping to catch the disease early and work to eradicate it. Learn more about our ongoing research into tuberculosis screening.
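In the simplest terms, screening-and-triage reduces to thresholding a model’s risk score and referring anyone above it for confirmatory testing. A toy sketch with invented scores and an invented threshold; real deployments tune thresholds per population:

```python
# Flag chest X-rays whose model risk score exceeds a screening
# threshold for follow-up testing. Scores and threshold are made up.
def triage(scores, threshold=0.35):
    # A low threshold favours sensitivity: better to over-refer than miss TB.
    return [patient for patient, score in scores if score >= threshold]

scores = [("patient_001", 0.82), ("patient_002", 0.12), ("patient_003", 0.41)]
print(triage(scores))  # -> ['patient_001', 'patient_003']
```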