Google weaves AI and machine learning into core products at I/O 2017

On Wednesday, at the 2017 Google I/O developer conference in Mountain View, CA, Google CEO Sundar Pichai said that the company is rethinking all of its products with a renewed focus on machine learning and artificial intelligence (AI).

One recent example of the company’s use of machine learning is Google Home, the company’s smart speaker powered by Google Assistant, which uses deep learning to let multiple users share a single Google Home unit. Pichai also announced that the machine learning-driven Smart Reply feature is coming to Gmail on iOS and Android as well.

SEE: Google rolls out AI-powered Smart Reply for Gmail to iOS and Android

One of the big announcements at I/O was Google Lens, a set of vision-based computing capabilities that seeks to understand what a user is looking at with their smartphone’s camera, and help them take action based on that information. For example, a user can take a picture of a flower, and Lens will tell the user the kind of flower it is, Pichai said. Users will also be able to point their phone at a router’s Wi-Fi label, and Lens will read the network name and password and connect the phone to that network. Google Lens will initially be rolled out to Google Assistant and Google Photos in the coming weeks.

At last year’s I/O, Pichai spoke about how computing was moving from mobile-first to AI-first, and that theme continued in 2017. Pichai said that Google is rethinking its computational architecture to build “AI-first data centers.”

Another infrastructure initiative is the Cloud Tensor Processing Unit (TPU), optimized to accelerate machine learning workloads for both training and inference. The Cloud TPU is available on Google Compute Engine now.

Google is also consolidating its AI efforts and teams under Google.ai, a central point for the firm’s research, tools, and applied AI. Additionally, Pichai said, Google is working on AutoML, new software that uses neural networks to build other neural networks.

Google Assistant, Google’s AI-based voice assistant, is now available on the iPhone, and is coming to Android TV as well. Users can also now type requests to Google Assistant on their smartphone, if they don’t want to be overheard.

Image: CNET

A new Google Assistant SDK will allow companies to build Google Assistant into their own products. At a glance, this seems like Google’s answer to Amazon Lex. Additionally, Actions on Google will now support transactions like payments, identity management, account creation, and more.

Google Home itself got four new updates: Proactive Assistance, hands-free calling, free Spotify integration, and visual responses on smartphones and TVs.

Machine learning will also power much of Google Photos. At I/O 2017, Google leaders announced machine learning-based features like Suggested Sharing, Shared Libraries, and the ability to make physical photo books out of a user’s images in Google Photos.

Dave Burke, Google’s vice president of engineering for Android, also took the stage at I/O to give an update on Android O. The upcoming Android OS version centers on two core themes: Fluid Experiences and Vitals.

Fluid Experiences will bring picture-in-picture, notification dots, enhanced autofill for Android apps, and Smart Text Selection for easier copying and pasting, Burke said. Additionally, Google is launching TensorFlow Lite, a lightweight version of its open source machine learning library for applications, and a new neural network API.
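TensorFlow Lite was only announced at the conference, but as a rough sketch of what on-device inference with an interpreter-style API looks like, the Kotlin fragment below loads a model and runs a single prediction. The model file name, tensor shapes, and class count are hypothetical illustrations, not details from Google’s announcement.

    import org.tensorflow.lite.Interpreter
    import java.io.File

    fun classifyImage() {
        // Hypothetical example: classify a single 224x224 RGB image with a
        // bundled TensorFlow Lite model. File name and shapes are illustrative.
        val interpreter = Interpreter(File("mobilenet_v1.tflite"))

        // Input tensor: one image, 224x224 pixels, 3 color channels.
        val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }

        // Output tensor: one score per class (1,001 labels assumed here).
        val output = Array(1) { FloatArray(1001) }

        // Run a single inference pass on-device; no network call is involved.
        interpreter.run(input, output)

        val bestClass = output[0].indices.maxByOrNull { output[0][it] }
        println("Predicted class index: $bestClass")

        interpreter.close()
    }

Because the interpreter runs the model locally, no network round trip is needed, which is the main appeal of a lightweight, on-device library.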

Vitals is focused on keeping Android phones secure and healthy, working to maximize power and performance. One new feature called Google Play Protect scans apps to look for malicious code, Burke said. The Play Console Dashboard will analyze apps for battery drain, their tendency to cause crashes, and how they affect the speed of the UI. Android Studio Profilers, another new tool, helps developers understand how their app affects the phone’s performance.

The Kotlin programming language is now officially supported on Android. And Google is launching a lightweight version of the OS, called Android Go, that optimizes the latest release of Android to run smoothly on entry-level devices, along with new lightweight apps and a custom version of the Play Store.
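For developers, first-class Kotlin support means Android components can be written directly in Kotlin alongside existing Java code. Below is a minimal, illustrative Activity using the support-library APIs of the time; the class name and displayed text are placeholders, not anything shown at the keynote.

    import android.os.Bundle
    import android.support.v7.app.AppCompatActivity
    import android.widget.TextView

    // Minimal illustrative Activity written in Kotlin rather than Java.
    class HelloKotlinActivity : AppCompatActivity() {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)

            // Build the UI programmatically to keep the sketch self-contained.
            val greeting = TextView(this).apply {
                text = "Hello from Kotlin on Android"
            }
            setContentView(greeting)
        }
    }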

Standalone virtual reality (VR) headsets are coming soon to Google Daydream through partnerships between Google and manufacturers like HTC and Lenovo. Google is also launching new VR and AR experiences for education deployments.
