Google’s WaveNet machine learning-based speech synthesis comes to Assistant


Last year, Google showed off WaveNet, a new way of generating speech that didn’t rely on a bulky library of word bits or cheap shortcuts that result in stilted speech. WaveNet used machine learning to build a voice sample by sample, and the results were, as I put it then, “eerily convincing.” Previously bound to the lab, the tech has now been deployed in the latest version of Google Assistant.

The general idea behind the tech was to recreate words and sentences not by hand-coding grammatical and tonal rules, but by letting a machine learning system learn those patterns from speech and generate them sample by sample. A sample, in this case, is the audio value generated every 1/16,000th of a second.
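The per-sample loop can be sketched as below. This is only an illustration of the autoregressive idea, where each new sample is conditioned on the ones before it; the `toy_model` function is a stand-in, not WaveNet's actual dilated-convolution network, and the parameter names are invented for the example.

```python
def generate_autoregressive(model, sample_rate=16000, duration_s=0.01, context=16):
    """Generate audio one sample at a time, each new sample conditioned
    on the samples that came before it (the core idea behind WaveNet)."""
    samples = []
    for _ in range(int(sample_rate * duration_s)):
        history = samples[-context:]      # recent samples as conditioning
        samples.append(model(history))    # predict the next sample
    return samples

# Toy "model": a simple decay based on the last sample, NOT a trained network.
def toy_model(history):
    return 0.9 * history[-1] if history else 1.0

audio = generate_autoregressive(toy_model)  # 160 samples for 0.01 s at 16 kHz
```

The expensive part in the real system is that `model` is a deep neural network that must run once per sample — 16,000 forward passes for a single second of audio — which is what made the original WaveNet so slow.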

At the time of its first release, WaveNet was extremely computationally expensive, taking a full second of compute to generate 0.02 seconds of sound — so a two-second clip like “turn right at Cedar Street” would take nearly two minutes to generate. As such, it was poorly suited to actual use (you’d have missed your turn by then) — which is why Google engineers set about improving it.

The new, improved WaveNet generates sound at 20x real time — producing the same two-second clip in a tenth of a second. It even creates sound at higher fidelity: 24,000 samples per second at 16 bits, up from 16,000 samples per second at 8 bits. Not that high-fidelity sound can really be appreciated through a smartphone speaker, but given today’s announcements, we can expect Assistant to appear in many more places soon.
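The arithmetic behind those figures is straightforward, and worth spelling out:

```python
# Figures from the article: the old WaveNet needed 1 s of compute per
# 0.02 s of audio; the new version runs at 20x real time.
clip_s = 2.0

old_compute_s = clip_s * (1.0 / 0.02)   # 100 s -> "nearly two minutes"
new_compute_s = clip_s / 20.0           # 0.1 s -> "a tenth of a second"

# Raw data rate: 24,000 samples/s at 16 bits vs the old 16,000 at 8 bits.
new_bits_per_s = 24000 * 16             # 384,000 bits/s
old_bits_per_s = 16000 * 8              # 128,000 bits/s
```

So the new system is roughly 1,000x faster end to end while producing three times as much raw audio data per second.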

The voices generated by WaveNet sound considerably better than the state-of-the-art concatenative systems used previously:

Old and busted: [audio sample]

New and hot: [audio sample]

(More samples are available at the DeepMind blog post, though presumably the Assistant will also sound like this soon.)

WaveNet also has the admirable quality of being extremely easy to scale to other languages and accents. If you want it to speak with a Welsh accent, there’s no need to go in and fiddle with the vowel sounds yourself. Just give it a couple dozen hours of a Welsh person speaking and it’ll pick up the nuances itself. That said, the new voice is only available for U.S. English and Japanese right now, with no word on other languages yet.

In keeping with the trend of “big tech companies doing what the other big tech companies are doing,” Apple, too, recently revamped its assistant (Siri, don’t you know) with a machine learning-powered speech model. That one’s different, though: it didn’t go so deep into the sound as to recreate it at the sample level, but stopped at the (still quite low) level of half-phones, or fractions of a phoneme.

The team behind WaveNet plans to publish its work publicly soon, but for now you’ll have to be satisfied with their promises that it works and performs much better than before.


Trying out Google’s ‘Stranger Things’ AR stickers on the new Pixel 2


Google has new AR stickers coming to its Pixel devices, with rollout set to begin later this year. They use Google’s ARCore, which is designed as an easy way for Android developers to bring augmented reality experiences to device users.

The AR stickers are built right into the camera app on the Pixel (once they make their way out via software update), and they can be selected like any other image-capture mode. You then just drop them into the scene you want to capture, with your camera in live preview, and you can edit their size and position.

Stickers are animated, and can interact with each other. I took a picture with the Demogorgon and with Eleven, and I also tested out positioning the stickers and walking around them. They really seemed to stick in place well, and actually worked better in person than the shaky on-stage demo would’ve led me to believe.

The stickers are a fun feature that’ll probably get a lot of use via promotional tie-ins like this one, but their long-term usefulness remains to be seen.


Hands-on with Google’s Pixel 2 XL


Google’s new Pixel 2 XL phone is a major upgrade in a number of ways. It’s a super sleek device, with a striking front-face design and a remarkably bright, vibrant display that renders colors in a truly stunning way. The phone feels light yet durable, and has a camera that, even in very limited testing, feels like it’ll easily be a category leader.

The Pixel 2 XL’s display occupies most of its front face, allowing for a generous 6-inch diagonal with plenty of screen real estate. The QHD resolution renders both text and images in crisp detail – which is great news, because the camera captures images with a sharpness that shows well on this new screen, which uses pOLED tech for deep blacks along with a wide color gamut for better rendering.

[Image gallery: Pixel 2 XL front, rear, camera, and Google Assistant]

It’s worth mentioning again how light the phone feels in the hand – it’s surprising compared to the current-generation Pixel XL, which feels like a thick, heavy brick by comparison. But it’s also still aluminum, so the all-metal body (minus the glass panel at the top) feels premium despite its lightness.

Reviewers often joke about ‘hand feel,’ but it really does feel like a great device to hold. This actually has a functional impact, too, since the other new trick here is the Pixel 2 XL’s squeezability. It allows a user to trigger Assistant by pushing in on the sides of the phone while gripping it, and it works really well. You can adjust how sensitive the phone is to this, with multiple levels selectable, and it becomes second nature really quickly. Plus, it’s a great way to trigger Assistant without having to say “Okay Google” out loud in a crowded environment.

The phone also has a terrific camera, as mentioned. I shot a number of pictures with it, and it nailed the focus on each one. Color was also great, as was detail reproduction, despite challenging lighting in the demo area.

The portrait mode in particular worked great – it triggered instantly, with no waiting or recomposition required, and it managed to deliver great background blur and a sense of true depth of field without going overboard, and while preserving tricky details like individual strands of hair on the subject. This is Google’s machine learning magic at work, and it’s very impressive when compared to the competition out there, including from Apple on the new iPhone 8.

Overall, the phone also felt super fast and responsive, and actions that used to have some small amount of delay are near instant here. We’ll have to test other elements like battery life in longer reviews, but based on first impressions, this is a truly impressive device from Google, especially for its second-ever premium in-house smartphone effort.
