Yahoo will take on Apple Intelligence and Google Gemini with its own AI features, in a move that will definitely make it relevant again

Guess what: Yahoo Mail is alive and well in the year 2024, and has begun adding new AI capabilities to your inbox to simplify your emails and improve your overall task management. It’s a big week for AI, considering Apple also announced Apple Intelligence at WWDC 24 – and it looks like Yahoo Mail is diving right into the world of AI with the same focus on productivity and digital assistance.

You may be surprised to hear the words ‘Yahoo Mail’ and ‘artificial intelligence’ strung together in the same sentence – and rightfully so. While there are probably still a lot of people who haven’t switched since the early 2000s, or who keep an account around to filter out spam, I can’t say I’ve ever seen or been emailed by anyone with an @yahoo.com email address.

So, it’s safe to say that Yahoo’s push to include AI tech is likely aimed at trying to get more people to use the email client – and it might very well work. Whether you’re nostalgic for the good ol’ days or just looking to start fresh with a clean email hub that can offer you generative text assistance, personal context, and more, why wouldn’t you try Yahoo Mail? 

I’ve already made my account 

Unfortunately, the beta for Yahoo Mail AI is only available for US-based accounts, but I’m sure that will open up in the near future. In terms of features to look forward to, you’ll have access to AI-generated summaries in a bullet-point list, found under a new tab called ‘Priority Inbox’. Yahoo AI will highlight what it believes to be the most important information to you, based on the content and context of your general emailing habits.

You’ll also have access to a ‘Quick Action’ button so you can add an event to your calendar, check in for flights, and even track packages on their way to you.

However great these features are, there’s one big change that’s cool enough to sway me over to Yahoo Mail: you’ll soon be able to link other email accounts, like Gmail and Microsoft Outlook, to your Yahoo inbox so you can send and receive all your emails right from Yahoo Mail. So, if you want the kind of sophisticated AI assistance Gmail charges for without having to pay, Yahoo Mail might be worth switching to!


Google Maps is about to get a big privacy boost, but fans of Timeline may lose their data

One of Google Maps’ most popular features, Timeline, is about to become a lot more secure. To give you a quick refresher, Timeline acts as a diary of sorts, keeping track of all the past routes and trips you’ve taken. It’s a fun way to relive memories.

Using the tool currently requires people to upload their data to company servers for storage. That will change later this year, though: according to a recent email obtained by Android Police, Google will soon be keeping Timeline data on your smartphone instead.

Migrating Maps data to local device storage should greatly improve privacy, as you won’t be forced to upload sensitive location information to Google’s servers anymore. However, as part of the change, Google has decided to kill off Timeline for Web. Users have until December 1, 2024, to move everything from the online version to their phone’s storage. Failure to take action could result in losing valuable data, like moments from your timeline.

Google says it “will try moving up to 90 days of Timeline data to the first signed-in device” after the cutoff date; anything older than 90 days will be deleted. It’s important to take note of the wording: Google will “try” to save as much as it can, meaning there’s no guarantee it will successfully migrate everything if you miss the deadline. It’s unknown why this is the case, although we did ask.

Configuring Timeline

The company is asking people to review their Google Maps settings and choose which device will house their “saved visits and routes.” The email offers a link to the app’s settings menu, but if you didn’t get the message, you can make the changes by heading to Google Maps on your mobile device – that’s how we did it.

First, update Google Maps if you haven’t done so already, and then go to the Timeline section, where you’ll be greeted with a notification informing you of forthcoming changes. 

Then, tap the Next button, and a new window will appear asking how long you would like to keep your data. You can choose to store the information until you delete it yourself, or set up an auto-delete function that has Google Maps trash your Timeline after three, 18, or 36 months have passed (the sketch below illustrates what those retention windows amount to).

Google Maps' new Timeline menu

(Image credit: Future)
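
As a rough, entirely hypothetical illustration of the auto-delete options, here’s a toy Python sketch of the retention arithmetic – the three windows match the options above, but the function and date handling are invented for the example, not Google’s code.

```python
# Toy sketch of the retention arithmetic behind Timeline's auto-delete
# setting. The 3/18/36-month options mirror the ones Google Maps offers;
# everything else here is illustrative, not Google's implementation.

from datetime import datetime, timedelta

RETENTION_MONTHS = {"3 months": 3, "18 months": 18, "36 months": 36}

def apply_auto_delete(visits: list[datetime], choice: str) -> list[datetime]:
    """Keep only visits newer than the selected retention window."""
    if choice not in RETENTION_MONTHS:  # the "keep until you delete it" option
        return visits
    cutoff = datetime.now() - timedelta(days=30 * RETENTION_MONTHS[choice])
    return [visit for visit in visits if visit >= cutoff]

# A two-year-old trip survives the 36-month setting but not the 18-month one.
trips = [datetime.now() - timedelta(days=730), datetime.now()]
print(len(apply_auto_delete(trips, "36 months")))  # -> 2
print(len(apply_auto_delete(trips, "18 months")))  # -> 1
```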

Additionally, you can choose to back your data up to Google’s servers. Android Police explains that the revamped system curates Maps Timelines for each device “independently,” so if you buy a new smartphone and want to restore your data, the backup tool is the best way to do it.

What’s interesting is that the Timeline transfer is a one-way street. Google states on a Maps Help page that after the data is moved to your smartphone, you cannot revert to the previous method. We experienced this firsthand: after localizing our storage, we couldn’t find any way to upload data to company servers outside of the backup function.

Don’t worry if you haven’t received the email or the Google Maps patch yet; Android Police says the company is slowly rolling out the changes. Be sure to keep an eye out for either one.

While we have you, check out TechRadar’s list of the best Android phones for 2024.


Google explains why AI Overviews couldn’t understand a joke and told users to eat one rock a day – and promises it’ll get better

If you’ve been keeping up with the latest developments in the area of generative AI, you may have seen that Google has stepped up the rollout of its ‘AI Overviews’ section in Google Search to all of the US.

At Google I/O 2024, held on May 14, Google confidently presented AI Overviews as the next big thing in Search, one it expected to wow users – but when the feature finally began rolling out the following week, it received a less than enthusiastic response. This was mainly due to AI Overviews returning peculiar and outright wrong information, and now Google has responded by explaining what happened and why AI Overviews performed the way it did (according to Google, at least).

The feature was intended to bring richer, better-verbalized answers to user queries, synthesizing a pool of relevant information and distilling it into a few convenient paragraphs. This summary would then be followed by the familiar list of blue links with brief descriptions of the websites.

Unfortunately for Google, screenshots of AI Overviews providing strange, nonsensical, and downright wrong information started circulating on social media shortly after the rollout. Google has since pulled the feature and published a post on its ‘Keyword’ blog explaining why AI Overviews was doing this – while being quick to point out that many of these screenshots were faked.

What AI Overviews were intended to be

Keynote speech at Google I/O 2024

(Image credit: Future)

In the blog post, Google first explains that AI Overviews was designed to collect and present information that you would otherwise have to dig for via multiple searches, and to prominently include links crediting where the information comes from, so you can easily follow up on the summary.

According to Google, this isn’t just its large language models (LLMs) assembling convincing-sounding responses based on existing training data. AI Overviews is powered by its own custom language model that integrates Google’s core web ranking systems, which are used to carry out searches and feed relevant, high-quality information into the summary. Accuracy is one of the cornerstones Google prides itself on when it comes to search, the company notes, saying that it built AI Overviews to show only information sourced from the web results it deems the best.
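
To make that description more concrete, here’s a minimal, hypothetical Python sketch of the grounding approach Google outlines – a summary assembled only from highly ranked results, with every source cited, and nothing shown when quality material is scarce. Every name and threshold here is invented for illustration; this is not Google’s implementation.

```python
# Illustrative sketch of a retrieval-grounded overview, in the spirit of
# what Google describes. All functions and thresholds are hypothetical
# stand-ins, not Google's ranking systems or language model.

from dataclasses import dataclass

@dataclass
class WebResult:
    url: str
    snippet: str
    quality_score: float  # stand-in for a ranking signal, 0.0-1.0

def rank_results(query: str, index: list[WebResult]) -> list[WebResult]:
    """Stand-in for core web ranking: keyword match, then sort by quality."""
    words = query.lower().split()
    relevant = [r for r in index if any(w in r.snippet.lower() for w in words)]
    return sorted(relevant, key=lambda r: r.quality_score, reverse=True)

def ai_overview(query: str, index: list[WebResult], min_sources: int = 2) -> str:
    """Summarize only from top-ranked results, and always cite them."""
    top = [r for r in rank_results(query, index) if r.quality_score > 0.7]
    if len(top) < min_sources:
        # A "data void": too little quality material, so show no overview
        # rather than synthesize one from weak or satirical sources.
        return "No AI Overview for this query."
    summary = " ".join(r.snippet for r in top[:3])  # a real LLM would rewrite this
    sources = ", ".join(r.url for r in top[:3])
    return f"{summary}\nSources: {sources}"
```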

This means that AI Overviews is generally supposed to hallucinate less than other LLM products, and if things do go wrong, it’s probably for a reason Google also faces in regular search – the company cites possible issues such as “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”

What actually happened during the rollout

Windows 10 dual screen

(Image credit: Shutterstock / Dotstock)

Google goes on to state that AI Overviews was optimized for accuracy and tested extensively before its wider rollout – but despite these seemingly robust testing efforts, Google admits that’s not the same as having millions of people trying out the feature with a flood of novel searches. It also points out that some people were deliberately trying to provoke its search engine into producing nonsensical AI Overviews by running ridiculous searches.

I find this part of Google’s explanation a bit odd, seeing as I’d imagine that when building a feature like AI Overviews, the company would appreciate that folks are likely to try to break it or send it off the rails somehow, and that it should therefore be designed to take silly or nonsensical searches in its stride.

At any rate, Google then goes on to call out fake screenshots of some of the nonsensical and humorous AI Overviews that made their way around the web, which is fair, I think. It reminds us that we shouldn’t believe everything we see online, of course – although the faked screenshots looked pretty convincing if you didn’t scrutinize them too closely (all of which underscores the need to double-check AI-generated content anyway).

Google does admit, though, that sometimes AI Overviews did produce some odd, inaccurate, or unhelpful responses. It elaborates by explaining that there are multiple reasons why these happened, and that this whole episode has highlighted specific areas where AI Overviews could be improved.

The tech company further observes that these questionable AI Overviews tended to appear for queries that are rarely searched. A Threads user, @crumbler, posted an AI Overviews screenshot that went viral after they asked Google: “how many rocks should i eat?” This returned an AI Overview that recommended eating at least one small rock per day. Google’s explanation is that before this screenshot circulated online, this question had rarely been asked in search (which is certainly believable enough).

A screenshot of an AI Overview recommending that humans should eat one small rock a day

(Image credit: Google/@crumbler on Threads)

Google goes on to explain that there isn’t a lot of quality source material to answer that question seriously, either, calling such instances a “data void” or an “information gap.” Additionally, in the case of the query above, some of the only content available was satirical in nature, and it was linked in earnest as one of the only websites that addressed the query.

Other nonsensical and silly AI Overviews pulled details from sarcastic or humorous content sources, and the likes of troll posts from discussion forums.

Google's next steps and the future of AI Overviews

When explaining what it’s doing to fix and improve AI Overviews, or any part of its Search results, Google notes that it doesn’t go through Search results pages one by one. Instead, the company tries to implement updates that affect whole sets of queries, including possible future queries. Google claims that it’s been able to identify patterns when analyzing the instances where AI Overviews got things wrong, and that it’s put in a whole set of new measures to continue to improve the feature.

You can check out the full list in Google’s post, but better detection capabilities for nonsensical queries trying to provoke a weird AI Overview are being implemented, and the search giant is looking to limit the inclusion of satirical or humorous content.

Along with the new measures to improve AI Overviews, Google states that it’s been monitoring user feedback and external reports, and that it’s taken action on a small number of summaries that violate Google’s content policies. This happens pretty rarely – in less than one in seven million unique queries, according to Google – and it’s being addressed.

The final reason Google gives for why AI Overviews performed this way is just the sheer scale of the billions of queries that are performed in Search every day. I can’t say I fault Google for that, and I would hope it ramps up the testing it does on AI Overviews even as the feature continues to be developed.

As for AI Overviews not understanding sarcasm: this sounds like a cop-out at first, but sarcasm and humor in general are nuances of human communication that I can imagine are hard to account for. Comedy is a whole art form in itself, and this is going to be a very thorny and difficult area to navigate. So, I can understand that this is a major undertaking, but if Google wants to maintain a reputation for accuracy while pushing out this new feature, it’s something that’ll need to be dealt with.

We’ll just have to see how Google’s AI Overviews perform when they are reintroduced – and you can bet there’ll be lots of people watching keenly (and firing up yet more ridiculous searches in an effort to get that viral screenshot).


Is this the return of Google Glass? Magic Leap and Google team-up could be bad news for Meta and Apple

We already knew Google was making at least a tentative return to the world of extended reality (XR, a catchall for VR, AR, and MR) with the announcement that it’s helping to make the Samsung XR headset. But it could also be gearing up for another try at one of its most high-profile flops: Google Glass.

This speculation follows an announcement from Magic Leap that it is partnering with Google to “bring Magic Leap Augmented Reality (AR) expertise and optics leadership together with Google’s technology platforms and collaborate on Extended Reality (XR) solutions and experiences.”

This is hardly a Google Glass confirmation, but it follows a few rumors that Google wants to have another crack at AR specs – including what might have been an accidental leak from its own Google I/O presentation. It also comes after Meta teased its AR glasses project, and with Apple testing the waters with the Vision Pro it would seem the entire industry is chasing an AR trend.

Even though this partnership is seemingly set in stone, we shouldn’t get our hopes up that Google Glass 2 is coming soon. LG and Meta announced plans to partner on XR technology, only for rumors to emerge weeks later that they had already parted ways – rumors LG refused to dismiss.

Much as Meta and LG reportedly butted heads in several ways, Google and Magic Leap could also disagree on how best to create an AR device, which could lead to their relationship breaking down.

What could a ‘Google Glass 2’ look like? 

Assuming this partnership does bear fruit, what do we expect to see from Google Glass 2 – or whatever Google wants to call it?

Well, design-wise, we imagine it’ll look a lot more like a typical pair of specs. While Google Glass’ space-age design might have charmed some, it was anything but subtle. The obvious camera freaked people out, and it painted a target on wearers, who were clearly sporting expensive tech that would-be thieves could rip from them. And when the battery died, the glasses were useless.

Modern smart and AR glasses have some signs that they’re more than meets the eye – like thicker arms, and a light that makes it clear when the camera is in use – but in general, you wouldn’t know the specs were anything but regular shades unless you’re well-versed in tech. With prescription or shaded lenses, they’re also something you can wear all the time, even when they run out of charge.

Orange Ray-Ban Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink

The Ray-Ban Meta Smart Glasses (Image credit: Meta)

As for its features, AR aspects would obviously be included. To us, this means a HUD overlaying things like directions that point you toward your destination, and apps that let you interact with virtual objects as if they’re in the real world.

But the other big feature will most likely be AI. We’ve already seen how the Ray-Ban Meta smart glasses can leverage their camera and AI to help you identify objects, find recipes, translate signs, and generally answer questions you might have, simply by talking to your specs. Google also has a generative AI of its own – Gemini – and while its recent attempts at AI search haven’t been the best, we’d be shocked if this tech wasn’t incorporated into a Google Glass 2.

We’ll have to wait and see what Google’s next AR device has in store for us if and when it launches. You can be sure we’ll be ready to give you the lowdown as soon as we have it.


Google Pay gets 3 handy new features that could save you time and money

Google Pay is receiving three new features that collectively aim to make online shopping easier and more transparent. At first, it may seem strange that the tech giant is updating Google Pay when the app is scheduled to go offline in the United States on June 4.

However, it turns out the patch is rolling out to the Google Pay payment system rather than to the app itself; the Google Pay app is still set to be discontinued about two weeks from the time of this writing. You’ll see the following changes appear on desktop and mobile.

In its announcement post, the company states that “American Express and Capital One cardholders” will now see the benefits they can receive when checking out on Chrome for desktop via the “autofill drop-down” menu. Google gives the example of someone buying a round-trip flight from Los Angeles to San Francisco: your American Express Gold Card may offer three times the travel points, while a Capital One Quicksilver Card will give you “1.5 percent cash back on [your] purchase.” There are plans to add “more cards in the future” as well.

Google Pay Card Benefits in Autofill

(Image credit: Google)

Next, the buy now, pay later (BNPL) payment option is expanding to more “merchant sites and Android apps across the US.” Google appears to be working with two BNPL services, Affirm and Zip, to make the expansion possible. Exactly which websites and apps will be covered is unknown, and Google didn’t provide any additional details in the post, although we did ask.

Autofill update

The first two features are exclusive to people in the United States; however, the Autofill update is seeing an international release. Moving forward, shoppers on either Chrome or Android can use biometrics or their screen lock PIN to verify card details. With this, you'll no longer have to enter your security code manually.

Google Pay - Autofill update

(Image credit: Google)

Autofill will normally work without a hitch, but Google states that if it detects a suspicious transaction, it’ll prevent the payment from going through. Users can also “set up device unlock” to have Google Pay ask them to unlock their smartphone before revealing “full card details” – ensuring your card can’t be used by other people who might have access to your device.

The Google Pay update is currently rolling out, so be sure to keep an eye out for it. While we have you, check out TechRadar’s list of the best Android phones for 2024.


Google is giving Android users hands-free navigation and a way to talk with emojis

Google is rolling out several new accessibility-focused features to platforms like Android and ChromeOS, timed to Global Accessibility Awareness Day on May 16. Leading the long list is the arrival of Project Gameface on Android.

If you’re unfamiliar, Gameface is software that lets people use “head movement and facial gestures” to navigate a computer UI. Until now, the software has been used to help people with disabilities play video games, among other things; with its arrival on Android, those same groups now have a new way to control their smartphone.

The company states that Gameface supports 52 different facial gestures that can be mapped to specific functions. For example, looking to the left can be used to select items on the screen, while raising your eyebrows can send you back to the home screen. The individual controls depend on how people set up Gameface.

Project Gameface on Android

(Image credit: Google)

It’ll also be possible to adjust the sensitivity of a function to establish “how prominent your gesture has to be in order to” register an input. A slightly open mouth can be attached to one action, while a wide-open mouth can trigger another. A live camera feed of yourself sits in the bottom corner; Google states its team added the view so users can make sure they’re making accurate facial gestures.
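
To picture how gesture bindings and sensitivity settings might fit together, here’s a small, hypothetical Python sketch. The gesture names, scores, and actions are invented, and the real Gameface works from a face-tracking model’s live output rather than a hand-fed dictionary.

```python
# Hypothetical sketch of gesture-to-action bindings with per-gesture
# sensitivity thresholds, in the spirit of Gameface's setup screen.
# Gesture names, scores, and actions are invented for illustration.

# Each gesture maps to (action, sensitivity threshold): how prominent
# the gesture must be, from 0.0 to 1.0, before it registers as input.
GESTURE_BINDINGS = {
    "look_left":         ("select_item", 0.6),
    "raise_eyebrows":    ("go_home",     0.5),
    "open_mouth_slight": ("scroll_down", 0.3),
    "open_mouth_wide":   ("open_app",    0.8),
}

def dispatch(gesture_scores: dict[str, float]) -> list[str]:
    """Return the actions triggered by one frame's gesture scores."""
    return [action
            for gesture, (action, threshold) in GESTURE_BINDINGS.items()
            if gesture_scores.get(gesture, 0.0) >= threshold]

# One frame from a face tracker: raised eyebrows and a wide-open mouth
# both clear their thresholds, so two actions fire.
print(dispatch({"raise_eyebrows": 0.7, "open_mouth_wide": 0.9}))
# -> ['go_home', 'open_app']
```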

Project Gameface is open source and available for download on GitHub, complete with instructions on how to set it up. Do note that it requires the Android Studio developer tool to configure, so you may need someone to help you out.

Notable features

The rest of the features in the update may not be as individually impactful as Gameface, but together they become greater than the sum of their parts. Google’s Lookout app is receiving a new Find mode to help blind people locate real-world objects across seven different categories – it can tell you where the tables are in a restaurant, or where the door to the bathroom is. Users hold their smartphone in front of them, and through the rear camera, Lookout’s AI will tell them the “direction and distance” of an item or exit. Keep in mind that Find mode is in beta, so it may be a little buggy.

Google Maps is seeing a similar upgrade, and it’ll soon provide more details about the area around you. The app will tell you the names of nearby places and how far you need to go to reach your destination.

Lookout app's new Find mode

(Image credit: Google)

Next, Android’s Look to Speak is adding a text-free mode, which lets you drive the app’s speech function by selecting emojis, symbols, and images. For example, a hand-waving emoji can be used to say “Hello.”

Chromebooks are set to receive their own accessibility patch, too. Google is giving owners a way to increase the size of the mouse cursor, and the screen magnifier tool will follow along with the words as you read them. 

Those are all the major updates coming to Google’s platforms; however, it’s just the tip of the iceberg. Other, smaller upgrades include Google Maps on desktop pointing out wheelchair-accessible entrances. Everything mentioned here is already live except for the Chromebook changes, which will roll out in the coming weeks.

Google isn’t the only tech giant celebrating Global Accessibility Awareness Day. Apple recently revealed multiple accessibility features, including Eye Tracking, Vocal Shortcuts, and Vehicle Motion Cues for its hardware; however, they aren’t arriving until later this year. It’s unknown exactly when they’ll come out, but they’ll most likely be made available as part of iOS 18, visionOS 2, “and the next version of macOS.”

While we have you, check out TechRadar’s list of the best Android phones for 2024.


Don’t worry, Google Gemini’s new bank-scam detection for phone calls isn’t as creepy as it sounds

Google I/O brought forth a huge array of new ideas and features from Google, including some useful life hacks for the lazy, and AI was present almost everywhere you looked. One of the more intriguing new features can detect scam calls as they happen and warn you not to hand over your bank details – and it’s not as creepy as it might at first sound.

The feature works like this: if you get a call from a scammer pretending to be a representative of your bank, Google Gemini uses its AI smarts to work out that the impersonator is not who they claim to be. The AI then sends you an instant alert warning you not to hand over any bank details or move any money, and suggests that you hang up.

Involving AI in the process raises a few pertinent questions, such as whether your bank details ever get sent to Google’s servers for processing as they're detected. That’s something you never want to happen, because you don’t know who can access your bank information or what they might do with it.

Fortunately, Google has confirmed (via MSPoweruser) that the whole process takes place on your device. That means there’s no way for Google (or anyone else) to lift your banking information off a server and use it for their own ends. Instead, it stays safely sequestered away from prying eyes.
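
For a feel of what ‘on-device’ means in practice, here’s a deliberately naive Python sketch in which a call transcript is screened locally and nothing is transmitted anywhere. The patterns and warning text are invented, and the real feature relies on an on-device Gemini model rather than keyword matching.

```python
# Toy illustration of on-device scam screening: the transcript is
# checked locally and never leaves this function. Google's actual
# feature uses an on-device Gemini model, not keyword rules.

SCAM_PATTERNS = [
    "read me the code",  # one-time passcodes should never be shared
    "move your money",   # banks don't ask you to move funds to "safe" accounts
    "gift card",         # a classic untraceable-payment request
]

def screen_call_locally(transcript: str) -> str | None:
    """Return a warning if the transcript looks like a scam, else None."""
    text = transcript.lower()
    if any(pattern in text for pattern in SCAM_PATTERNS):
        return ("Possible scam: your bank will never ask for codes or "
                "transfers over the phone. Consider hanging up.")
    return None

warning = screen_call_locally(
    "This is your bank. Your account is at risk. Read me the code we sent you."
)
if warning:
    print(warning)  # the alert is generated and shown entirely on-device
```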

A data privacy minefield

A silhouette of a woman holding a smartphone with the Google Gemini logo in the background

(Image credit: Shutterstock)

Google’s approach cuts to the core of many concerns about the growing role AI is playing in our lives. Because most AI services need huge server banks to process and understand all the data that comes their way, you can end up sending a lot of sensitive data to an opaque destination, with no way of really knowing what happens to it.

This has had real-world consequences. In March 2024, AI service Cutout.Pro was hacked and lost the personal data of 20 million users, and it’s far from the only example. Many firms are worried that employees may inadvertently upload private company data when using AI tools, which then gets fed into the AI as training data – potentially allowing it to fall into the hands of users outside the business. In fact, exactly that has already happened to Samsung and numerous other companies.

This all goes to show the importance of keeping private data away from AI servers as much as possible – and, given the potential for AI to become a data privacy minefield, Google’s decision to keep your bank details on-device is a good one.


Google Search is getting a massive upgrade – including letting you search with video

Google I/O 2024’s entire two-hour keynote was devoted to Gemini. Not a peep was uttered about the recently launched Pixel 8a or what Android 15 is bringing upon release. The only time a smartphone or Android was mentioned was to explain how they’re being improved by Gemini.

The tech giant is clearly going all-in on the AI, so much so that the stream concludes by boldly displaying the words “Welcome to the Gemini era”. 

Among all the updates presented at the event, Google Search is slated to gain some of the most impressive changes. You could even argue that in 2024 the search engine will see one of the most impactful upgrades it has ever received in its 25 years as a major tech platform. Gemini gives Google Search a huge performance boost, and we can’t help but feel excited about it.

Below is a quick rundown of all the new features Google Search will receive this year.

1. AI Overviews

Google I/O 2024

(Image credit: Google)

The biggest upgrade coming to the search engine is AI Overviews, which appears to be the launch version of SGE (Search Generative Experience). It provides detailed, AI-generated answers to queries, complete with contextually relevant text as well as links to sources and suggestions for follow-up questions.

Starting today, AI Overviews is leaving Google Labs and rolling out to everyone in the United States as a fully fledged feature. For anyone who used SGE, it appears to be identical.

Response layouts are the same, and they’ll have product links too. Google has presumably worked out all the kinks so it performs optimally – although, as with any generative AI, there’s still a chance it could hallucinate.

There are plans to expand AI Overviews to more countries with the goal of reaching over a billion people by the end of 2024. Google noted the expansion is happening “soon,” but an exact date was not given.

2. Video Search

Google I/O 2024

(Image credit: Google)

AI Overviews is bringing more to Google Search than just detailed results. One of the new features allows users to upload videos to the engine alongside a text query. At I/O 2024, the presenter gave the example of a record player with a faulty part.

You can upload a clip and ask the AI what’s wrong with your player, and it’ll provide a detailed answer naming the exact part that needs replacing, plus instructions on how to fix the problem. You might need a new tonearm or a cueing lever, but you won’t need to type a question into Google to get an answer – instead, you can speak your question directly into the video and send it off.

Searching With Video will launch soon for “Search Labs users in English in the US,” with plans for further expansion into additional regions over time.

3. Smarter AI

Google I/O 2024

(Image credit: Google)

Next, Google is introducing several performance boosts; however, none of them are available just yet. They’ll be rolling out soon to the Search Labs program, exclusively for people in the United States and in English.

First, you'll be able to click one of two buttons at the top to simplify an AI Overview response or ask for more details. You can also choose to return to the original answer at any time.

Second, AI Overviews will be able to understand complex questions better than before, so you won’t have to ask the search engine multiple short questions. Instead, you can enter one long query – for example, asking it to find a specific yoga studio with introductory packages nearby.

Lastly, Google Search can create “plans” for you. This can be either a three-day meal plan that’s easy to prepare or a vacation itinerary for your next trip. It’ll provide links to the recipes, plus the option to replace dishes you don’t like. Further down the line, the planning tool will encompass other topics like movies, music, and hotels.

All about Gemini

That’s pretty much all of the changes coming to Google Search in a nutshell. If you’re interested in trying these out and you live in the United States, head over to the Search Labs website, sign up for the program, and give the experimental AI features a go. You’ll find them near the top of the page.

Google I/O 2024 dropped a ton of information on the tech giant’s upcoming AI endeavors. Project Astra, in particular, looked very interesting, as it can identify objects, code on a monitor, and even pinpoint the city you’re in just by looking outside a window. 

Ask Photos was pretty cool, too, if a little freaky. It’s an upcoming Google Photos tool capable of finding specific images in your account much faster than before, and it can “handle more in-depth queries” with startling accuracy.

If you want a full breakdown, check out TechRadar's list of the seven biggest AI announcements from Google I/O 2024.


Google Workspace is getting a talkative tool to help you collaborate better – meet your new colleague, AI Teammate

If your workplace uses the Google Workspace productivity suite, then you might soon get a new teammate – an AI Teammate, that is.

In its mission to improve our real-life collaboration, Google has created a tool that pools shared documents, conversations, comments, chats, emails, and more into a single virtual generative AI chatbot: the AI Teammate.

Powered by Google's own Gemini generative AI model, AI Teammate is designed to help you concentrate more on your role within your organization and leave the tracking and tackling of collective assignments and tasks to the AI tool.

This virtual colleague will have its own identity, its own Workspace account, and a specifically defined role and objective to fulfil.

When AI Teammate is set up, it can be given a custom name, as well as other customizations, including its job role, a description of how it’s expected to help your team, and the specific tasks it’s supposed to carry out.

In a demonstration of an example AI Teammate at I/O 2024, Google showed a virtual teammate named 'Chip' who had access to a group chat of those involved in presenting the I/O 2024 demo. The presenter, Tony Vincent, explained that Chip was privy to a multitude of chat rooms that had been set up as part of preparing for the big event. 

Vincent then asked Chip whether the I/O storyboards had been approved – the type of question you might otherwise ask a colleague – and Chip was able to answer because it could analyze all of the conversations it had been keyed into.

As AI Teammate is added to more threads, files, chats, emails, and other shared items, it builds a collective memory of the work shared in your organization. 

Google Workspace

(Image credit: Google)

In a second example, Vincent showed another chatroom for an upcoming product release and asked the room whether the team was on track for the product’s launch. In response, AI Teammate searched through everything it had access to – Drive, chat messages, Gmail – and synthesized all of the relevant information it found to form its response.

When it was ready (which took about a second or slightly less), AI Teammate delivered a digestible summary of its findings: it flagged a potential issue to make the team aware, then gave a timeline showing the stages of the product’s development.
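
To picture the mechanics of that ‘collective memory’, here’s a hypothetical Python sketch: shared items from different sources are pooled into one store and queried together. The keyword matching is a naive stand-in for the retrieval and synthesis Gemini actually performs, and ‘Chip’ and the sample messages are invented.

```python
# Hypothetical sketch of an AI Teammate-style "collective memory":
# shared items from many sources are pooled, then queried together.
# The keyword overlap below is a naive stand-in for the retrieval
# and synthesis that Gemini actually performs.

class TeammateMemory:
    def __init__(self, name: str):
        self.name = name                        # e.g. "Chip"
        self.items: list[tuple[str, str]] = []  # (source, text) pairs

    def ingest(self, source: str, text: str) -> None:
        """Add a shared chat message, email, or Drive doc to memory."""
        self.items.append((source, text))

    def answer(self, question: str) -> str:
        """Gather every stored item sharing a keyword with the question."""
        keywords = {w.strip("?.,").lower() for w in question.split()}
        hits = [(src, txt) for src, txt in self.items
                if keywords & {w.strip("?.,").lower() for w in txt.split()}]
        if not hits:
            return f"{self.name}: I couldn't find anything on that."
        found = "; ".join(f"[{src}] {txt}" for src, txt in hits)
        return f"{self.name}: here's what I found - {found}"

chip = TeammateMemory("Chip")
chip.ingest("Chat", "Storyboards approved by the design team on Tuesday")
chip.ingest("Gmail", "Reminder: launch timeline review this Friday")
print(chip.answer("Were the storyboards approved?"))
```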

As the demo took place in a group space, Vincent noted that anyone can follow along and jump in at any point – for example, to ask a question about the summary, or to ask AI Teammate to transfer its findings into a Doc file, which it duly did.

AI Teammate is only as useful as you customize it to be, and Google promises that it can make your collaborative work seamless, integrated as it is into the host of existing Google products many of us are already used to.


Google reveals new video-generation AI tool, Veo, which it claims is the ‘most capable’ yet – and even Donald Glover loves it

Google has unveiled its latest video-generation AI tool, named Veo, at its Google I/O live event. Veo is described as offering “improved consistency, quality, and output resolution” compared to previous models.

Generating video content with AI is nothing new; tools like Synthesia, Colossyan, and Lumiere have been around for a little while now, riding the wave of generative AI's current popularity. Veo is only the latest offering, but it promises to deliver a more advanced video-generation experience than ever before.

Google I/O 2024

Donald Glover invited Google to his creative studio at Gilga Farm, California, to make a short film together. (Image credit: Google)

To showcase Veo, Google recruited a gang of software engineers and film creatives, led by actor, musician, writer, and director Donald Glover (of Community and Atlanta fame) to produce a short film together. The film wasn't actually shown at I/O, but Google promises that it's “coming soon”.

As someone who is simultaneously dubious of generative AI in the arts and also a big fan of Glover's work (Awaken, My Love! is in my personal top five albums of all time), I'm cautiously excited to see it.

Eye spy

Glover praises Veo’s capabilities on the basis of speed: this isn’t about erasing human ideas, but rather a tool that creatives can use to “make mistakes faster”, as Glover puts it.

The flexibility of Veo’s prompt reading is a key point here. It’s capable of understanding prompts in text, image, or video format, paying attention to important details like cinematic style, camera positioning (for example, a bird’s-eye-view shot or a fast-tracking shot), time elapsed on camera, and lighting type. It also has an improved capability to accurately and consistently render objects and how they interact with their surroundings.

Google DeepMind CEO Demis Hassabis demonstrated this with a clip of a car speeding through a dystopian cyberpunk city.

Google I/O 2024

The more detail you provide in your prompt material, the better the output becomes. (Image credit: Google)

It can also be used for things like storyboarding and editing, potentially augmenting the work of existing filmmakers. While working with Glover, Google DeepMind research scientist Kory Mathewson explains how Veo allows creatives to “visualize things on a timescale that's ten or a hundred times faster than before”, accelerating the creative process by using generative AI for planning purposes.

Veo will be debuting as part of a new experimental tool called VideoFX, which will be available soon for beta testers in Google Labs.
