Best Buy is giving its customer assistance an AI boost – but with a human touch

Best Buy is taking the plunge and incorporating AI-powered shopping tools for its customers, announcing today on its website that it’s partnered with Google Cloud and the consulting firm Accenture to bring users AI-powered customer assistance. The retailer claims that this move will enable it to give customers “even more personalized, best-in-class tech support experiences.”

Customers can expect a self-service support option when they visit and shop on BestBuy.com, when using Best Buy’s app, or when they call Best Buy’s customer support line (presumably through a conventional automated selection system). When customers make use of one of these, they’ll be able to interact with Best Buy’s new AI-powered virtual assistant, which it expects to debut in late summer 2024. 

These new customer support tools are part of Best Buy’s efforts to offer customers the most tech-forward ways of getting the assistance they need. The retailer adds that it’s making use of Google Cloud’s AI capabilities, including Vertex AI (a Google Cloud machine learning platform) and Google’s new Gemini generative AI models.

Inside a Best Buy store, an everyday scene at the customer service section with people milling around

(Image credit: Shutterstock/Icatnews)

What the generative AI will help Best Buy do

The retailer explains that the virtual assistant will enable customers to troubleshoot product issues easily, manage their order deliveries and scheduling (including the ability to make changes), manage subscriptions they have from Best Buy such as software and Geek Squad, and navigate their My Best Buy memberships (Best Buy’s customer loyalty program). 

Many people, myself included, find it very frustrating when trying to interact with automated customer service tools, and thankfully it looks like Best Buy is at least somewhat aware of this. It writes: “We also know that sometimes customers prefer speaking with an actual person to get the support they need.”

It follows this up by explaining that Best Buy customer care agents will be equipped with a suite of tools aided by generative AI to assist agents when they’re dealing with customers over the phone. Best Buy details that these tools are designed to help agents assess real-time conversations with customers, and suggest recommendations that might be useful in the moment. The tools will also summarize conversations, collecting and using information gathered during the call to hopefully reduce the chances of individual customer service issues being repeated, as well as detecting the sentiment expressed by the customer.

A close up on a woman working at a computer, wearing a headset and smiling

(Image credit: Shutterstock/OPOLJA)

The wider implications of this change

There are legions of AI-powered assistance tools being developed for employees everywhere at this point, with Best Buy also discussing an assistant that makes it easier for employees to find product guides and company resources. The retailer states that its aim in developing tools like these is to be able to help customers more efficiently.

We’ve seen similar practices implemented by other, smaller retailers, but Best Buy is one of the first companies of this scale to adopt an AI-first approach. While many companies already use automated customer service tools in some form, Best Buy is joining a limited cohort that makes such explicit use of AI-assisted customer service technologies.

I’ve had both positive and negative experiences with automated customer service, and when you’re particularly stressed out, the addition of machine learning isn’t much of a consolation. I am glad that employees will also get a boost behind the scenes, with additional tools to help them help customers, and that it sounds like customers will still be able to speak to an actual person – I just hope it won’t be too difficult to get through to a human, and that Best Buy will be open to feedback about its new strategy.

My gut reaction is that this is a bold move that could be met unenthusiastically by customers, but I appreciate that Best Buy is being forthright about it. If it works, we could see it spread to more retailers big and small, and generative-AI-aided assistance might be well on its way to becoming the industry norm. If not, hopefully, retailers will be wise enough to listen to customer sentiment and understand that there are still some jobs that you simply need a human for.


TechRadar – All the latest technology news


Samsung Galaxy Ring could help cook up AI-powered meal plans to boost your diet

As we get closer to the full launch of the Samsung Galaxy Ring, we're slowly learning more about its many talents – and some fresh rumors suggest these could include planning meals to improve your diet.

According to the Korean site Chosun Biz (via GSMArena), Samsung plans to integrate the Galaxy Ring with its new Samsung Food app, launched in August 2023.

Samsung calls this app an “AI-powered food and recipe platform”, as it can whip up tailored meal plans and even give you step-by-step guides to making specific dishes. The exact integration with the Galaxy Ring isn't clear, but according to the Korean site, the wearable will help make dietary suggestions based on your calorie consumption and body mass index (BMI).

The ultimate aim is apparently to integrate this system with smart appliances (made by Samsung, of course) like refrigerators and ovens. While they aren't yet widely available, appliances like the Samsung Bespoke 4-Door Flex Refrigerator and Bespoke AI Oven include cameras that can help design or cook recipes based on your dietary needs.

It sounds like the Galaxy Ring, and presumably smartwatches like the incoming Galaxy Watch 7 series, are the missing links in a system that can monitor your health and feed that info into the Samsung Food app, which you can download now for Android and iOS.

The Ring's role in this process will presumably be more limited than that of smartwatches, whose screens can help you log meals and more. But the rumors hint at how big Samsung's ambitions are for its long-awaited ring, which will be a strong new challenger in our best smart rings guide when it lands (most likely in July).

Hungry for data

A phone on a grey background showing the Samsung Food app

(Image credit: Samsung)

During our early hands-on with the Galaxy Ring, it was clear that Samsung is mostly focusing on its sleep-tracking potential. It goes beyond Samsung's smartwatches here, offering unique insights including night movement, resting heart rate during sleep, and sleep latency (the time it takes to fall asleep).

But Samsung has also talked up the Galaxy Ring's broader health potential more recently. It'll apparently be able to generate a My Vitality Score in Samsung's Health app (by crunching together data like your activity and heart rate) and eventually integrate with appliances like smart fridges.

This means it's no surprise to hear that the Galaxy Ring could also play nice with the Samsung Food app. That said, the ring's hardware limitations mean this will likely be a minor feature initially, as its tracking is more focused on sleep and exercise. 

We're actually more excited about the Ring's potential to control our smart home than integrate with appliances like smart ovens, but more features are never a bad thing – as long as you're happy to give up significant amounts of health data to Samsung.


Windows 10 gets security boost and bug fixes in Microsoft’s first big update of 2024

Microsoft might be pushing forward with integrating AI into as many aspects of Windows 11 as possible, but it’s not totally forgotten about Windows 10 users. The older version of Windows continues to be very popular among Windows’ user base, and fortunately for them, Microsoft has just released update KB5034122 for Windows 10 that brings an array of bug fixes and serious security upgrades. 

Two of the bugs that the update addresses relate to smart card usage and an issue with scroll bars. These are maybe not the most thrilling fixes, but that’s pretty in line with Microsoft’s messaging about Windows 10.

According to the tech titan, it’s more or less closed up shop when it comes to developing significant new features for Windows 10, and users shouldn’t expect to see any major changes in the future. Update KB5034122 serves as evidence of this, consisting mostly of maintenance and fixes. But let’s not forget that Microsoft’s shiny new all-in-one AI assistant, Windows Copilot, was made available to Windows 10 users last year. We’ll have to see whether Copilot gets upgrades and improvements in Windows 10, considering that its current functionality is fairly limited.

Microsoft Teams copilot

(Image credit: Microsoft Teams)

What's new in update KB5034122

This update tackles security issues and includes a quality upgrade to Windows 10’s servicing stack, the Windows component that enables users to install Windows updates. Microsoft also gives more details about the bug fixes that are included in this update.

You can find a full rundown of what this update addresses on Microsoft’s Support blog. It also makes note of some known issues that still exist in this version of Windows 10 and gives suggested workarounds with instructions. Each workaround for each presently-existing problem is followed by this statement to reassure Windows 10 users:

We are working on a resolution and will provide an update in an upcoming release.

KB5034122 should be offered to Windows 10 devices automatically because it’s a security update, but if for whatever reason your Windows 10 device hasn’t downloaded it already, you can download it manually from the Microsoft Update Catalog. You should definitely do this, as it’s important to have the most up-to-date security fixes no matter what Windows version you use.

Good for Microsoft for keeping an eye on Windows 10 and recognizing that it remains a fan favorite. However, it’s clearly determined to get as much use as possible out of its investment in and collaboration with OpenAI, utilizing GPT technology however it can.

Recently, Windows watchers have spotted that Notepad is getting a ChatGPT-powered writing assistant and text editing AI tool, with some users expressing that they’d rather Notepad stayed the simple, straightforward app it has come to be known as. Perhaps as Microsoft goes down the path of ramping up AI integration, Windows 10 will be a refuge for those who want their operating system and apps to be a little less intelligent.


Microsoft just gave Windows Copilot a ChatGPT-4 boost and the ability to explain screenshots

Microsoft came out hot with its Microsoft 365 Copilot and Copilot for Windows announcements last year, presenting Copilot as a general virtual assistant to help with your whole digital life. So far, we’re yet to see Copilot reach its potential, but it looks like we’re one step closer: Microsoft is reportedly gearing up to add a powerful new feature that will allow users to take a screenshot, submit it to Copilot, and ask Copilot to explain what’s in it.

As far as we know, an “add a screenshot” button is rolling out to the general public – meaning you may already be able to try it, and if not, you should be able to very soon. The button should appear in the Copilot panel, prompting you to select a part of the screen, confirm that you’ve captured what you want to discuss with Copilot, and then upload it to the Copilot or Bing right-side panel. When I tried it, I also needed to submit some text to go along with it, such as a question or additional context.

A screenshot in a Microsoft Edge window with a Copilot panel open on the right, with arrow pointing to new

(Image credit: Future)

Once the screenshot is uploaded, you can talk about it and ask about anything within it, or relating to it, with Bing Chat or Copilot on Windows 11.

As you can see in the screenshot, the new button sits next to the existing image upload button. You can try the new feature at Microsoft’s Copilot website, or over at Bing Chat.

Windows Latest has its own demonstration of this new feature, and I had a go of my own. First, I took a screenshot selection showing the full description of video creation platform HeyGen's YouTube channel, and asked Copilot to tell me two things: whose channel it is and what it’s about.

Copilot returned this: 

This is the YouTube channel of HeyGen. It is a next-gen video creation platform that turns text into professional spokesperson videos in minutes. They offer premium avatars speaking in multiple languages and professional video templates for various use cases including marketing, e-learning, and corporate communication. You can find more information about HeyGen on their website.

This is pretty accurate, and reminded me of one feature in particular that I really like about Bing Chat and Copilot – they readily and very visibly provide sources and websites that you can visit to double check the information. 

Once you make a selection of your screen, you can make markings on it and draw on it. You can also add specific instructional visuals to help Copilot understand your query, and you can move your selection window around to a different part of the screen altogether. 

According to Windows Latest, Bing Chat recently got a ChatGPT-4 boost granting it a new level of functionality, and this is likely making its way into Copilot as well. This development enables Copilot to engage in conversations about emotions, though access is currently granted only to a limited pool of users, seemingly at random. It should be available to everyone who uses Windows Copilot and Bing Chat very shortly.

Microsoft Bing logo on a white smartphone screen

(Image credit: Shutterstock / Primakov)

Microsoft charts a course ahead with Copilot

Microsoft has been pretty definitive in its messaging that Copilot is a big deal for the company, and will be a central feature in several products like Microsoft 365 and Windows, but not just those. 

In a pretty major (yet not terribly surprising) development, Microsoft is planning to add an actual physical Copilot button into the hardware of newly manufactured products as early as 2024. Microsoft is doing this in its continuing effort to make computing, especially AI-powered computing, simpler and more seamless for users. This was detailed and confirmed in a recent Windows Experience Blog post written by Yusuf Mehdi, Executive Vice President and Consumer Chief Marketing Officer at Microsoft.

If you’re not quite ready to throw out your older Windows device for this new button, you can bring up Windows Copilot with the keyboard shortcut Win+C (provided you’ve updated Windows 11 to a version that includes Windows Copilot).

According to Microsoft itself, the introduction of the Copilot key will be the most notable upgrade to the Windows keyboard in almost thirty years. It likens this future introduction to the addition of the Windows Start key, which is putting a lot of faith in Copilot itself so I imagine we’ll continue to see major developments to Copilot throughout this year. I think especially with Copilot’s development, Microsoft is one of the most exciting companies to watch this year. 


Facebook Messenger gets its biggest ever update – including a major privacy boost

Big changes are coming to Facebook Messenger, covering everything from photo and video sharing to user privacy. The changes are rolling out from today, although it may take some time for everyone's account to be updated.

Perhaps the biggest upgrade is the switch to end-to-end encryption as the default option for conversations – this had previously been available as an option in individual chats, but will now be automatically applied to all conversations and audio and video calls.

As on other similarly secured messaging apps like WhatsApp, end-to-end encryption means only you and the person or people you're chatting to can see the conversations – so no one else can intercept or unlock your communications, including staff at Meta, malicious actors, and law enforcement agencies.

The existing disappearing messages feature is getting tweaked, too: all messages now vanish after 24 hours (previously you could customize this), and Meta is making it easier for users to see when disappearing messages are enabled. You'll be alerted if anyone tries to take a screenshot of a disappearing message, too.

More upgrades

Message editing is coming to the Messenger app (Image credit: Meta)

Get ready for new photo and video layouts (Image credit: Meta)

In addition, Messenger is now joining Apple's iMessage in letting you edit messages after you've sent them. You get a 15-minute window after a message has been sent to revise it, if you've made a glaring typo or want to change the tone of your latest communication.

Another change is that read receipts can now be switched off, if you don't want other people knowing when you've seen their messages. As is the case with other messaging apps, there's a trade-off: you won't be able to see read receipts from other people either.

Photos and videos will now be shared at an “upgraded” quality, Meta says – so expect files that are less compressed when you share them around. Photos and videos will be easier to access in the Messenger interface, with some “fun” layouts applied when you share them in batches, and instant reactions to photos and videos are being added too.

Lastly, voice messages are going to get controls for variable speed playback, and the app will now remember where you left off in a voice message if you come back to it later. Voice messages will also continue to play if you navigate away from the chat or the app.

All in all, it's a big range of upgrades that'll be welcome for regular Messenger users, even if it might not convince others to switch from WhatsApp or iMessage.


Google Chrome gets 4 new mobile features to boost your search game

A Google Chrome update is revamping the way you search on mobile so you can find the information you’re looking for quicker than before. In total, four new features are being introduced.

Starting from the top, Chrome will now show relevant search suggestions whenever you tap the address bar on certain websites. The example given by Google is to imagine yourself “reading an article about Japan as you plan for an upcoming trip.” Upon tapping the URL of said article, a section called Related To This Page will appear below giving “suggestions for other searches” from local tourist attractions to restaurants. This feature will be available on both iOS and Android.

System exclusive

What won’t be coming to iOS (at least initially) is a list displaying all of the day’s trending Google searches. You’ll be able to see the list by tapping the address bar on a freshly opened tab. The company says this will hit Android phones first; later this year, Chrome on iOS will get the same thing, although an exact date wasn’t given.

Third in the Chrome update is a seemingly exclusive upgrade to Touch to Search on Android. Moving forward, whenever you highlight text on a website, a carousel of related topics will appear at the bottom of the page so you can quickly learn about the topic at hand. There’s a chance you won’t see the carousel, as Touch to Search may be deactivated; detailed instructions on how to activate the tool can be found on the Chrome Help website.

And finally, “typing in the Chrome address bar” on the iOS app will now display 10 suggestions instead of six. The Android app has had this feature for a while now. This is just Google updating the iPhone version so it’s on par.

Potential desktop changes

The company says all four updates are currently making their way to all users so keep an eye out for the patch when it arrives. 

As for Chrome on desktop, there’s officially nothing new. However, a report from The Verge reveals that the download tray in the web browser is in fact seeing some changes. A ring animation will now appear to display the progress of a download, and the tray will list every file “you downloaded within the previous 24 hours” alongside options to pause, resume, retry, or cancel the download.

It’s unknown when the desktop changes will be released. As we said, Google hasn’t said a word about it. We asked the company for more information regarding the download tray upgrade as well as clarification on some of the mobile features. We wanted to know if it plans on extending the Touch to Search carousel to iOS among other things. This story will be updated at a later time.


Android’s Nearby Share boost means it’s almost a match for Apple’s AirDrop

Nearby Share on Android has received a major upgrade, giving you the ability to send entire folders to other devices.

This feature was recently discovered by industry insider and tech journalist Mishaal Rahman who shared his findings on X (or Twitter, if you prefer the older, less obtuse name). Rahman states you’re able to transfer folders from one Android phone to another as well as to Chromebooks and Windows PCs via the Files by Google app. He says that all you have to do is long-press any folder within Google Files and then select the Nearby Share icon on-screen. From there, you will see all of the connected devices which can accept the transfer. Pretty simple stuff.


There are some limitations to be aware of. Tom’s Guide states in its report that “Nearby Share has a 1,000-file limit”, so folders can’t be too big. Another piece from Android Police reveals the upgrade is exclusive to Google Files, as it doesn’t seem to work properly with Samsung’s own file manager. Files will still be shared on Samsung's app, but the folder structure won’t be retained, according to Rahman.

What’s interesting is there’s a good chance you already have this feature if your device has Google Files. Rahman says that Nail Sadykov, another notable industry insider, claims “the earliest he saw someone mention it was back in May” of this year. It’s just that no one knew about it until very recently. Apparently, Google didn’t give anyone the heads-up.

So, if you have Google Files on your phone and haven’t updated it in a while, we recommend downloading the patch to get the boosted Nearby Share.

Closing the gap

Admittedly, it’s a small update, but an important one as it allows Nearby Share to close the gap a bit between it and Apple’s AirDrop. Android users will save a lot of time since they won’t be forced to transfer files one by one. It’s a function iPhone owners have enjoyed for many years now. It’s hard to say exactly when AirDrop first gained the ability to send folders to Macs. The oldest instance we could find was one of our How-to guides from 2015.

However, Nearby Share still has a long way to go before it can be considered a proper rival to AirDrop. For iOS 17, Apple plans on further enhancing its wireless file transfer tool by introducing new features like Contact Posters for friends plus improved security for unsolicited images.

If you’re looking for other management options besides Google Files, be sure to check out TechRadar’s list of the best file transfer software for 2023.


Google Assistant gets AI boost – but will it make it smarter?

The AI chatbot race is far from over, despite ChatGPT’s current dominance, and Google is not showing any signs of letting up. In fact, reports suggest Google is preparing to “supercharge” Assistant, its virtual personal assistant, by integrating generative AI features similar to the ones found in OpenAI’s ChatGPT and Google’s own generative AI chatbot, Bard.

Google has begun development on a new version of Google Assistant for mobile, as stated in an internal email circulated to employees and reported by Axios. This is allegedly going to take place through a reorganization of its present Assistant team, which will see a reduction of “a small number of roles”.

The exact number of employees that are expected to be let go has not been specified, though Axios has claimed that Google has already laid off “dozens” of employees. We have contacted Google to find out more.

Google Assistant

(Image credit: Google)

The newer, shinier, and AI-connected Google Assistant

As reported by The Verge, Google is looking to capitalize on the momentum of the rapid development of large language models (LLMs) like the ones behind ChatGPT to “supercharge Assistant and make it even better,” according to Google spokesperson Jennifer Rodstrom.

Google is placing a big bet on this Google Assistant gambit, being “deeply committed to Assistant” and its role in the future, according to Peeyush Ranjan, Google Assistant’s vice president, and Duke Dukellis, Google product director, in the email obtained by Axios.

This step in Google’s AI efforts follows Bard’s recent big update, which enabled it to respond to user queries by “talking” (presumably meaning that it replies using a generated voice, much like Google Assistant does), added support for visual prompts, opened Bard up to more countries, and introduced support for over 40 languages.

Google has not yet revealed what particular features it’s focusing on for Assistant, but there are plenty of ways it could improve its virtual assistant such as being able to respond in a more human-like manner using chatbot-like tech.

Making sure customer data remains safe and protected

Google Assistant is already in many people’s homes thanks to its inclusion in devices such as Android smartphones and Google Nest smart speakers, so Google has an extensive number of users to test with. “We’re committed to giving them high quality experiences,” Rodstrom told The Verge.

Of course, this does raise concerns about the privacy and security of its customers, as Google is likely to try and implement changes of this type to its smart home products, and some people may not be comfortable with giving the search giant even more access to their private lives. 

There is also a major concern (which, to be fair, also applies to other chatbots such as ChatGPT): the accuracy of information.

google home

(Image credit: Google)

Tackling the issue of bad information and final thoughts

Google could tackle accuracy and misinformation concerns by linking the generative AI being developed for Google Assistant devices to Google Search, as Bard is not intended to serve as an information source.

In a recent interview, Google UK executive Debbie Weinstein emphasized that users should double-check the information provided by Bard using Google Search (as reported by The Indian Express).

If we’re talking hands-free Assistant devices, I assume that there is development happening to add mechanisms of this sort. Otherwise, users would have to carry out a whole interrogation routine with their Assistant devices, which could interrupt the flow of using the device quickly and intuitively.

It’s an enticing idea – the home assistant that can fold your laundry and tell you bedtime stories – and steps like these feel like pushes in that direction. It all comes at a cost, though: the more tech saturates our lives, the more we expose ourselves to those who wish to use it for ill-intentioned purposes.

This is going to be a huge issue for many people – and it should be – and Google should make just as much of an effort to secure its users’ data as it does doing magic tricks with it. That said, many Google device and Android users will be looking forward to a more intelligent Google Assistant, as many report that they don’t get much sense out of it at the moment. We’ll see if Google can deliver on its proposed steps (hopefully) forward.

Hopefully, these upgrades to both Bard and Google Assistant will make them, well, more intelligent. Putting security and privacy aside (only for a brief moment), this has real potential to make users' home devices, like Nest devices, more advanced in their ability to react to your questions and requests with relevant information and tailor responses using your personal information (responsibly, we hope).


Your Oculus Quest 2 just got better hand-tracking to boost your virtual boxing skills

Meta has released the v56 update for the Meta Quest Pro and the Oculus Quest 2, which introduces a bunch of upgrades for two of the best VR headsets out there.

With the new Quest update, Meta is rolling out Hand Tracking 2.2, which it says aims to bring responsiveness more in line with what users experience with controllers. According to Meta, Hand Tracking 2.2 will reduce the latency experienced by a typical user by 40%, with a 75% latency reduction for fast hand movements.

Meta recommends that you download the Move Fast demo app from App Lab to get a feel for what these improvements mean in practice. It looks like a simple fitness trainer in which you have to punch, chop and block incoming blocks while looking out over a lake decorated with cherry blossom trees. Meta has said we can expect more hand-tracking improvements when the Meta Quest 3 launches later this year. It remains to be seen if these upgrades can keep up with the hand-tracking Apple is expected to launch with its Apple Vision Pro headset.

Another important improvement is coming just for Meta Quest Pro owners. One of the premium headset’s best upgrades over the Quest 2 is its display, which offers local dimming. This allows the screen to achieve deeper black levels and improved contrast, which can help a lot with immersion, as dark spaces actually look dark without becoming impossible to see. However, local dimming isn’t available in every app, so with v56 Meta is launching a local dimming Experimental Setting (which can be found in the Experimental menu in your headset’s Settings).

The feature is off by default, but if you turn it on you should see the benefits of local dimming in a load more titles – that is, unless a developer chooses to opt out. Just note that as with other experimental settings, you may find it isn’t quite perfect or causes some problems.

Quest 2 users aren't missing out on visual upgrades entirely though, as Meta recently announced that a Quest Super Resolution upscaling tool is coming to help developers make their games look and run better.

This month Meta is also improving the accessibility of Quest titles by introducing button mapping and live captions. Live captions will appear in your Quest headset’s Settings, under the Hearing section of the Accessibility menu. Once they're turned on, you’ll see live subtitles while using the Meta Quest TV app, Explore, and the in-headset Quest Store. In the same Accessibility menu, go to the Mobility section and you’ll find an option to remap your Quest controllers – you can swap any buttons you want on the handsets to create a completely custom layout.

These accessibility settings won’t revolutionize your headset overnight, but they’re a great first step. Hopefully, we’ll see Meta introduce captioning to more apps and services, and perhaps it’ll launch custom accessible controllers like Sony’s PS5 Access controller and Microsoft’s Xbox Adaptive Controller.

New ways to stay connected 

Beyond these major upgrades, Meta is rolling out a handful of smaller improvements as part of update v56.

First, when you leave your headset charging on standby between play sessions it can smartly wake up and install updates whenever it detects that your installed software is out of date. This should help to reduce instances of you going to play a game only to find that you need to wait for ages while your headset installs a patch.

Second is the new Chats and Parties feature. Whenever you start a call in VR a chat thread is also connected with all of the call members, so you can keep in contact later; you can also now start a call from a chat thread (whether it’s a one-on-one chat or a group chat).

Third, and finally, Meta is making it easier to stream your VR gameplay to Facebook, and while you play you’ll be able to see a live chat, so you can keep in contact with your viewers. While the platform isn’t many people’s first choice, it hopefully opens the door for easier real-time live streaming to more popular platforms like YouTube and Twitch.


Google Bard just got a super-useful Google Lens boost – here’s how to use it

Google Bard is getting update after update as of late, with the newest one being the incorporation of Google Lens – which will allow users to upload images alongside prompts to give Bard additional context.

Google seems to be making quite a point of expanding Bard’s capabilities and giving the chatbot a serious push into the artificial intelligence arena, either by integrating it into other Google products and services or simply improving the standalone chatbot itself.

This latest integration brings Google Lens into the picture, allowing you to upload images to Bard so it can identify objects and scenes, provide image descriptions, and search the web for pictures of what you might be looking for.

Screenshot of Bard (Image credit: Future)

Asking Google Bard to show me a kitten (Image credit: Future)

For example, I asked Bard to show me a photo of a kitten using a scratching post, and it pulled up a photo (accurately cited!) of exactly what I asked for, with a little bit of extra information on why and how cats use scratching posts. I also showed Bard a photo from my phone gallery, and it accurately described the scene and some tidbits of interesting information about rainbows.

Depending on what you ask Bard to do with the image provided, Bard can provide a variety of helpful responses. Since the AI-powered chatbot is mostly a conversational tool, adding as much context as you possibly can will consistently get you the best results, and you can refine its responses with additional prompts as needed. 

If you want to give Bard's new capabilities a try, just head over to the chatbot, click the little icon on the left side of the text box where you would normally type out your prompt, and add any photo you desire to your conversation. 

Alongside the image update, you can now pin conversation threads, get Bard to read responses out loud in over 40 languages, and get access to easier sharing methods. You can check out the Bard update page for a more detailed explanation of all the new additions.
