Logitech has built an AI sidekick tool that it hopes will help you work smarter, not harder, with ChatGPT

In a move that shows just how mainstream artificial intelligence (AI) has become, Logitech has launched its free Logi AI Prompt Builder software tool. It isn’t yet another AI chatbot, but is instead designed to help Logitech users get the most out of an existing chatbot, ChatGPT.

Logitech is also working on the hardware side of making AI-specific peripherals, launching a wireless mouse that’s equipped with an AI prompt button: the Logitech Signature AI Edition Mouse.

Who can access the Logi AI Prompt Builder, and where

The Logi AI Prompt Builder can be accessed via the existing Logi Options+ app. This is freely available to anyone using a Logitech keyboard or mouse that’s supported by the English version of the Logi Options+ app, which includes Logitech MX, Ergo, Signature, and Studio Series devices.

Logitech has set up a site detailing the new AI tool, and you can click ‘Download Now’ to get the Logi Options+ app. Once you’ve downloaded and installed it, you can designate a keyboard shortcut for quickly opening the Logi AI Prompt Builder. You can then launch the tool through the Logi Options+ app or with that shortcut, and it will offer recommendations for the text you’ve selected to discuss with ChatGPT.

A close-up of Logitech’s new AI-specific mouse, set on a table with its various parts labelled (Image credit: Logitech)

Logi AI Prompt Builder will then offer you suggestions for commonly used ChatGPT prompts, such as ‘Rephrase’ and ‘Summarise.’ You can also customize your queries within the tool, asking it to make suggestions that take into account the tone, style, complexity, and length of answer you’d like – something that other generative AI tools, like Microsoft’s own digital AI assistant, Windows Copilot, also offer. According to Logitech, the app makes for a smoother and less disruptive workflow, especially for those who make regular use of AI tools, because it requires fewer clicks and lets you work faster. You can check out how the tool works before downloading and installing it by watching a demo that Logitech has posted on the Logi AI Prompt Builder site.

I could see this having use beyond helping people work with ChatGPT, as other generative AI chatbots like Google’s Gemini and Anthropic’s Claude Sonnet might also offer better responses thanks to Logitech’s suggestions. 

Logi AI Prompt Builder is now live and free to use for anyone with a suitable Logitech device, and is available for both Windows and Mac users via the Logi Options+ app. The dedicated Logitech Signature AI Edition Mouse is currently available exclusively on the Logitech.com website for $49.99 in the US and £54.99 in the UK.

A man sitting at a table with the AI tool on his computer screen, in a room filled with modern decor (Image credit: Logitech)

A vote of confidence for generative AI

This launch has really piqued my interest, because it’s a substantial move from a company that mostly specializes in PC peripherals – a sign that it’s not just computer manufacturers making products that embrace our AI future. It also tells me that companies like Logitech are convinced of generative AI’s staying power.

It’s one step closer to AI being a normal part of our work and everyday lives, and it reminds me of Microsoft’s plan to add a Copilot key to the keyboards of new laptop models. I’m keen to try a tool like this for myself and see if my workflow becomes smoother, because if it does, Logitech, Microsoft, and others could be on to something.


Google Maps is getting a new update that’ll help you discover hidden gems in your area thanks to AI – and I can’t wait to try it out

It looks like Google Maps is getting a cool new feature that’ll make use of generative AI to help you explore your town – grouping different locations to make it easier to find restaurants, specific shops, and cafes. In other words, no more sitting around and mulling over where you want to go today!

Android Authority did an APK teardown (which essentially means decompiling an app’s binary code into human-readable source code) that hints at some new features on the horizon. The code within the Google Maps beta included mentions of generative AI, which led Android Authority to Google Labs. If you’re unfamiliar with Google Labs, it’s a platform where users can experiment with Google’s current in-development tools and AI projects, like Gemini Chrome extensions and music ‘Time Travel’.

So, what exactly is this new feature that has me so excited? Say you’re really craving a sweet treat. Instead of going back to your regular stop or simply Googling ‘sweet treats near me’, you’ll be able to ask Google Maps for exactly what you’re looking for and the app will give you suggestions for nearby places that offer it. Naturally, it will also provide you with pictures, ratings, and reviews from other users that you can use to make a decision.

Sweet treat treasure hunter 

As someone who has a habit of going to the same places over and over again – either because I don’t know any alternatives or simply haven’t explored other parts of my city – I absolutely love the idea, and I really hope we get to see the feature come to life. It has the potential to seriously upgrade Google Maps’ more specific location searches, going beyond simply typing in the name of the shop you want or selecting a vague group like ‘Restaurants’ as you can currently.

You’ll be able to see your results sorted into categories, and if you want more in-depth recommendations you can ask follow-up questions to narrow down your search – much in the same way that AI assistants like Microsoft Copilot can ‘remember’ your previous chat history to provide more context-sensitive results. I often find myself craving a little cake or a delicious cookie, so I could tell the app exactly what I’m after and get a personalized list of reviewed recommendations.

We’re yet to find out when exactly to expect this new feature, and without an official announcement, we can’t be 100% certain that it will ever make a public release. However, I’m sure it would be a very popular addition to Google Maps, and I can’t wait to discover new places in my town with the help of an AI navigator.


ChatGPT might get its own dedicated personal device – with Jony Ive’s help

Sam Altman, the CEO of ChatGPT developer OpenAI, is reportedly seeking funding for an AI-powered, personal device – perhaps not unlike the Humane AI Pin – and ex-Apple design guru Jony Ive is apparently getting involved as well.

This is according to The Information (via MacRumors), and the rumor is that Altman and Ive have started a “mysterious company” together to make the device a reality. The report doesn’t say much about the hardware, except that it won’t look like a smartphone.

As we've seen with the Humane AI Pin and the Rabbit R1, having an AI assistant running on a device means you don't necessarily need a display and traditional apps – the artificial intelligence engine can do everything for you, no tapping or scrolling required.

Altman and Ive are said to be seeking around $1 billion in funding, so this is clearly a major undertaking we're talking about. It's not clear how much involvement OpenAI would have, but its ChatGPT bot would most likely be used on the new device.

Previous rumors

ChatGPT could find itself in a new device (Image credit: Shutterstock/Daniel Chetroni)

This hasn't come completely out of the blue: back in September The Financial Times reported that Altman and Ive were “in talks” to get funding for a new project from SoftBank, a Japanese investment company.

SoftBank has a stake in CPU company Arm, which might be tapped to provide components for the hardware – which can't run entirely on AI cloud magic of course. All this is speculation for the time being, however.

In January, Sam Altman was spotted touring around a Samsung chip factory, so all the indications are that he's planning something in terms of physical hardware. It remains to be seen just how advanced this hardware is though.

During his time with Apple, Jony Ive led the design teams responsible for the iPod, iPhone, iPad and MacBook, so whatever is in the pipeline, we can expect it to look stylish. We can also expect to hear more about this intriguing device in the years ahead.


Chrome’s new Declutter tool may soon help manage your 100-plus open tabs

Recent evidence suggests Chrome on Android may receive a new Tab Declutter tool to help people manage so many open tabs. Hints of this feature were discovered in lines of code on Google’s Chromium platform by 9To5Google. It’s unknown exactly how Tab Declutter will work, although there is enough information to paint a picture.

According to the report, tabs that have been unused for a long period of time “will automatically” be put away in an archive. You can then go over to the archive editor, look at what’s there, and decide for yourself whether you want to delete a tab or restore it. 

Not only could Tab Declutter help people manage a messy browser, but it might also boost Chrome’s performance. All those open tabs can eat away at a device's RAM, slowing things down to a crawl.

This isn’t the first time Google has worked on improving tab management for its browser. Back in January, the company implemented an organizer tool harnessing the power of AI to instantly group tabs together based on a certain topic.  

These efforts go as far back as 2020, when the tech giant began developing a feature that would recommend closing certain tabs if they’d been left alone for an extended period of time. It was similar to the new Declutter tool, though much less aggressive, since it wouldn’t archive anything. Ultimately nothing came of it, but it seems Google is now revisiting the old idea.

Speculating on all the open tabs

As 9To5Google points out, this has the potential to “become one of the most annoying features” the company has ever made. Imagine Chrome archiving tabs you wanted to look at without letting you know – it could get frustrating pretty fast.

Additionally, would it be possible to set a time limit for when an unused page is allowed to be put away? Will there be an exception list telling Chrome to leave certain websites alone? We'll have the answer if and when this feature eventually goes live.

We have no word on when Tab Declutter will launch, and it’s unknown whether Chrome on iOS is scheduled to receive a similar upgrade to the Chromium-based Android version. It’s possible Android devices will get first dibs, with iPhones following later – or missing out altogether, since Chrome on iOS doesn’t use the same Chromium engine.

9To5Google speculates the update will launch in early May as part of Chrome 125. This seems a little early if it’s still in the middle of development. Late summer to early autumn is more plausible, but we could be totally wrong. We’ll just have to wait.

Until we get more news, check out TechRadar's roundup of the best Chromebooks for 2024.


Samsung Galaxy Ring could help cook up AI-powered meal plans to boost your diet

As we get closer to the full launch of the Samsung Galaxy Ring, we're slowly learning more about its many talents – and some fresh rumors suggest these could include planning meals to improve your diet.

According to the Korean site Chosun Biz (via GSMArena), Samsung plans to integrate the Galaxy Ring with its new Samsung Food app, which launched in August 2023.

Samsung calls this app an “AI-powered food and recipe platform”, as it can whip up tailored meal plans and even give you step-by-step guides to making specific dishes. The exact integration with the Galaxy Ring isn't clear, but according to the Korean site, the wearable will help make dietary suggestions based on your calorie consumption and body mass index (BMI).

The ultimate aim is apparently to integrate this system with smart appliances (made by Samsung, of course) like refrigerators and ovens. While they aren't yet widely available, appliances like the Samsung Bespoke 4-Door Flex Refrigerator and Bespoke AI Oven include cameras and can suggest or cook recipes based on your dietary needs.

It sounds like the Galaxy Ring, and presumably smartwatches like the incoming Galaxy Watch 7 series, are the missing links in a system that can monitor your health and feed that info into the Samsung Food app, which you can download now for Android and iOS.

The Ring's role in this process will presumably be more limited than that of smartwatches, whose screens can help you log meals and more. But the rumors hint at how big Samsung's ambitions are for its long-awaited ring, which will be a strong new challenger in our best smart rings guide when it lands (most likely in July).

Hungry for data

A phone on a grey background showing the Samsung Food app (Image credit: Samsung)

During our early hands-on with the Galaxy Ring, it was clear that Samsung is mostly focusing on its sleep-tracking potential. It goes beyond Samsung's smartwatches here, offering unique insights including night movement, resting heart rate during sleep, and sleep latency (the time it takes to fall asleep).

But Samsung has also talked up the Galaxy Ring's broader health potential more recently. It'll apparently be able to generate a My Vitality Score in Samsung's Health app (by crunching together data like your activity and heart rate) and eventually integrate with appliances like smart fridges.

This means it's no surprise to hear that the Galaxy Ring could also play nice with the Samsung Food app. That said, the ring's hardware limitations mean this will likely be a minor feature initially, as its tracking is more focused on sleep and exercise. 

We're actually more excited about the Ring's potential to control our smart home than integrate with appliances like smart ovens, but more features are never a bad thing – as long as you're happy to give up significant amounts of health data to Samsung.


Feeling lost in the concrete jungles of the world? Fear not, Google Maps introduces a new feature to help you find entrances and exits

Picture this: you’re using Google Maps to navigate to a place you’ve never been and time is pressing, but you’ve made it! You’ve found the location, but there’s a problem: you don’t know how to get into whatever building you’re trying to access, and panic sets in. Maybe that’s just me, but if you can relate it looks like we’re getting some good news – Google Maps is testing a feature that shows you exactly where you can enter buildings.

According to Android Police, Google Maps is working on a feature showing users entrance indicator icons for selected buildings. I can immediately see how this could make it easier to find your way in and out of a location. Loading markers like this would require a lot of internet data if done for every suitable building in a given area, especially metropolitan and densely packed areas, but it seems Google has accounted for this; the entrance icons will only become visible when you select a precise location and zoom in closely. 

Google Maps is an immensely popular app for navigation, as well as for looking up recommendations for various activities, like finding attractions or places to eat. If you’ve ever actually done this in practice, you’ve probably had a situation like the one I’ve described above, especially if you’re trying to find your way around a larger attraction or building. Trying to find the correct entrance to an expo center or sports stadium can be a nightmare: places like these often have multiple entrances with different accessibility options, as do underground train stations that stretch across several streets.

Google's experiment should help users manage those parts of their journeys better – starting with only certain users and certain buildings for now – by displaying icons that indicate both where you can enter a place and where you can exit it (useful if there are entrance-only or exit-only doors, for example). It follows Google Maps’ recent addition of indicators showing public transport users the best station entrances and exits.

Google Maps being used to travel across New York (Image credit: Shutterstock / TY Lim)

The present state of the new feature

Android Police tested the new feature on Google Maps version 11.17.0101 on a Google Pixel 7a. As Google seemingly intended, Google Maps showed entrances for a place only when it was selected and while the user zoomed in on it, showing a white circle with a symbol indicating ‘entry’ on it. That said, Android Police wasn’t able to use the feature on other devices running the latest version of Google Maps for different regions, which indicates that Google Maps is rolling this feature out gradually following limited and measured testing. 

While using the Google Pixel 7a, Android Police tested various types of buildings, including hotels, doctors’ offices, supermarkets, hardware stores, cafes, and restaurants, in cities including New York City, Las Vegas, San Francisco, and Berlin. Some places had the new entrance and exit markers and some didn’t, which probably means that Google is still in the process of gathering accurate and up-to-date information on these places, most likely via its Street View tool. Another issue was that some of the indicated entrances weren’t in the right place, but teething issues are inevitable, and this problem seemed more common for smaller buildings, where it’s easier to find the entrance once you’re there in person anyway.

The entrances were sometimes marked by a green arrow instead of a white circle, and it’s not clear at this point what the difference between the two means. Google Maps has a reputation as a very helpful, functional, and dependable app, so whatever new features are rolled out, Google will want to make sure they’re up to a certain standard. I hope it completes the necessary experimentation and implementation soon, and I look forward to using the feature as soon as I can.


Not spending enough on Amazon already? Its new AI chatbot is here to help

If there's one tech innovation that our bank accounts didn't need in 2024, it's an Amazon chatbot with infinite knowledge of the site's array of potential impulse buys. But unfortunately for our savings, that's exactly what we've just been given in the form of Rufus.

Amazon says its Rufus chatbot has now launched in the US in beta form to “a small subset of customers” who use its mobile app, but that it'll “progressively roll out to additional US customers in the coming weeks”. Rufus is apparently “an expert shopping assistant” who's been trained on Amazon's product catalog and will help answer your questions in a conversational way.

Rather than Googling for extra advice on the differences between trail and road running shoes, the idea is that you can instead search for pointers in the Amazon app and Rufus will pop up with the answers. 

Quite how good those answers are remains to be seen, as Amazon says they come from “a combination of extensive product catalog, customer reviews, community Q&As, and information from across the web”. Considering the variable quality of Amazon's reviews, and the tendency of AI chatbots to hallucinate, you may still want to cross-reference your research with some external sources. 

Still, it's an early glimpse at the future of shopping, with retailers looking to arm you with all of the information you need so you can, well, spend more money with them. Amazon says that the questions can be as broad as “what are good gifts for Valentine’s Day?”, but also as specific as “is this cordless drill easy to hold?” if you're on a product page.

How to find and use Rufus

Right now, Rufus is only being made available to “select customers when they next update their Amazon Shopping app”. But if you live in the US and are keen to take it for a spin, it's worth updating your iOS or Android app to see if you're one of the early chosen ones.

If you are, the bar at the top of the app should now say “search or ask a question”. That's where you can fire conversational questions at Rufus, like “what to consider when buying headphones?”, or prompts like “best dinosaur toys for a 5-year-old” or “I want to start an indoor garden”.

The ability to ask specific questions about products on their product pages also sounds handy, although this will effectively only be a summary of the page's Q&As and reviews. Given our experience with AI shopping chatbots so far, we'd be reluctant to take everything at face value without double-checking with another source.

Still, with Rufus getting a wider US rollout in “the coming weeks”, it is a pretty major change to the Amazon app – and could change how we shop with the retail giant. Amazon will no doubt be hoping it convinces us to spend more – maybe we need two chatbots, with the other one warning us about our overdraft.


The Meta Quest 3 yoinks Vision Pro’s spatial video to help you relive your memories

Just as the Vision Pro launches, Meta has started rolling out software update v62 to its Meta Quest 3, Quest Pro, and Quest 2. The new software’s headline feature is that it’s now a lot easier to watch your spatial video recordings on Quest hardware – stealing the Vision Pro’s best feature.

You’ve always been able to view 3D spatial video (or stereoscopic video, as most people call it) on Quest hardware. And using a slightly awkward workaround, you could convert spatial video recordings made with an iPhone 15 Pro into a Quest-compatible format to watch them in 3D without needing Apple’s $3,500 Vision Pro. But, as we predicted it would, Meta has made this conversion process a lot simpler with v62.

Now you can simply upload the captured footage through the Meta Quest mobile app and Meta will automatically convert and send it to your headset – even giving the videos the same cloudy border as you’d see on the Vision Pro. 

You can find the recordings, and a few Meta-made demo videos, in the spatial videos section of the Files menu on your Quest headset.

You need an iPhone 15 Pro or Pro Max to record 3D video (Image credit: Future | Alex Walker-Todd)

Spatial video has been a standout feature referenced in nearly every review featured in our Apple Vision Pro review roundup – with our own Lance Ulanoff calling it an “immersive trip” after one of his demos with the Apple headset. So it’s almost certainly not a coincidence that Meta has announced it’s nabbed the feature literally as the Vision Pro is launching.

Admittedly, Quest spatial video isn’t identical to the Vision Pro version, as you need an iPhone 15 Pro – on the Vision Pro you can use an iPhone or the headset itself – but over time there’s one potential advantage Meta’s system could have: non-exclusivity.

Given that other smartphone manufacturers are expected to launch headsets of their own in the coming year or so – such as the already teased Samsung XR headset created in partnership with Google – it’s likely the ability to record 3D video will come to non-iPhones too. 

If this happens, you’d likely be able to use whichever brand of phone you like to record 3D videos that you can then convert and watch on your Quest hardware through the Meta Quest app. Given Apple’s typical walled-garden approach, you’ll likely always need an iPhone to capture 3D video for the Vision Pro and Apple’s future headsets – and Samsung, Google, and other smartphone makers may also impose some kind of walled garden to lock you into their hardware.

A gif showing a person pinching their fingers to open the Quest menu (Image credit: Meta)

Other v62 improvements 

It’s not just spatial video coming in this Quest operating system update.

Meta has added support for a wider array of controllers – including the PS5 DualSense controller and PS4 DualShock – that you can use to play games through apps like the Xbox Cloud Gaming (Beta) or the Meta Quest Browser.

Facebook Livestreaming, after being added in update v56, is now available to all Meta Quest users. So now everyone can share their VR adventures with their Facebook friends in real-time by selecting “Go Live” from the Camera icon on the Universal Menu while in VR (provided your Facebook and Meta accounts are linked through the Accounts Center). 

If you prefer YouTube streaming, it’s now possible to see your chat while streaming without taking the headset off provided you’re using OBS software.

Lastly, Meta is improving its hand-tracking controls so you can quickly access the Universal Menu by looking at your palm and doing a short pinch. Doing a long pinch will recenter your display. You can always go back to the older Quick Actions Menu by going into your Settings, searching for Expanded Quick Actions, and turning it back on.


Google’s new generative AI aims to help you get those creative juices flowing

It’s a big day for Google AI as the tech giant has launched a new image-generation engine aimed at fostering people’s creativity.

The tool is called ImageFX, and it runs on Imagen 2, Google's “latest text-to-image model”, which Google claims can deliver the company's “highest-quality images yet.” Like so many other generative AIs before it, it generates content from a prompt that users enter into a text box. What’s unique about the engine is that it comes with “Expressive Chips” – dropdown menus over keywords that let you quickly alter the content with adjacent ideas. For example, ImageFX gave us a sample prompt of a dress carved out of deadwood, complete with foliage. After it made a series of pictures, the AI offered the opportunity to change certain aspects, turning a beautiful forest-inspired dress into an ugly shirt made out of plastic and flowers.

ImageFX-generated dress and shirt (Image credit: Future)

Options in the Expressive Chips don’t change; they remain fixed to the initial prompt, although you can add more to the list by selecting the tags down at the bottom. There doesn’t appear to be a way to remove tags – users will have to click the Start Over button to begin anew. If the AI manages to create something you enjoy, it can be downloaded or shared on social media.

Be creative

This obviously isn’t the first time Google has released a text-to-image generative AI. In fact, Bard just received the same ability. The main difference with ImageFX is, again, its encouragement of creativity. The chips can help spark inspiration by giving you ideas for how to direct the engine – ideas that you may never have thought of. Bard’s feature, on the other hand, offers little to no guidance. Because it’s less user-friendly, directing Bard’s image generation will be trickier.

ImageFX is free to use on Google’s AI Test Kitchen. Do keep in mind it’s still a work in progress. Upon visiting the page for the first time, you’ll be met with a warning message telling you the AI “may display inaccurate info”, and in some cases, offensive content. If this happens to you, the company asks that you report it to them by clicking the flag icon. 

Google also wants people to keep things clean: the warning links to its Generative AI Prohibited Use Policy, which lists what you can’t do with ImageFX.

AI updates

In addition to ImageFX, Google made several updates to past experimental AIs. 

MusicFX, the brand’s text-to-music engine, now allows users to generate songs up to 70 seconds in length as well as alter their speed. The tool even received Expressive Chips, helping people get those creative juices flowing, and a performance boost that lets it produce content faster than before. TextFX, on the other hand, didn’t see a major upgrade or new features; Google mainly updated the website so it’s easier to navigate.

MusicFX’s new layout (Image credit: Future)

Everything you see here is available to users in the US, New Zealand, Kenya, and Australia. No word on if the AI will roll out elsewhere, although we did ask. This story will be updated at a later time.

Until then, check out TechRadar's roundup of the best AI art generators for 2024 where we compare them to each other. There's no clear winner, but they do have their specialties. 


OneDrive is getting a glow-up, promising an optimized interface and power-packed features to help you navigate your files

Microsoft has given OneDrive a visual and functional makeover, rolling it out in an update for OneDrive personal users. 

Microsoft announced the update last month, promising a revamped OneDrive user experience with a sleeker design and powerful new features. Now, the update is actually rolling out to OneDrive personal users.

The tech titan posted the announcement on its official blog and has begun a gradual rollout, stating that the changes will be available to all OneDrive personal users by the end of February. It adds that the changes are designed to help users perform tasks more quickly in OneDrive and make it easier to focus on their files.

One of the new features that users can look forward to is People View. This will show users their contacts with all of the files that they collaborate on together – so you don’t have to remember the names of files if they’re shared between you and a contact. Often, we can remember who we share files with or who shares files with us more easily than a specific file’s name. Additionally, users will be able to filter files by type, so if you want to see all the Word documents or Excel spreadsheets on your OneDrive, you can use specific Word or Excel filters while searching. 

Additional OneDrive functionality

Microsoft has also expanded the Add New button’s functionality, giving users the option to either upload files to OneDrive or begin a new document. With both actions available from a single button, Microsoft hopes working in OneDrive will be more streamlined.

It looks like these upgrades will apply to all users with a OneDrive account. You can access OneDrive on desktop with Microsoft 365 or online for free with a Microsoft account. In its announcement blog post, Microsoft also mentions that it’s open to feedback and you can provide your opinion in the OneDrive feedback portal. 

It’s a solid set of developments that looks set to deliver a better organized and faster OneDrive, as long as these changes arrive on time. If Microsoft continues along this path, I could see OneDrive becoming more and more users’ choice of cloud storage. You may be able to see these changes already if you have OneDrive, but everyone should be able to access them by the end of February.
