Google Maps is getting a new update that’ll help you discover hidden gems in your area thanks to AI – and I can’t wait to try it out

It looks like Google Maps is getting a cool new feature that’ll make use of generative AI to help you explore your town – grouping different locations to make it easier to find restaurants, specific shops, and cafes. In other words, no more sitting around and mulling over where you want to go today!

Android Authority did an APK teardown (decompiling an app’s installation package back into readable code and resources to look for signs of unreleased features), which hints at some new features on the horizon. The code within the Google Maps beta included mentions of generative AI, which led Android Authority to Google Labs. If you’re unfamiliar with Google Labs, it’s a platform where users can experiment with Google’s in-development tools and AI projects, like Gemini Chrome extensions and music ‘Time Travel’.
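For the curious: an APK is just a ZIP archive, so the crudest form of a teardown – before full decompilation with dedicated tools – is simply scanning the archive’s files for telltale strings. Here’s a minimal Python sketch of that idea; the file and flag names are made up for illustration and aren’t from the actual Google Maps beta:

```python
import zipfile

def find_feature_strings(apk_path, needles):
    """Scan every file inside an APK (a ZIP archive) for byte strings
    that might hint at unreleased features."""
    hits = []
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            data = apk.read(name)
            for needle in needles:
                if needle.encode() in data:
                    hits.append((name, needle))
    return hits

# Hypothetical usage: look for strings suggesting a generative AI feature
# find_feature_strings("maps-beta.apk", ["generative_ai", "gen_ai_search"])
```

Real teardowns go further, decompiling the app’s DEX bytecode and resources into readable form, but string-hunting like this is often where the trail starts.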

So, what exactly is this new feature that has me so excited? Say you’re really craving a sweet treat. Instead of going back to your regular stop or simply Googling ‘sweet treats near me’, you’ll be able to ask Google Maps for exactly what you’re looking for and the app will give you suggestions for nearby places that offer it. Naturally, it will also provide you with pictures, ratings, and reviews from other users that you can use to make a decision.

Sweet treat treasure hunter 

As someone who habitually goes to the same places over and over again, either because I don’t know any alternatives or just haven’t discovered other parts of my city, I absolutely love this idea and really hope the feature comes to life. It has the potential to seriously upgrade Google Maps’ more specific location searches, beyond simply typing in the name of the shop you want or selecting a vague group like ‘Restaurants’ as you can currently.

Your results will be sorted into categories, and if you want more in-depth recommendations you can ask follow-up questions to narrow down your search – much in the same way that AI assistants like Microsoft Copilot can ‘remember’ your previous chat history to provide more context-sensitive results. I often find myself craving a little cake or a delicious cookie, so if I want that specific treat I can tell the app exactly what I’m craving and get a personalized list of reviewed recommendations.

We’re yet to find out when exactly to expect this new feature, and without an official announcement, we can’t be 100% certain that it will ever make a public release. However, I’m sure it would be a very popular addition to Google Maps, and I can’t wait to discover new places in my town with the help of an AI navigator.

You might also like…

TechRadar – All the latest technology news

Read More

Microsoft’s Notepad goes from a simple text editor to a mini-Word thanks to spell check and autocorrect – but could it lose its charm?

The once-unloved Microsoft Notepad app continues to get new features, with spell check and autocorrect reportedly coming to the Windows staple next. Originally debuting as a heavily stripped-down version of Microsoft Word, Notepad is now beginning to resemble Word more and more with each successive update. 

This latest Notepad update is currently only available in Windows 11 Preview Build 26085, which you can get through the Windows Insider Program, Microsoft’s community for professionals and Windows enthusiasts to try out new Windows versions and features before they’re released to the wider user base.

According to MSPowerUser, the upgraded Notepad app (version 11.2402.18.0) is available in both the Dev and Canary release channels of the Windows Insider Program. Apparently, the update will also allow users to customize how these new features are used. This is good news, as Notepad is widely known as a simple text editor, and I’m sure many users will prefer to keep it that way.

Windows Insider @PhantomOfEarth shared the Notepad upgrade on X (formerly Twitter), where he noted that the features are currently being tested by Microsoft ahead of a wider rollout. He also shared a screenshot of what Notepad’s settings page will look like and some of the new settings that users will be able to adjust (specifically, being able to turn autocorrect and spell check on and off).


While not seen in this screenshot, MSPowerUser claims that additional settings will allow users to tailor their feature preferences even further by selecting which file types the new features apply to. It also reports that beyond Notepad, Microsoft is experimenting with new sections in the Windows 11 settings menu and new user interface (UI) animations that will be included in this Windows preview build.

Early user reception of the new Notepad

The introduction of spell check and autocorrect into Notepad follows the recent introduction of Cowriter, an artificial intelligence (AI) writing assistant, which was seen in a previous preview build.

Cowriter didn’t get the warmest user response, as again, Notepad is Windows’ staple ‘simple text app’, and many users aren’t interested in additional bells and whistles. It’s also a pretty overt attempt by Microsoft to carry out its promise to inject AI into as much of the user experience in Windows as possible, which has rubbed some users the wrong way. 

It does seem that Microsoft may have taken note of this backlash in its attempts to flesh out Notepad further, giving users options in settings to turn the new features on and off and to tailor which file types they apply to. I think this is wise, and Microsoft would do well to keep this behavior up, especially if it insists on changing and removing apps that users love and have gotten used to over decades. After all, Microsoft killed off WordPad just a few months ago – but that doesn’t mean we all want Notepad to simply replace it. Sometimes, simplicity is better.


Microsoft Paint could get Midjourney-like powers soon thanks to a surprise AI upgrade

Microsoft has been paying quite a lot of attention to its once-forgotten Paint app recently, which had gone years without any meaningful updates or new features. Now, it seems like the app is getting yet another upgrade – a Midjourney-like ability to generate AI art in real-time. 

So, what does that mean? If you’re unfamiliar with the popular image generator Midjourney, it’s an AI-powered tool that allows you to type in a text prompt to generate an image in a style of your choosing – be it paintwork, photorealism, or even pixel art.

The rumor comes from the credible Windows leaker PhantomOfEarth on X (formerly Twitter), who made a post stating that “The upcoming AI feature for paint may be something known as ‘LiveCanvas’”. While the leaker isn’t entirely sure what exactly the feature will be, it does sound very similar to Leonardo.Ai’s Real-Time Canvas.


Real-Time Canvas allows you to draw in one window and watch in a second window as generative AI brings your art to life – like a sort of artistic auto-fill. This would fit perfectly in Microsoft Paint – users would be able to sketch out their ideas or create art and use the generative AI technology to add to it. Microsoft already has some basic (and, if I’m being honest, kind of average) AI-powered image generation within Paint, so it would make sense to add a more interactive feature like this rather than simply repeat something it already has.

We’re quite excited to see how this tool could help budding artists looking to experiment with generative AI, since it’ll be available free in Windows. With the ability to draw in one window and edit in another, you can create the bare bones of your artwork and add finer details with the AI. It's approaching a more 'moral' application of generative AI – one that doesn't simply cut out the human creator entirely.

We don’t know much about expected release dates or even have a rough idea of what the feature would look like outside of PhantomOfEarth’s post – and, as always, we should take leaks like this with a pinch of salt. Likely, the feature will eventually make its way to the Windows Insider Program, which allows Windows enthusiasts and developers to sign up and get an early look at upcoming releases and new features that may be on the way. So, we’ll have to wait and see if it comes to fruition – and get doodling.


Microsoft is giving Windows Copilot an upgrade with Power Automate, promising to banish boring tasks thanks to AI

Microsoft has revealed a new plug-in for Copilot, its artificial intelligence (AI) assistant, named Power Automate that will enable users to (as the name suggests) automate repetitive and tedious tasks, such as creating and manipulating entries in Excel, handling PDFs, and file management. 

This development is part of a bigger Copilot update package that will see several new capabilities being added to the digital AI assistant.

Microsoft gives the following examples of tasks this new Copilot plug-in could automate: 

  • Write an email to my team wishing everyone a happy weekend.
  • List the top 5 highest mountains in the world in an Excel file.
  • Rename all PDF files in a folder to add the word final at the end.
  • Move all word documents to another folder.
  • I need to split a PDF by the first page. Can you help?

Who can get the Power Automate plug-in and how

As of now, it seems like this plug-in is only available to some users with access to Windows 11 Preview Build 26058, available to Windows Insiders in the Canary and Dev Channels of the Windows Insider Program. The Windows Insider Program is a Microsoft-run community for Windows enthusiasts and professionals where users can get early access to upcoming versions of Windows, features, and more, and provide feedback to Microsoft developers to improve these before a wider rollout.

Hopefully, the Power Automate plug-in for Copilot will prove a hit with testers – and if it is, we should see it rolled out to all Windows 11 users soon.

As per the blog post announcing the Copilot update, this is the first release of the plug-in, which is part of Microsoft’s Power Platform, a comprehensive suite of tools designed to help users make their workflows more efficient and versatile – including Power Automate. To be able to use this plug-in, you’ll need to download Power Automate for Desktop from the Microsoft Store (or make sure you have the latest version of Power Automate). 

There are multiple options for using Power Automate: a free plan, suitable for personal use or smaller projects, and premium plans that offer packages with more advanced features. From what we can tell, the ability to enable the Power Automate plug-in for Copilot will be available to all users, free and premium, but Microsoft might change this.

Once you’ve made sure you have the latest version of Power Automate downloaded, you’ll also need to be signed into Copilot for Windows with a Microsoft Account. Then you’ll need to add the plug-in to Copilot. To do this, go to the plug-in section in the Copilot app for Windows and turn on the Power Automate plug-in, which should now be visible. Once enabled, you should be able to ask it to perform a task like one of the above examples and see how Copilot copes for yourself.

Once you try the plug-in for yourself, if you have any thoughts about it, you can share them with Microsoft directly at [email protected]

Copilot in Windows

(Image credit: Microsoft)

Hopefully, a sign of more to come

The language Microsoft is using about the plug-in implies that it will see improvements in the future to enable it and, therefore, Copilot to carry out more tasks. Upgrades like this are steps in the right direction if they’re as effective as they sound. 

This could address one of the biggest complaints people have about Copilot since it was launched. Microsoft presented it as a Swiss Army Knife-like digital assistant with all kinds of AI capabilities, and, at least for now, it’s not anywhere near that. While we admire Microsoft’s AI ambitions, the company did make big promises, and many users are growing impatient. 

I guess we’ll have to just continue to watch whether Copilot will live up to Microsoft’s messaging, or if it’ll go the way of Microsoft’s other digital assistants like Cortana and Clippy.


Windows 11 with no taskbar? A crucial part of Microsoft’s OS has gone missing for some thanks to new update

Windows 11’s latest cumulative update comes with an odd bug, it seems, one that reportedly causes the taskbar to disappear – or rather, to become blank space.

The February patch (KB5034765) for Windows 11 23H2 (and 22H2) has seen a number of complaints from users who have witnessed their desktop being affected by this apparent glitch, which as you can imagine is pretty frustrating.

As Neowin flagged up, there are reports on Microsoft’s Feedback Hub and the Reddit mega-thread on said patch from folks who have been hit by this problem.

One Redditor wrote: “Yay, both my Win11 23h2 workstations have no taskbar after updates and a reboot… have to kill explorer and relaunch.”

Somebody replied to that: “This is happening to me as well I thought something broke, but removing KB5034765 resolved it for me. I don’t even see explorer.exe in my running tasks when that happened, though.”

There are a number of other reports, as mentioned, with those affected not able to launch their taskbar pinned apps (as the icons aren’t there, of course), or see the system tray, access Quick Settings and so on. The basic ability to see your running apps and switch between them on the bar is obviously missing in action, too.
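If you’d rather not uninstall the update, the workaround mentioned in those reports – killing and relaunching Explorer – can be run from a Command Prompt or Task Manager’s ‘Run new task’ box. These are standard Windows commands, though, as the reports suggest, the taskbar may vanish again after a reboot:

```shell
:: Restart Windows Explorer, which draws the taskbar and system tray
taskkill /f /im explorer.exe
start explorer.exe
```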


Analysis: Have a little patience?

We should note that in the interest of balance, a lot of folks on that Reddit thread are saying they had no issues with KB5034765. It’s not clear how widespread the vanishing taskbar gremlin might be, and Microsoft has not acknowledged the problem yet in its known issues – but we get the feeling it has a limited impact, looking at the overall feedback on this patch.

As noted above, the only solution seems to be uninstalling the February cumulative update, which certainly works to return the taskbar to its normal state.

The slight twist here is that this problem has been seen before, and another Redditor offers up a theory as follows: “The taskbar missing thing is part of the EU policy updates. Taskbar is not showing for up to 10 minutes, it’s normal and has been in the Release Preview Channel for 2-3 months.”

This makes some sense, as there’s some heavy duty work on the taskbar going on with those EU regulation-related changes, like unhooking Bing from the search box on the bar.

So, in theory, the taskbar may reappear soon after applying the update – maybe. But given the number of affected folks, and with no one that we can see reporting the taskbar reappearing on its own, there could be more to this issue than that. Unless everyone hitting the snag is uninstalling KB5034765 pretty sharpish, which seems unlikely across the board.


YouTube Shorts gains an edge over TikTok thanks to new music video remix feature

YouTube is revamping the Remix feature on its ever popular Shorts by allowing users to integrate their favorite music videos into content.

This update consists of four tools: Sound, Collab, Green Screen, and Cut. The first one lets you take a track from a video for use as background audio. Collab places a Short next to an artist’s content so you can dance alongside it or copy the choreography itself. Green Screen, as the name suggests, allows users to turn a music video into the background of a Short. Then there’s Cut, which gives creators the ability to take a five-second clip from the original source to add to their own content, and repeat as often as they like.

It’s important to mention that none of these tools are brand new to the platform, as they were actually introduced years prior. Green Screen, for instance, hit the scene back in 2022, although it was only available on non-music videos.

Remixing

The company is rolling out the remix upgrade to all users, as confirmed by 9To5Google, but it’s releasing it incrementally. On our Android device, we only received part of the update, as most of the tools are missing. Either way, implementing one of the remix features is easy to do. The steps are exactly the same across the board, with the only difference being the option you choose.

To start, find the music video you want to use on the mobile app and tap the Remix button. It’ll be found in the description carousel. Next, select the remix tool. At the time of this writing, we only have access to Sound so that’ll be the one we’ll use.

YouTube Short's new Remix tool for Music Videos

(Image credit: Future)

You will then be taken to the YouTube Shorts editing page where you highlight the 15-second portion you want to use in the video. Once everything’s sorted out, you’re free to record the Short with the music playing in the back.

Analysis: A leg up on the competition

The Remix feature’s expansion comes at a very interesting time. Rival TikTok recently lost access to the vast music catalog owned by Universal Music Group (UMG), meaning the platform can no longer host tracks by artists represented by the record label. This includes megastars like Taylor Swift and Drake. TikTok videos with “UMG-owned music” will be permanently muted although users can replace them with songs from other sources.

The breakup between UMG and TikTok was the result of contract negotiations falling through. Apparently, the social media platform was trying to “bully” the record label into accepting a bad deal that wouldn’t have adequately protected artists from generative AI and online harassment.  

YouTube, on the other hand, was more cooperative. The company announced last August it was working with UMG to ensure “artists and rights holders would be properly compensated for AI music.” So creators on YouTube are safe to take whatever songs they want from the label – for now. It's possible future negotiations between these two entities will turn sour down the line.

If you're planning on making YouTube Shorts, you'll need a smartphone with a good camera. Be sure to check out TechRadar's list of the best iPhone for 2024 if you need some recommendations.


Google’s Gemini AI can now handle bigger prompts thanks to next-gen upgrade

Google’s Gemini AI has only been around for two months at the time of this writing, and already, the company is launching its next-generation model dubbed Gemini 1.5.

The announcement post gets into the nitty-gritty explaining all the AI’s improvements in detail. It’s all rather technical, but the main takeaway is that Gemini 1.5 will deliver “dramatically enhanced performance.” This was accomplished with the implementation of a “Mixture-of-Experts architecture” (or MoE for short) which sees multiple AI models working together in unison. Implementing this structure made Gemini easier to train as well as faster at learning complicated tasks than before.
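Google hasn’t published Gemini 1.5’s internals, but the core Mixture-of-Experts idea can be sketched in a few lines: a small gating function scores each expert for a given input, and the output is the gate-weighted mix of the experts’ outputs. This toy Python version uses simple scalar functions as stand-in ‘experts’; it illustrates the routing concept only, not Google’s implementation:

```python
import math

def softmax(scores):
    """Normalize raw gate scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_output(x, experts, gate):
    """Toy Mixture-of-Experts: the gate scores each expert for input x,
    and the result is the gate-weighted mix of the expert outputs."""
    weights = softmax(gate(x))
    return sum(w * expert(x) for w, expert in zip(weights, experts))

# Two stand-in "experts" and a gate that favors the first for positive x
experts = [lambda x: 2 * x, lambda x: x + 10]
gate = lambda x: [x, -x]
```

In a real MoE model the experts are neural sub-networks, and the gate typically routes each token to only the top-scoring experts – that selective activation is what makes training and inference cheaper.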

There are plans to roll out the upgrade to all three major versions of the AI, but the only one being released today for early testing is Gemini 1.5 Pro. 

What’s unique about it is the model has “a context window of up to 1 million tokens”. Tokens, as they relate to generative AI, are the smallest pieces of data LLMs (large language models) use “to process and generate text.” Bigger context windows allow the AI to handle more information at once. And a million tokens is huge, far exceeding what GPT-4 Turbo can do. OpenAI’s engine, for the sake of comparison, has a context window cap of 128,000 tokens. 
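To put those context-window numbers in perspective, here’s a back-of-the-envelope check in Python. The four-characters-per-token ratio is a rough rule of thumb, not a real tokenizer, so treat the figures as illustrative only:

```python
def fits_in_context(text, window_tokens, chars_per_token=4):
    """Rough estimate of whether a document fits in a model's context
    window, using a crude characters-per-token heuristic."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= window_tokens

# A 400-page transcript at roughly 2,000 characters per page
transcript = "x" * (400 * 2000)  # ~200,000 estimated tokens

print(fits_in_context(transcript, 1_000_000))  # 1M-token window: True
print(fits_in_context(transcript, 128_000))    # 128K-token window: False
```

By this crude estimate, a document that sails into a 1-million-token window would overflow a 128,000-token one several times over.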

Gemini Pro in action

With all these numbers being thrown around, the question is: what does Gemini 1.5 Pro look like in action? Google made several videos showcasing the AI’s abilities. Admittedly, it’s pretty interesting stuff, as they reveal how the upgraded model can analyze and summarize large amounts of text according to a prompt.

In one example, they gave Gemini 1.5 Pro the over-400-page transcript of the Apollo 11 moon mission, showing that the AI could “understand, reason about, and identify” certain details in the document. The prompter asked the AI to locate “comedic moments” during the mission. After 30 seconds, Gemini 1.5 Pro managed to find a few jokes the astronauts cracked while in space, identifying who told each one and explaining any references made.

These analysis skills can be used for other modalities. In another demo, the dev team gave the AI a 44-minute Buster Keaton movie. They uploaded a rough sketch of a gushing water tower and then asked for the timestamp of a scene involving a water tower. Sure enough, it found the exact part ten minutes into the film. Keep in mind this was done without any explanation about the drawing itself or any other text besides the question. Gemini 1.5 Pro understood it was a water tower without extra help.

Experimental tech

The model is not available to the general public at the moment. Currently, it’s being offered as an early preview to “developers and enterprise customers” through Google’s AI Studio and Vertex AI platforms for free. The company is warning testers they may experience long latency times since it is still experimental. There are plans, however, to improve speeds down the line.

We reached out to Google asking for information on when people can expect the launch of Gemini 1.5 and Gemini 1.5 Ultra plus the wider release of these next-gen AI models. This story will be updated at a later time. Until then, check out TechRadar's roundup of the best AI content generators for 2024.


These new smart glasses can teach people about the world thanks to generative AI

It was only a matter of time before someone added generative AI to an AR headset, and taking the plunge is start-up Brilliant Labs with its recently revealed Frame smart glasses.

Looking like a pair of Where’s Waldo glasses (or Where’s Wally to our UK readers), the Frame houses a multimodal digital assistant called Noa. It consists of multiple AI models from other brands working together to help users learn about the world around them, just by looking at something and then issuing a command. Let’s say you want to know more about the nutritional value of a raspberry. Thanks to OpenAI tech, you can command Noa to perform a “visual analysis” of the subject, with the read-out appearing on the outer AR lens. Additionally, it can offer real-time language translation via Whisper AI.

The Frame can also search the internet via its Perplexity AI model. Search results will even provide price tags for potential purchases. In a recent VentureBeat article, Brilliant Labs claims Noa can provide instantaneous price checks for clothes just by scanning the piece, or fish out home listings for new houses on the market; all you have to do is look at the house in question. It can even generate images on the fly through Stable Diffusion, according to ZDNET.

Evolving assistant

Going back to VentureBeat, its report offers deeper insight into how Noa works.

The digital assistant is always on, constantly taking in information from its environment. And it’ll apparently “adopt a unique personality” over time. The publication explains that upon activating for the first time, Noa appears as an “egg” on the display. Owners will have to answer a series of questions, and upon finishing, the egg hatches into a character avatar whose personality reflects the user. As the Frame is used, Noa analyzes the interactions between it and the user, evolving to become better at tackling tasks.

Brilliant Labs Frame exploded view

(Image credit: Brilliant Labs)

An exploded view of the Frame can be found on Brilliant Labs’ official website providing interesting insight into how the tech works. On-screen content is projected by a micro-OLED onto a “geometric prism” in the lens. 9To5Google points out this is reminiscent of how Google Glass worked. On the nose bridge is the Frame’s camera sitting on a PCBA (printed circuit board assembly). 

At the end of the stems, you have the batteries inside two big hubs. Brilliant Labs states the frames can last a whole day, and to charge them, you’ll have to plug in the Mister Power dongle, inadvertently turning the glasses into a high-tech Groucho Marx impersonation.

Brilliant Labs Frame with Mister Power

(Image credit: Brilliant Labs)

Availability

Currently open for pre-order, the Frame will run you $350 a pair. It’ll be available in three colors: Smokey Black, Cool Gray, and the transparent H20. You can opt for prescription lenses, though doing so will bump the price tag to $448. There's a chance Brilliant Labs won’t have your exact prescription, in which case it recommends selecting the option that most closely matches your actual prescription. Shipping is free, and the first batch rolls out April 15.

It appears all of the AI features are subject to a daily usage cap, though Brilliant Labs has plans to launch a subscription service lifting the limit. We reached out to the company for clarification and asked several other questions, such as exactly how the Frame receives input. This story will be updated at a later time.

Until then, check out TechRadar's list of the best VR headsets for 2024.


Your Microsoft OneDrive storage is about to get smarter thanks to this time-saving Copilot feature

Microsoft’s on fire recently with the addition of some super-useful features thanks to its artificial intelligence assistant Copilot, and it looks like OneDrive is finally getting a much-needed AI boost. Soon, you’ll be able to search through your files without having to open them to find the relevant info simply by asking Copilot the question you want answered. 

Say you’re looking for a specific figure or quote but you have too many files to start searching, or you’re like me and don’t organize anything into folders at all (oops). Instead of opening every document and scanning through to find the specific bit of info you’re looking for, you’ll be able to pull up Copilot and tell it what you want to find. You could ask it to find a specific bit of info from a lecture presentation, or group project, and Copilot will go through the files and provide the relevant answers. 

According to MSPoweruser, this feature will work across multiple file types including DOC, DOCX, PDF, TXT, and more, so you won’t be restricted to just Word documents.

The feature is included in Microsoft’s 365 roadmap, due to be released to users sometime in May 2024. Hopefully, we’ll see this trickle down to Microsoft’s free Office for Web suite (formerly known as Office Online) which includes an in-browser version of Microsoft Word and 5GB of OneDrive cloud storage. 

A win for the unorganized girlies

This feature is enough to entice me away from Google Drive for the convenience alone. There’s nothing worse than having to crawl through your folders and files to find something you’re looking for.

I would have appreciated this feature when I was at university, especially with how many notes and textbooks I had scattered around my school OneDrive account. By bringing Copilot into the mix, I could have found whatever I was looking for so much faster and saved myself a fair amount of panic.

If you work in an industry where you’re constantly dealing with new documents full of critical information every day, or you’re a student consistently downloading research papers and textbooks, this new addition to Copilot's nifty AI-powered skill set is well worth keeping an eye out for.

While I am disappointed this feature will be locked behind the Microsoft 365 subscription, it’s not surprising – Microsoft is investing a lot of time and money into Copilot, so it makes sense that it would use its more advanced features to encourage people to pay to subscribe to Microsoft 365. However, there’s a danger that if it paywalls all the most exciting features, Copilot could struggle to be as popular as it deserves to be. Microsoft won’t want another Clippy or Cortana on its hands.


YouTube has arrived on the Apple Vision Pro, though it’s not thanks to Google

There's been a lot of chatter this week about just how many apps are available inside the Apple Vision Pro, and it seems third-party developers are taking up the challenge of filling in any notable gaps in the app selection.

As per MacRumors, developer Christian Selig has released a dedicated YouTube app for the Vision Pro, called Juno for YouTube. Notably, it's the only YouTube client on the headset, as Google hasn't released an official app.

Costing $4.99, the app comes with a number of useful features, including options to resize and reposition the playback window, as well as dim the area surrounding the video for that virtual cinema feeling inside mixed reality.

As we already know, Google has specifically said it doesn't currently have plans to develop a YouTube app for the Vision Pro. For the time being, the only official way to get at YouTube in the Apple headset is to load it up through Safari.

There might be an app for that

Juno for YouTube app

It’s a better experience than the YouTube website (Image credit: Juno for YouTube)

Initial worries over app availability on the Vision Pro were somewhat assuaged as the device went on sale, with news that more than 600 apps are on the way soon (though the current selection is much smaller).

We've already seen Adobe make the leap into mixed reality, with its Firefly AI app. You can use it to create images generated by artificial intelligence, from any text prompt – with the end results floating in front of your eyes.

However, there are notable holdouts, including Netflix and Spotify, as well as Google. While YouTube does allow developers some access to its inner workings, that's not the case with Netflix or Spotify, so don't expect third-party clients for them.

Clearly the limited number of people who actually have an Apple Vision Pro is making software developers think twice about whether or not to support the hardware – but based on our time with the headset, it's likely to get more popular very quickly.
