Android 15’s new Bluetooth tool may alter the way users interact with their phone

Recent Android 14 betas have been a treasure trove of information about possible features coming to Android 15. We learned not too long ago that the operating system may introduce Private Space for securing sensitive information on a smartphone. Now new details are emerging on future changes that could alter how users interact with their mobile devices.

News site Android Authority unearthed these details inside the Android 14 QPR2 patch from early March. Several lines of code reference something called “Bluetooth Auto-On”, which, according to the publication, will automatically reactivate Bluetooth connectivity after it has been turned off. If someone switches Bluetooth off, a toggle option will reportedly appear giving the phone permission to turn it back on the following day. Android 15 will also reportedly include text reminding users that keeping the connection on is important for certain features, namely Quick Share and Find My Device.

Of course, this is all optional. You’ll still be able to deactivate Bluetooth any time you want for as long as you want without having to toggle anything. 

Insight into Bluetooth Auto-On doesn’t stop there, as industry insider Mishaal Rahman has dug up more information from the Android Open Source Project (AOSP). Rahman states that only system apps will work with the tool; it’s not going to be compatible with third-party software. It also may not be exclusive to Android 15. There’s a chance the update could come to older OS versions, although it won’t work on all devices.

Adaptive screens

The second feature is “Adaptive Timeout”, which was discovered within a developer preview for Android 15. Very little is known about it, as the lines of code don’t reveal much.

But they do say it will automatically turn off your “screen early if you’re not using your device.” On the surface, this may sound like Screen Timeout, although Rahman states it’s something totally different. Judging by its description, it operates similarly to Attention Aware on the iPhone.

Adaptive Timeout would utilize some sort of metric, either detecting your face through the camera or collecting input through sensors, to know whether you’re directly interacting with the smartphone. If you stop using the device, the feature turns off the display. Screen Timeout, by comparison, is just a timer: the screen stays on until the timer runs out even if you’re not interacting with the phone. An argument could also be made that, due to its proactive nature, the tool can extend a device's battery life and protect your data from prying eyes.

What's interesting about Adaptive Timeout is it may be an exclusive update for Google Pixel. Rahman says he found evidence of the tool referencing a Google namespace, suggesting it won’t be available on the “open-source version of Android”.

As always, take everything you see here with a grain of salt. Things can always change. And be sure to check out TechRadar's list of the best Android phones if you're looking to upgrade.  


Chrome’s new Declutter tool may soon help manage your 100-plus open tabs

Recent evidence suggests Chrome on Android may receive a new Tab Declutter tool to help people manage their many open tabs. Hints of this feature were discovered in lines of code on Google’s Chromium platform by 9To5Google. It’s unknown exactly how Tab Declutter will work, although there is enough information to paint a picture.

According to the report, tabs that have been unused for a long period of time “will automatically” be put away in an archive. You can then go over to the archive editor, look at what’s there, and decide for yourself whether you want to delete a tab or restore it. 
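
Purely as an illustration of the flow described above, and not a reflection of Chromium's actual code, the sketch below archives any tab that hasn't been touched for a set period and lets the user restore it later; the 21-day threshold and all the names here are invented placeholders.

```python
# Illustrative sketch of an archive-then-review flow; the threshold and names
# are invented placeholders, not anything from Chromium.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

ARCHIVE_AFTER = timedelta(days=21)  # hypothetical inactivity threshold

@dataclass
class Tab:
    url: str
    last_used: datetime

@dataclass
class Browser:
    open_tabs: list = field(default_factory=list)
    archive: list = field(default_factory=list)

    def declutter(self, now: datetime) -> None:
        # Move long-unused tabs out of the tab strip and into the archive.
        stale = [t for t in self.open_tabs if now - t.last_used > ARCHIVE_AFTER]
        for tab in stale:
            self.open_tabs.remove(tab)
            self.archive.append(tab)

    def restore(self, tab: Tab) -> None:
        # The user reviews the archive and brings a tab back.
        self.archive.remove(tab)
        self.open_tabs.append(tab)
```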

Not only could Tab Declutter help people manage a messy browser, but it might also boost Chrome’s performance. All those open tabs can eat away at a device's RAM, slowing things down to a crawl.

This isn’t the first time Google has worked on improving tab management for its browser. Back in January, the company implemented an organizer tool harnessing the power of AI to instantly group tabs together based on a certain topic.  

These efforts even go as far back as 2020, when the tech giant began developing a feature that would recommend closing certain tabs if they’d been left alone for an extended period of time. It was similar to the new Declutter tool, though much less aggressive, since it wouldn’t archive anything. Ultimately, nothing came of it; however, it seems Google is now revisiting the old idea.

Speculating on all the open tabs

As 9To5Google points out, this has the potential to “become one of the most annoying features” the company has ever made. Imagine Chrome whisking away tabs you still wanted to look at without telling you. It could get frustrating pretty fast.

Additionally, would it be possible to set a time limit for when an unused page is allowed to be put away? Will there be an exception list telling Chrome to leave certain websites alone? We'll have the answer if and when this feature eventually goes live.

We have no word on when Tab Declutter will launch. It’s also unknown whether Chrome on iOS is scheduled to receive a similar upgrade to the Chromium-based Android edition. It's possible Android devices will get first dibs, then iPhones, or the iPhone may be left out in regions where Chrome isn't allowed to use its own Chromium-based engine.

9To5Google speculates the update will launch in early May as part of Chrome 125. This seems a little early if it’s still in the middle of development. Late summer to early autumn is more plausible, but we could be totally wrong. We’ll just have to wait.

Until we get more news, check out TechRadar's roundup of the best Chromebooks for 2024.


OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given “visual artists, designers, creative directors, and filmmakers” access to Sora, and showed off their efforts in a “first impressions” blog post.

While all of the films, which range in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI's Artist In Residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.

OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI Sora / Don Allen Stevenson III)

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.


Microsoft announces Garnet – a new open source tool that could make apps run faster

Microsoft has announced a next-gen open-source cache-store system, Garnet, which it claims will bring major advances in making apps and services run faster. A cache store keeps frequently used data in fast memory so it can be stored and retrieved quickly, which helps optimize a system’s performance.

According to Microsoft, it’s already deploying Garnet across a range of its products and services, such as the Windows & Web Experiences Platform, Azure Resource Manager, and Azure Resource Graph, which should help those apps and services run faster.

In a surprising turn, it’s also made Garnet open source and available for free on GitHub, a contrast with Microsoft’s previously ambivalent (and at times downright hostile) approach to open source.

Microsoft's motivations for developing Garnet

Microsoft goes into detail about Garnet and what it’s been able to achieve on the Microsoft Research Blog, explaining that it can take a pretty big toll on most existing devices, as it needs particularly powerful hardware to achieve its full potential.

The good news is that most modern PCs and laptops should come with hardware that's capable of taking advantage of Garnet, so hopefully soon most people using Windows 10 or Windows 11 will be able to make use of this innovative new tech.

In its blog post, Microsoft explains that it’s been working on a remote cache store since 2021, which would replace existing cache stores – and this work has resulted in Garnet. In a very welcome move, Microsoft has also opened up Garnet to anyone interested in learning about, implementing, and contributing to the tech on GitHub, stating that it hopes others can build on its work and expand what Garnet can do, as well as encouraging further academic research and collaboration.

For app and software developers, the problems with legacy (read: older) cache-store systems include that they might not be easy to upgrade with new features, and they might not work well across a variety of platforms and operating systems. Microsoft suggests that Garnet, being open source, doesn’t have problems like these, and that it can lead to better-performing and faster apps.
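
For a rough sense of what working with a cache store looks like in practice, here is a minimal cache-aside sketch in Python. It assumes a Garnet server is listening locally on the default port and that, as Microsoft's announcement indicates, Garnet speaks the same RESP wire protocol as Redis, so an ordinary Redis client library can talk to it; the database lookup below is a hypothetical placeholder.

```python
# Minimal cache-aside sketch. Assumes a Garnet server on localhost:6379 that
# speaks the RESP protocol, so the standard redis-py client can talk to it.
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_profile_from_database(user_id: str) -> str:
    # Hypothetical stand-in for a slow database query.
    return f"profile-for-{user_id}"

def get_user_profile(user_id: str) -> str:
    key = f"user:{user_id}"
    cached = cache.get(key)            # fast path: value already in the cache store
    if cached is not None:
        return cached
    profile = load_profile_from_database(user_id)
    cache.set(key, profile, ex=300)    # keep it in the cache for five minutes
    return profile

print(get_user_profile("42"))
```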

It’s to Microsoft’s credit that it’s opened Garnet up to the public in this way, showing both a willingness to learn from others through direct collaboration and a great degree of confidence, in that it’s prepared to offer up its innovations for analysis. It's certainly a nice change from the anti-open source Microsoft of old. Hopefully, users can start to see real-world benefits from Garnet in the near future.


Adobe’s new AI music tool could make you a text-to-musical genius

Adobe is getting into the music business as the company is previewing its new experimental generative AI capable of making background tracks.

It doesn’t have an official name yet; for now, the tech is referred to as Project Music GenAI. The way it works, according to Adobe, is that you enter a text prompt into the AI describing what you want to hear, be it “powerful rock,” “happy dance,” or “sad jazz”. Additionally, users will be able to upload music files to the generative engine for further manipulation. There will even be some editing tools in the workflow for on-the-fly adjustments.

If any of this sounds familiar to you, that’s because we’ve seen this type of technology multiple times before. Last year, Meta launched MusicGen for creating short instrumentals and Google opened the doors to its experimental audio engine called Instrument Playground. But what’s different about Adobe’s tool is that it offers easier, yet still robust, editing – as far as we can tell.

Project Music GenAI isn’t publicly available. However, Adobe did recently publish a video on its official YouTube channel showing off the experiment in detail. 

Adobe in concert

The clip primarily follows a researcher at Adobe demonstrating what the AI can do. He starts by uploading the song Habanera from Georges Bizet’s opera Carmen and then proceeds to change the melody via a prompt. In one instance, the researcher instructed Project Music to make Habanera sound like an inspirational film score. Sure enough, the output became less playful and more uplifting. In another example, he gave the song a hip-hop-style accompaniment.

When it comes to generating fresh content, Project Music can even make songs with different tempos and structures. There is a clear delineation between the intro, the verse, the chorus, and other parts of the track. It can also create indefinitely looping music for videos, as well as fade-outs for the outro.

No experience necessary

These editing abilities may make Adobe’s Project Music better than Instrument Playground. Google’s engine has its own editing tools; however, they’re difficult to use, and it seems you need some production experience to get the most out of Instrument Playground. Project Music, on the other hand, aims to be more intuitive.

And if you're curious to know, Meta's MusicGen has no editing tools. To make changes, you have to remake the song from scratch.

In a report by The Verge, Adobe states the current demo utilizes “public domain content” for content generation. It’s not totally clear whether people will be able to upload their own files to the final release. Speaking of which, a launch date for Project Music has yet to be revealed, although Adobe will be holding its Summit event in Las Vegas beginning March 26. We’ve reached out to the company for more information and will update this story when we hear back.

In the meantime, check out TechRadar's list of the best audio editors for 2024.


What is OpenAI’s Sora? The text-to-video tool explained and when you might be able to use it

ChatGPT maker OpenAI has now unveiled Sora, its artificial intelligence engine for converting text prompts into video. Think Dall-E (also developed by OpenAI), but for movies rather than static images.

It's still very early days for Sora, but the AI model is already generating a lot of buzz on social media, with multiple clips doing the rounds – clips that look as if they've been put together by a team of actors and filmmakers.

Here we'll explain everything you need to know about OpenAI Sora: what it's capable of, how it works, and when you might be able to use it yourself. The era of AI text-prompt filmmaking has now arrived.

OpenAI Sora release date and price

In February 2024, OpenAI Sora was made available to “red teamers” – that's people whose job it is to test the security and stability of a product. OpenAI has also now invited a select number of visual artists, designers, and movie makers to test out the video generation capabilities and provide feedback.

“We're sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon,” says OpenAI.

In other words, the rest of us can't use it yet. For the time being there's no indication as to when Sora might become available to the wider public, or how much we'll have to pay to access it. 

Two dogs on a mountain podcasting (Image credit: OpenAI)

We can make some rough guesses about the timescale based on what happened with ChatGPT. Before that AI chatbot was released to the public in November 2022, it was preceded by InstructGPT earlier that year. Also, OpenAI's DevDay typically takes place annually in November.

It's certainly possible, then, that Sora could follow a similar pattern and launch to the public at a similar time in 2024. But this is currently just speculation and we'll update this page as soon as we get any clearer indication about a Sora release date.

As for price, we similarly don't have any hints of how much Sora might cost. As a guide, ChatGPT Plus – which offers access to the newest Large Language Models (LLMs) and Dall-E – currently costs $20 (about £16 / AU$30) per month.

But generating video with Sora demands significantly more compute power than generating a single image with Dall-E, and the process also takes longer. So it still isn't clear exactly how well Sora, which is effectively still a research project, might convert into an affordable consumer product.

What is OpenAI Sora?

You may well be familiar with generative AI models – such as Google Gemini for text and Dall-E for images – which can produce new content based on vast amounts of training data. If you ask ChatGPT to write you a poem, for example, what you get back will be based on lots and lots of poems that the AI has already absorbed and analyzed.

OpenAI Sora is a similar idea, but for video clips. You give it a text prompt, like “woman walking down a city street at night” or “car driving through a forest” and you get back a video. As with AI image models, you can get very specific when it comes to saying what should be included in the clip and the style of the footage you want to see.


To get a better idea of how this works, check out some of the example videos posted by OpenAI CEO Sam Altman – not long after Sora was unveiled to the world, Altman responded to prompts put forward on social media, returning videos based on text like “a wizard wearing a pointed hat and a blue robe with white stars casting a spell that shoots lightning from his hand and holding an old tome in his other hand”.

How does OpenAI Sora work?

On a simplified level, the technology behind Sora is the same technology that lets you search for pictures of a dog or a cat on the web. Show an AI enough photos of a dog or cat, and it'll be able to spot the same patterns in new images; in the same way, if you train an AI on a million videos of a sunset or a waterfall, it'll be able to generate its own.

Of course there's a lot of complexity underneath that, and OpenAI has provided a deep dive into how its AI model works. It's trained on “internet-scale data” to know what realistic videos look like, first analyzing the clips to know what it's looking at, then learning how to produce its own versions when asked.

So, ask Sora to produce a clip of a fish tank, and it'll come back with an approximation based on all the fish tank videos it's seen. It makes use of what are known as visual patches, smaller building blocks that help the AI to understand what should go where and how different elements of a video should interact and progress, frame by frame.

Sora starts messier, then gets tidier (Image credit: OpenAI)

Sora is based on a diffusion model, where the AI starts with a 'noisy' response and then works towards a 'clean' output through a series of feedback loops and prediction calculations. You can see this in the frames above, where a video of a dog playing in the snow turns from nonsensical blobs into something that actually looks realistic.
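
As a purely illustrative sketch of that idea (and not Sora's actual code), the toy loop below starts from random noise and repeatedly nudges it toward a 'clean' target; in a real diffusion model, the denoise step is a large learned network that predicts which noise to remove at each stage.

```python
# Toy illustration of diffusion-style denoising: pure noise is refined step by
# step into a clean output. denoise() here just blends toward a known target,
# purely for illustration; real models learn this step from data.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))        # stand-in for the "clean" frame we want
frame = rng.normal(size=(8, 8))    # start from pure noise

def denoise(noisy, step, total_steps):
    blend = (step + 1) / total_steps
    return (1 - blend) * noisy + blend * target

total_steps = 50
for step in range(total_steps):
    frame = denoise(frame, step, total_steps)

# The error shrinks toward zero as the feedback loop runs.
print(np.abs(frame - target).mean())
```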

And like other generative AI models, Sora uses transformer technology (the last T in ChatGPT stands for Transformer). Transformers use a variety of sophisticated data analysis techniques to process heaps of data – they can understand the most important and least important parts of what's being analyzed, and figure out the surrounding context and relationships between these data chunks.
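
To make that a little more concrete, here is a compact, self-contained sketch of the attention mechanism at the heart of transformers, written in plain NumPy. It is a simplified illustration rather than anything from Sora: real transformers stack many such layers with learned weights, but the core idea of re-weighting each data chunk by its relationship to every other chunk is the same.

```python
# Minimal scaled dot-product attention in NumPy, for illustration only.
import numpy as np

def attention(queries, keys, values):
    # Compare every query with every key, scale, and normalize into weights.
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the values: strongly related chunks
    # contribute more, weakly related chunks contribute less.
    return weights @ values

rng = np.random.default_rng(0)
sequence = rng.random((4, 8))  # four "data chunks", eight features each
print(attention(sequence, sequence, sequence).shape)  # -> (4, 8)
```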

What we don't fully know is where OpenAI found its training data from – it hasn't said which video libraries have been used to power Sora, though we do know it has partnerships with content databases such as Shutterstock. In some cases, you can see the similarities between the training data and the output Sora is producing.

What can you do with OpenAI Sora?

At the moment, Sora is capable of producing HD videos of up to a minute, without any sound attached, from text prompts. If you want to see some examples of what's possible, we've put together a list of 11 mind-blowing Sora shorts for you to take a look at – including fluffy Pixar-style animated characters and astronauts with knitted helmets.

“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt,” says OpenAI, but that's not all. It can also generate videos from still images, fill in missing frames in existing videos, and seamlessly stitch multiple videos together. It can create static images too, or produce endless loops from clips provided to it.

It can even produce simulations of video games such as Minecraft, again based on vast amounts of training data that teach it what a game like Minecraft should look like. We've already seen a demo where Sora is able to control a player in a Minecraft-style environment, while also accurately rendering the surrounding details.

OpenAI does acknowledge some of the limitations of Sora at the moment. The physics don't always make sense, with people disappearing or transforming or blending into other objects. Sora isn't mapping out a scene with individual actors and props; it's making an incredible number of calculations about where pixels should go from frame to frame.

In Sora videos people might move in ways that defy the laws of physics, or details – such as a bite being taken out of a cookie – might not be remembered from one frame to the next. OpenAI is aware of these issues and is working to fix them, and you can check out some of the examples on the OpenAI Sora website to see what we mean.

Despite those bugs, further down the line OpenAI is hoping that Sora could evolve to become a realistic simulator of physical and digital worlds. In the years to come, the Sora tech could be used to generate imaginary virtual worlds for us to explore, or enable us to fully explore real places that are replicated in AI.

How can you use OpenAI Sora?

At the moment, you can't get into Sora without an invite: it seems as though OpenAI is picking out individual creators and testers to help get its video-generating AI model ready for a full public release. How long this preview period is going to last, whether it's months or years, remains to be seen – but OpenAI has previously shown a willingness to move as fast as possible when it comes to its AI projects.

Based on the existing technologies that OpenAI has made public – Dall-E and ChatGPT – it seems likely that Sora will initially be available as a web app. Since its launch ChatGPT has got smarter and added new features, including custom bots, and it's likely that Sora will follow the same path when it launches in full.

Before that happens, OpenAI says it wants to put some safety guardrails in place: you're not going to be able to generate videos showing extreme violence, sexual content, hateful imagery, or celebrity likenesses. There are also plans to combat misinformation by including metadata in Sora videos that indicates they were generated by AI.


Apple could be working on a new AI tool that animates your images based on text prompts

Apple may be working on a new artificial intelligence tool that will let you create basic animations from your photos using a simple text prompt. If the tool comes to fruition, you’ll be able to turn any static image into a brief animation just by typing in what you want it to look like. 

According to 9to5Mac, Apple researchers have published a paper that details procedures for manipulating image graphics using text commands. The tool, Apple Keyframer, will use natural language text to tell the proposed AI system to manipulate the given image and animate it. 

Say you have a photo of the view from your window, with trees in the background and even cars driving past. From what the paper suggests, you’ll be able to type commands such as ‘make the leaves move as if windy’ into the Keyframer tool, which will then animate the specified part of your photo.

You may recognize the name ‘keyframe’ if you’re an Apple user, as it’s already part of Apple’s Live Photos feature – which lets you scrub through a Live Photo and select which frame (the keyframe) you want to be the actual still image for the photo.

Better late than never? 

Apple has been notably slow to jump onto the AI bandwagon, but that’s not exactly surprising. The company is known to play the long game and let others work out the kinks before it makes its move, as we’ve seen with its recent foray into mixed reality with the Apple Vision Pro (this is also why I have hope for a foldable iPhone coming soon).

I’m quite excited for the Keyframer tool if it does come to fruition, because it’ll put basic animation tools into the hands of every iPhone user, even those who might not know where to start with animation, let alone how to make their photos move.

Overall, the direction Apple seems to be taking with AI tools is a positive one. The Keyframer tool comes right off the back of Apple’s AI-powered image editing tool, which again reinforces a move towards improving the user experience rather than just putting out things that mirror the competition from companies like OpenAI, Microsoft, and Google.

I’m personally glad to see that Apple’s dive into the world of artificial intelligence tools isn’t just another AI chatbot like ChatGPT or Google Gemini, but rather a focus on tools that offer unique new features for iOS and macOS products. While this project is in the very early stages of inception, I’m still pretty hyped about the idea of making funny little clips of my cat being silly or creating moving memories of my friends with just a few word prompts.

As for when we’ll get our hands on Keyframer, unfortunately there’s no release date in sight just yet – but based on previous feature launches, Apple willingly revealing details at this stage indicates that it’s probably not too far off, and more importantly isn’t likely to get tossed aside. After all, Apple isn’t Google.


Apple working on a new AI-powered editing tool and you can try out the demo now

Apple says it plans on introducing generative AI features to iPhones later this year. It’s unknown what they are; however, a recently published research paper indicates one of them may be a new type of editing software that can alter images via text prompts.

It’s called MGIE, or MLLM-Guided Image Editing, where MLLM stands for multimodal large language model. The tech is the result of a collaboration between Apple and researchers from the University of California, Santa Barbara. The paper states MGIE is capable of “Photoshop-style [modifications]” ranging from simple tweaks like cropping to more complex edits such as removing objects from a picture. This is made possible by the MLLM, a type of AI capable of processing both “text and images” at the same time.

In its report, VentureBeat explains that MLLMs show “remarkable capabilities in cross-model understanding”, although they have not been widely implemented in image editing software despite their supposed efficacy.

Public demonstration

The way MGIE works is pretty straightforward. You upload an image to the AI engine and give it clear, concise instructions on the changes you want it to make. VentureBeat says people will need to “provide explicit guidance”. As an example, you can upload a picture of a bright, sunny day and tell MGIE to “make the sky more blue.” It’ll proceed to saturate the color of the sky a bit, but it may not be as vivid as you would like. You’ll have to guide it further to get the results you want. 

MGIE is currently available on GitHub as an open-source project. The researchers are offering “code, data, [pre-trained models]”, as well as a notebook teaching people how to use the AI for editing tasks. There’s also a web demo available to the public on the collaborative tech platform Hugging Face. With access to this demo, we decided to take Apple’s AI out for a spin.
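
If you'd rather drive the demo from a script than from the browser, the sketch below shows one way to call a Gradio-hosted Space with the gradio_client library. The Space ID, argument order, and endpoint name here are assumptions for illustration only, so check the Hugging Face demo page for the real values before running it.

```python
# Hedged sketch of calling the public MGIE demo via gradio_client rather than
# the browser UI. The Space ID and api_name below are assumptions; verify them
# against the Hugging Face demo page, as they may differ or change over time.
from gradio_client import Client, handle_file

client = Client("tsujuifu/ml-mgie")  # assumed Space ID, not confirmed
result = client.predict(
    handle_file("cat.jpg"),                             # local photo to edit
    "add a purple background with lightning strikes",   # plain-text instruction
    api_name="/predict",                                # assumed endpoint name
)
print(result)  # Gradio typically returns the path(s) to the generated output
```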

Our MGIE test results: the original cat photo, one with a new background, and one with a lightning background (Image credit: Cédric VT/Unsplash/Apple)

In our test, we uploaded a picture of a cat that we got from Unsplash and then proceeded to instruct MGIE to make several changes. In our experience, it did okay. In one instance, we told it to change the background from blue to red; however, MGIE instead made the background a darker shade of blue with static-like texturing. In another, we prompted the engine to add a purple background with lightning strikes, and it created something much more dynamic.

Inclusion in future iPhones

At the time of this writing, you may experience long queue times while attempting to generate content. If it doesn’t work, the Hugging Face page has a link to the same AI hosted over on Gradio, which is the one we used. There doesn't appear to be any difference between the two.

Now the question is: will this technology come to a future iPhone or iOS 18? Maybe. As alluded to at the beginning, Apple CEO Tim Cook has told investors that AI tools are coming to its devices later in the year, but he didn’t give any specifics. Personally, we can see MGIE morphing into the iPhone version of Google’s Magic Editor, a feature that can completely alter the contents of a picture. If you read the research paper on arXiv, that certainly seems to be the path Apple is taking with its AI.

MGIE is still a work in progress, and its outputs are not perfect. One of the sample images shows a kitten turning into a monstrosity. But we do expect the bugs to be worked out down the line. If you prefer a more hands-on approach, check out TechRadar's guide to the best photo editors for 2024.


Windows 11’s Snipping Tool could get new powers for taking screenshots – but is Microsoft in danger of overcomplicating things?

Windows 11’s Snipping Tool is set to get a handy feature to embellish screenshots, or at least it seems that way.

Leaker PhantomOfEarth discovered the new abilities in the app by tinkering with bits and pieces in version 11.2312.33.0 of Snipping Tool. As shown in the leaker's tweet, the functionality allows the user to draw shapes (and fill them with color) and lines.


That means you can highlight parts of screenshots by pointing with arrows – for an instructional step-by-step tutorial you’ve made with screen grabs, for example – or add different shapes as needed.

Note that this is not in testing yet, because as noted, the leaker needed to play with the app’s configuration to get it going. However, the hidden functionality does seem to be working fine, more or less, so it’s likely that a rollout to Windows 11 testers isn’t far off.


Analysis: A feature drive with core apps

While you could furnish your screenshots from Snipping Tool with these kinds of extras simply by opening the image in Paint, it’s handy to have this feature on tap to directly work on a grab without needing to go to a second app.

Building out some of the basic Windows 11 apps is very much becoming a theme for Microsoft of late. For example, Snipping Tool has recently been testing a ‘combined capture bar’ (for easily switching between capturing screenshots or video clips), as well as the ability to lift text straight from screenshots, which is really nifty in some scenarios.

Elsewhere, core apps like Paint and Notepad are getting an infusion of AI (with Cocreator and a rumored Cowriter addition), and there’s been a lot of work in other respects with Notepad such as adding tabs.

We think these initiatives are a good line of attack for Microsoft, although there are always folks who believe that simple apps like Snipping Tool or Notepad should be kept basic, and advanced functionality is in danger of cluttering up these streamlined utilities. We get where that sentiment comes from, but we don’t think Microsoft is pushing those boundaries yet.

Via Windows Central


Google’s Nearby Share tool appears to be adopting the name of Samsung’s similar utility, and we wonder what’s going on

Google has suddenly changed the name of its file-sharing tool from Nearby Share to Quick Share, which is what Samsung calls its own tool.

It’s a random move that has people scratching their heads, wondering what it could mean for Android in the future. The update appears to have been discovered by industry insider Kamila Wojciechowska, who shared her findings on X (the platform formerly known as Twitter). Wojciechowska revealed that she received a notification on her phone informing her of the change after installing Google Mobile Services version 23.50.13.

In addition to the new name, Google altered the logo for the feature as well as the user interface. The logo now consists of two arrows moving toward each other in a half-circle motion on a blue background. As for the UI, it now displays a Quick Settings tile for fast configuration, text explaining what the various options do, and an easier-to-use interface. There’s even a new option allowing people to restrict Quick Share visibility to just ten minutes at a time.

Wojciechowska states this update is not widely available, nor is the Nearby Share name change common among the people who do receive the patch; this may be something only a handful will see for now. She admits to being confused as to why Google is doing this, although the evidence she found suggests it could be the start of a new collaboration between the two companies.

Start of a new partnership

In its report, Android Authority claims Wojciechowska discovered proof of a “migration education flow” for Quick Share after digging through the Play Services app. This could suggest Google and Samsung are combining their file-sharing tools into one, or at the very least “making them interoperable”.

If this is the case, two of the biggest Android brands coming together to unify their services could be a huge benefit for users. Two currently separate but similarly behaving features might, if this evidence is anything to go by, coalesce into one that works with Galaxy and non-Galaxy smartphones alike. It's a quality-of-life upgrade that would reduce software clutter.

Android Authority makes it clear, though, that there isn’t any concrete proof that the two tools will merge; it’s just that, given the circumstances, that seems to be the case. Plus, the whole thing wouldn’t make much sense if it wasn’t the result of an upcoming collaboration. Think about it: why would Google give one of its mobile tools the same name as a competitor’s software? That might confuse users.

There has to be something more to it, so we’ve reached out to both companies for more information and will update this story when we hear back.

Until then, check out TechRadar's list of the best smartphones for 2023.
