Windows 11’s Photos app is getting more sophisticated with new Designer integration – but there’s a catch

Windows 11’s Photos app has been getting some impressive upgrades recently, and it looks like another one is on the way. The app is getting integration with Designer, Microsoft’s web app for creating professional-looking graphics, but there’s one little catch – it’ll prompt Designer to open in Edge (Microsoft’s web browser that comes preinstalled with Windows 11).

The new Designer integration joins a line-up of other features added over the last two years, including background blur, an AI magic eraser, and more. The new feature is accessible via an 'Edit in Microsoft Designer' option within the Photos app, represented by an icon that appears in the middle of the Preview window.

It’s not the most subtle position for it, and I think it’s fair to assume Microsoft is doing that because it wants users to click it. Doing so will take users to the Microsoft Designer website, which opens in an Edge window – and given that Edge isn’t the most popular of web browsers, this could irritate people who have set a different app, such as Chrome, as their default browser.

This development is still in the testing stages, according to Windows Latest, and is making its way through the Windows Insider Program. The feature can be found in Photos app version 2024.11040.16001.0, which is part of the Windows 11 24H2 preview build in the Canary channel. It should also be available in the Windows 11 Insider Dev channel build, provided the Photos app is on the same version.

Apparently, you can also prompt the Designer web app to open by right-clicking the image while in Preview in the Photos app, and clicking ‘Edit in Designer online’ in the menu that appears.

Woman relaxing on a sofa, holding a laptop in her lap and using it

(Image credit: Shutterstock/fizkes)

The apparent state of the new feature

When it tried to activate the new feature, Windows Latest hit a wall: it was presented with a blank canvas in Designer, rather than the image it wanted to edit. Hopefully this is an anomaly or an error, and the image you’re viewing in Preview in the Photos app will open up in Designer when the feature is fully rolled out in a Windows update.

Windows Latest made several attempts to get the feature to function as intended, but to no avail, and I would hope that Microsoft takes this feedback on board, especially if it’s a widespread issue. You can import the image manually once the Designer web app is open, but that defeats the purpose of having an easily accessible option in the Photos app.

Users can edit their image in Designer, but only if they’ve signed into their Microsoft account. Microsoft wrote about the feature in an official Windows Blogs post, explaining that it’s currently being tested in the US, UK, Australia, Ireland, India, and New Zealand.

Having various image editing tools scattered across the Photos app, the Designer web app, and the Paint app doesn’t make things easy for Windows users. People like accessing all the relevant tools from whatever app they’re currently using instead of having to memorize which app has what exclusive feature. 

The approach has been called ‘inconsistent’ by Windows Latest, and I would bet that it’s not alone in that opinion. While it’s clear that Microsoft wants to get people using its new AI-powered tools, the company would be much better served if it made them easier to access through one powerful program, rather than scattering them around Windows 11.


ChatGPT’s newest GPT-4 upgrade makes it smarter and more conversational

AI just keeps getting smarter: another significant upgrade has been pushed out for ChatGPT, its developer OpenAI has announced, specifically to the GPT-4 Turbo model available to those paying for ChatGPT Plus, Team, or Enterprise.

OpenAI says ChatGPT will now be better at writing, math, logical reasoning, and coding – and it has the charts to prove it. The release is labeled with the date April 9, and it replaces the GPT-4 Turbo model that was pushed out on January 25.

Judging by the graphs provided, the biggest jumps in capabilities are in mathematics and GPQA, or Graduate-Level Google-Proof Q&A – a benchmark based on multiple-choice questions in various scientific fields.

According to OpenAI, the new and improved ChatGPT is “more direct” and “less verbose” too, and will use “more conversational language”. All in all, a bit more human-like, then. Eventually, the improvements should trickle down to non-paying users too.

More up to date

In an example given by OpenAI, AI-generated text for an SMS intended to RSVP to a dinner invite is half the length and much more to the point – with some of the less essential words and sentences chopped out for simplicity.

Another important upgrade is that the training data ChatGPT is based on now goes all the way up to December 2023, rather than April 2023 as with the previous model, which should help with topical questions and answers.

It's difficult to test AI chatbots from version to version, but in our own experiments with ChatGPT and GPT-4 Turbo we found it does now know about more recent events – like the iPhone 15 launch. As ChatGPT has never held or used an iPhone though, it's nowhere near being able to offer the information you'd get from our iPhone 15 review.

The momentum behind AI shows no signs of slowing down just yet: in the last week alone Meta has promised human-like cognition from upcoming models, while Google has made its impressive AI photo-editing tools available to more users.


Meta is on the brink of releasing AI models it claims have “human-level cognition” – hinting at new models capable of more than simple conversations

We could be on the cusp of a whole new realm of AI large language models and chatbots thanks to Meta’s Llama 3 and OpenAI’s GPT-5, as both companies emphasize the hard work going into making these bots more human. 

In an event earlier this week, Meta reiterated that Llama 3 will be rolling out to the public in the coming weeks, with Meta’s president of global affairs Nick Clegg stating: “Within the next month, actually less, hopefully in a very short period, we hope to start rolling out our new suite of next-generation foundation models, Llama 3.”

Meta’s large language models are publicly available, allowing developers and researchers free and open access to the tech to create their bots or conduct research on various aspects of artificial intelligence. The models are trained on a plethora of text-based information, and Llama 3 promises much more impressive capabilities than the current model. 

No official date for Meta’s Llama 3 or OpenAI’s GPT-5 has been announced just yet, but we can safely assume the models will make an appearance in the coming weeks. 

Smarten Up 

Joelle Pineau, the vice president of AI research at Meta, noted that “We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory.” OpenAI’s chief operating officer Brad Lightcap told the Financial Times in an interview that the next GPT version would show progress in solving difficult queries with reasoning.

So, it seems the next big push with these AI bots will be introducing the human element of reasoning and, for lack of a better term, ‘thinking’. Lightcap also said “We’re going to start to see AI that can take on more complex tasks in a more sophisticated way,” adding “We’re just starting to scratch the surface on the ability that these models have to reason.”

As tech companies like OpenAI and Meta continue working on more sophisticated and ‘lifelike’ human interfaces, it is both exciting and somewhat unnerving to think about a chatbot that can ‘think’ with reason and memory. Tools like Midjourney and Sora have shown just how good AI can be in terms of quality output, and Google Gemini and ChatGPT are great examples of how helpful text-based bots can be in everyday life.

With so many ethical and moral concerns around the current tools still unaddressed, I dread to think what kind of nefarious things could be done with more human-like AI models. Plus, you have to admit it’s all starting to feel a little like the opening of a sci-fi horror story.


Elon Musk brings controversial AI chatbot Grok to more X users in bid to halt exodus

Premium subscribers of all tiers for the X social media platform will soon gain access to its generative AI chatbot, Grok. Previously, the chatbot was only accessible to users who subscribed to the most expensive tier, Premium+, for $16 a month (approximately £12 or AU$25). That’s set to change, with X’s owner Elon Musk announcing in a post that availability of the large language model (LLM) is expanding to Basic and Premium tier X users.

Grok has been made open-source, reportedly to allow researchers and developers to leverage its capabilities for their own projects and research. If you’re interested in its code, you can check out the Grok-1 repository on GitHub. It’s the first major offering from Musk’s own AI venture, xAI.

As Dev Technosys, a mobile app and web development company, explains, Grok is Musk’s head-on challenge to ChatGPT, with the billionaire boasting that it beat ChatGPT 3.5 on multiple benchmarks. Musk describes the chatbot as having “a focus on deep understanding and humor,” and replying to questions with a “rebellious streak.” The model is trained on a massive dataset of text and code, including real-time text from X posts (which is what Musk points to as giving the bot a unique advantage), and text data scraped from across the web such as Wikipedia articles and academic papers.

Some industry observers think that this could be a push to boost X subscriber numbers, as analysis performed by Sensor Tower and reported by NBC indicates that visitors to the platform and user retention have been dropping. This has seemingly spooked many advertisers and hit the platform’s revenues, with apparently 75 of the top 100 US advertisers cutting X from their ad budgets entirely from October 2022 onwards. 

It does look like Musk is hoping that an exclusive perk like access to a well-informed and entertaining chatbot such as Grok will convince people to become subscribers, and keep those who are already subscribed on board.

Man wearing glasses, sitting at a table and using a laptop

(Image credit: Shutterstock/fizkes)

The Elon Musk-led ChatGPT that never was

Earlier this year, Musk filed a lawsuit against what is undoubtedly Grok’s largest competitor and the current industry leader in generative AI, OpenAI. He was an early investor in the company but departed after disagreements over several aspects of the business, including OpenAI’s mission and vision, as well as control and equity in the company. Now, Musk asserts that OpenAI has diverted from its non-profit goals and is prioritizing corporate profits, particularly for Microsoft (a key investor and collaborator), above its other objectives – violating a contract called the ‘Founding Agreement.’

According to Musk, the Founding Agreement laid down specific principles and commitments that OpenAI had agreed to follow. OpenAI has responded by denying that any such contract, or similar agreement, ever existed with Musk. Its overall response to the lawsuit so far has been dismissive, characterizing it as ‘frivolous’ and alleging that Musk is driven by his own business interests.

Apparently, OpenAI established early on that it would need to transition into a for-profit organization, as it wouldn’t be able to raise the funds necessary to build the things it was planning as a non-profit. OpenAI claims Musk was not only aware of these plans and consulted when they were being made, but also sought majority equity in OpenAI, wanted control of the board of directors at the time, and wanted to assume the position of CEO.

Elon Musk wearing a suit and walking in New York

(Image credit: Shutterstock/photosince)

Elon Musk's Grok gambit

Musk didn’t give an exact date for Grok’s wider rollout, but according to TechCrunch, it’s due sometime at the end of this week. Having seen what Musk considers funny, many people are morbidly curious about what sort of artificial intelligence Grok offers. One other aspect of Grok that might concern (or please, depending on your point of view) people is that it will respond to queries on topics that are largely off-limits for other chatbots, including controversial political ideas and conspiracy theories.

Real-time sourcing from X is one unique advantage Grok has, although before Musk’s takeover this would arguably have been a much bigger prize.

Despite my misgivings, Grok does give users another chatbot to choose from, and more competition in this emerging field could spur further innovation as companies battle to win users.


OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given Sora access to “visual artists, designers, creative directors, and filmmakers” and revealed their efforts in a “first impressions” blog post.

While all of the films, which range in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI's artist in residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.

OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI sora / Don Allen Stevenson III)

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.


Microsoft is pushing out Copilot AI to more Windows 11 users – ready or not – and Windows 10 will follow shortly

Microsoft just announced that Copilot is rolling out to more Windows 11 users right now, and it’ll also be coming to more Windows 10 users soon enough.

Neowin spotted the revelation in the Windows message center where Microsoft let us know that Copilot is coming to a wider audience – so, if you haven’t seen the AI assistant yet, you may well do soon enough.

Microsoft also let us know that from this week, it’s possible to use up to 10 queries with Copilot before you have to sign in to your Microsoft account. So, you can give the AI a bit of a try even if you don’t have an active Microsoft account on your Windows installation.

The ‘new wave’ of Copilot additions is happening now with Windows 11 (23H2 and 22H2), at least for consumers (with businesses, it will depend on admin policies). And eligible Windows 10 devices on Home or Pro versions (22H2) will start to get Copilot in this broader rollout later in March – so within the next week.

Microsoft tells us: “This current rollout phase will reach most of its targeted Windows 11 and 10 devices by the end of May.”

Meanwhile, Microsoft is also busy expanding Copilot’s repertoire of tricks regarding changing Windows settings, though it’s very slow going on that front thus far.


Analysis: AI for everyone

It sounds like most folks will have Copilot by the end of May, then. We’ve already seen it arrive on our Windows 10 PC, so that rollout is definitely already underway – it’s just about to step up to another level.

How will you know if you get Copilot? You can’t miss the colorful icon, which will appear in the taskbar on the far right (in the system tray). The icon is marked with a ‘Pre’ to denote that the AI is still in preview, so it’s still possible to experience wonky or odd behavior when running queries with Copilot.

While you can turn off the Copilot icon if you don’t want to see it, you can’t actually remove the AI from Windows as such (not yet) – it’ll still be lurking in the background, even if you never access it. That said, there are ways to extract Copilot from your Windows installation, such as using third-party apps (though we wouldn’t recommend doing so, as previously discussed).


We may have our first look at the more affordable Meta Quest 3 Lite

Open up our Meta Quest 3 review and you'll see the virtual reality headset has a not-unreasonable starting price of $499.99 / £479.99 / AU$799.99 – certainly way below the $3,499 (or higher) you'll pay for the Apple Vision Pro. However, it seems an even more affordable Meta headset is on the way.

After teasing what's being called the Meta Quest 3 Lite earlier this month, VR Panda (via Android Authority) has posted a picture of the rumored device on social media. As you might have expected, it looks a lot like the Meta Quest 3.

There are some differences though – the passthrough cameras on the outside of the wearable have apparently been ditched, which presumably means augmented reality effects and any kind of hand tracking control are off the agenda.

The team at Android Authority reckons that further savings could come through the use of a more affordable processor. Savings are certainly going to have to be made somewhere, if Meta is going to manage to significantly undercut the price of the Quest 3.

AR/VR for less

As with every leak, we should apply a certain level of caution before taking it at face value. Last year we did hear talk that a cheaper Meta Quest VR device was on the way, but there hasn't been an abundance of leaks and rumors about it – and of course company plans can always change when it comes to gadget launches.

The lack of any passthrough cameras would be a surprise, even if Meta is trying to save on production costs. The company has previously said that passthrough is likely to be a “standard feature” on future headsets, so make of that what you will.

If you want to strap a device to your head that mixes virtual reality and augmented reality, then the distinction between Apple and Meta is pretty clear – with the former company's offering costing seven times as much.

Affordability is a big selling point for Meta to emphasize, and it looks as though that gap will grow even more with the next device. As yet though, there's no indication about when a Meta Quest 3 Lite headset could see the light of day.


Here’s more proof Apple is going big with AI this year

The fact that Apple is going to debut a new generative artificial intelligence (AI) tool in iOS 18 this year is probably one of the worst-kept secrets in tech at the moment. Now, another morsel has leaked out surrounding Apple’s future AI plans, and it could shed light on what sort of AI features Apple fans might soon get to experience.

As first reported by Bloomberg, earlier this year Apple bought Canadian startup DarwinAI, with dozens of the company’s workers joining Apple once the deal was completed. It’s thought that Apple made this move in an attempt to bolster its AI capabilities in the last few months before iOS 18 is revealed, which is expected to happen at the company’s Worldwide Developers Conference (WWDC) in June.

Bloomberg’s report says that DarwinAI “has developed AI technology for visually inspecting components during the manufacturing process.” One of its “core technologies,” however, is making AI faster and more efficient, and that could be the reason Apple chose to open its wallet. Apple intends its AI to run entirely on-device, presumably to protect your privacy by not sharing AI inputs with the cloud, and this would benefit from DarwinAI’s tech. After all, Apple won’t want its flagship AI features to result in sluggish iPhone performance.

Apple’s AI plans

Siri

(Image credit: Unsplash [Omid Armin])

This is apparently just the latest move Apple has made in the AI arena. Thanks to a series of leaks and statements from Apple CEO Tim Cook, the company is known to be making serious efforts to challenge AI market leaders like OpenAI and Microsoft.

For instance, it’s been widely reported that Apple will soon unveil its own generative AI tool, which has been dubbed Ajax and AppleGPT during its development process. This could give a major boost to Apple’s Siri assistant, which has long lagged behind competitors such as Amazon Alexa and Google Assistant. As well as that, we could see generative AI tools debut in apps like Pages and Apple Music, rivaling products like Microsoft’s Copilot and Spotify’s AI DJ.

Tim Cook has dropped several hints regarding Apple’s plans, saying customers can expect to see a host of AI features “later this year.” The Apple chief has called AI a “huge opportunity” for his company and has said that Apple intends to “break new ground” in this area. When it comes to specifics, though, Cook has been far less forthcoming, presumably preferring to reveal all at WWDC.

It’s unknown whether Apple will have time to properly integrate DarwinAI’s tools into iOS 18 before it is announced to the world, but it seems certain it will make use of them over the coming months and years. It could be just one more piece of the AI puzzle that Apple is attempting to solve.


YouTube TV’s refreshed UI makes video watching more engaging for users

YouTube is redesigning its smart TV app to increase interactivity between people and their favorite channels.

In a recent blog post, YouTube described how the updated UI shrinks the main video a bit to make room for an information column housing a video’s view count, number of likes, description, and comments. Yes, despite the internet’s advice, people do read the YouTube comments section. The current layout has the same column, but it obscures the right side of the screen. YouTube states in its announcement that the redesign allows users to enjoy content “without interrupting [or ruining] the viewing experience.”

Don’t worry about this becoming the new normal. The Verge states in its coverage that the full-screen view will remain; it won’t be supplanted by the refresh or removed as the default setting. You can switch to the revamped interface at any time from within the video player screen. It’s totally up to viewers how they want to curate their experience.

Varying content

What you see on the UI’s column can differ depending on the type of content being watched. In the announcement, YouTube demonstrates how the layout works by playing a video about beauty products. Below the comments, viewers can check out the specific products mentioned in the clip and buy them directly.

Shopping on YouTube TV may appear seamless; however, The Verge claims it’ll be a little awkward. Instead of buying items directly from a channel, you'll have to scan a QR code that shows up on the screen. From there, you’ll be taken to a web page to complete the transaction. We contacted YouTube to double-check, and a company representative confirmed that is how it’ll work.

Besides shopping, the far-right column will also display live scores and stats for sports games. It’ll be part of the already existing “Views” suite of features, all of which can be found by triggering the correct on-screen filter.

The update will be released to all YouTube TV subscribers in the coming weeks. It won’t happen all at once, so keep an eye out for it when it arrives.

Be sure to check out TechRadar's recommendations for the best TVs for 2024 if you're looking to upgrade.


Apple’s Vision Pro successfully helps nurse assist in spinal surgery – and there’s more mixed-reality medical work on the way

In a fascinating adoption of technology, a surgical team in the UK recently used Apple’s Vision Pro to help with a medical procedure.

It wasn’t a surgeon who donned the headset, but Suvi Verho, the lead scrub nurse (also known as a theater nurse) at the Cromwell Hospital in London. Scrub nurses help surgeons by providing them with all the equipment and support they need to complete an operation – in this case, it was a spinal surgery. 

Verho told The Daily Mail that the Vision Pro used an app made by software developer eXeX to float “superimposed virtual screens in front of [her displaying] vital information”. The report adds that the mixed reality headset was used to help her prepare, keep track of the surgery, and choose which tools to hand to the surgeon. There’s even a photograph of the operation itself in the publication. 

Vision Pro inside surgery room

(Image credit: Cromwell Hospital/The Daily Mail)

Verho sounds like a big fan of the Vision Pro, stating, perhaps somewhat hyperbolically, “It eliminates human error… [and] guesswork”. Even so, anything that ensures operations go as smoothly as possible is A-OK in our book.

Syed Aftab, the surgeon who led the procedure, also had words of praise. Although he had never worked with Verho before, he said the headset turned an unfamiliar scrub nurse “into someone with ten years’ experience” working alongside him.

Mixed reality support

eXeX specializes in upgrading hospitals by implementing mixed reality, and this isn’t the first time one of its products has been used in an operating room. Last month, American surgeon Dr. Robert Masson used the Vision Pro with eXeX’s app to help him perform a spinal procedure. Again, it doesn’t appear he physically wore the headset, although his assistants did. They used the device to follow procedural guides from inside a sterile environment, something that was previously deemed “impossible.”

Dr. Masson had his own words of praise, stating that the combination of the Vision Pro and the eXeX tool enabled an “undistracted workflow” for his team. It’s unknown exactly which software was used; however, judging by the company’s website, it appears both Dr. Masson’s team and Nurse Verho utilized ExperienceX, a mixed reality app that gives technicians “a touch-free heads-up display”.

Apple's future in medicine

The Vision Pro’s future in medicine won’t just be for spinal surgeries. In a recent blog post, Apple highlighted several other medical apps harnessing visionOS. Medical corporation Stryker created myMako to help doctors plan for their patients’ joint replacement surgeries, while for medical students, Cinematic Reality by Siemens Healthineers offers “interactive holograms of the human body”.

These two and more are available to download from the App Store, although some of the software requires a connection to the developer’s platform to work. You can download them if you want to, but keep in mind they’re primarily intended for medical professionals.

If you're looking for a headset with a wider range of usability, check out TechRadar's list of the best VR headsets for 2024.
