Gaming browser Opera GX has augmented its AI assistant Aria with several new AI tools, including creating images, speaking out loud, summarizing conversations, and providing links relevant to the conversation.

Aria’s new ability to generate images from text prompts leverages Google’s Imagen 2 model. Users can generate up to 30 images per day, with the option to regenerate an image if they’re unsatisfied with the result. Beyond generating images, Aria has also gained the ability to understand and provide context for images uploaded by users. This allows users to upload an image and ask Aria questions about it.
Chatty Aria
The textual side of things has seen an upgrade as well with the new “Chat Summary” and “Links to Sources” features. As the name suggests, Chat Summary provides a concise recap of the conversation with Aria, helping users quickly review important points. This is particularly useful for lengthy interactions where users need to recall key details without scrolling through entire chat logs.
Meanwhile, the Links to Sources feature provides you with relevant links about the topics discussed with the AI. The idea is to help you delve deeper into subjects of interest, accessing additional information and verifying the AI’s responses. Such features are designed to make the chat interaction more comprehensive and resourceful.
Opera GX is a browser designed by Opera for gamers, with features like network bandwidth limiters to keep games uninterrupted, Twitch integration, and built-in gaming news feeds. Opera isn’t among the giants of browsers in terms of the number of users, like Google Chrome or Mozilla Firefox, but it does have a loyal community interested in more niche innovations as well as privacy features. Opera GX tends to be ahead in offering new tools that may eventually become mainstream in any browser, as with these AI interface and content creation features.
This latest update reflects the ongoing evolution of AI in enhancing user experiences across various digital platforms. All of the new Aria features are available to all Opera GX users now.
Anthropic's Claude AI chatbot has long been one of the best ChatGPT alternatives, and now a big update has taken it to another level – including beating OpenAI's GPT-4o model in some industry-standard benchmark tests.
Like Google Gemini, Claude is a family of three different AI models. The new Claude 3.5 Sonnet (which takes the baton from Claude 3 Sonnet) is the company's mid-tier AI model, sitting in between the Claude 3 Haiku (for smaller tasks) and the larger 'Opus' model, which is more like GPT-4.
This new Sonnet model now powers the browser-based Claude.ai and the Claude iOS app, both of which you can use right now for free. Like ChatGPT, there are Pro and Team subscriptions available for Claude that let you use it more intensely, but the free version gives you a taste of what it can do.
So what's new in Claude 3.5 Sonnet? The big improvements are its ability to handle vision-based tasks – for example, creating charts or transcribing handwritten notes – with Anthropic calling it “our strongest vision model yet”. The company also says that Sonnet “shows a marked improvement in grasping nuance, humor, and complex instructions”.
The upgraded Claude is also simply faster and smarter than before, edging out ChatGPT's latest GPT-4o model across many benchmarks, according to Anthropic. That includes setting new benchmark high scores for “graduate-level reasoning”, “undergraduate-level knowledge” and “coding proficiency”.
This means Claude could be a powerful new sidekick if you need help with creative writing, creating presentations and coding – particularly as it now has a new 'Artifacts' side window to help with refining its creations.
Ultimate homework assistant?
Another handy new feature in Claude 3.5 Sonnet is its so-called 'Artifacts' side window, which lets you see and tweak its visual creations without having to scroll back and forth through the chat.
For example, if you ask it to create a text document, graph or website design, these will appear in a separate window alongside your conversation. You can see an example of that in action in the video above, which shows off Claude's potential for creating graphs and presentations.
So how does this all compare to ChatGPT? One thing Claude doesn't have is a voice or audio powers – it's purely a text-based AI assistant. So if you're looking to chat casually with an AI assistant to brainstorm ideas, then ChatGPT remains the best AI tool around.
But Claude 3.5 Sonnet is undoubtedly a powerful new rival for text-based tasks and coding, edging out GPT-4o in benchmarks and giving us an increasingly well-rounded new option for both creative tasks and coding.
The headline AI battle might be ChatGPT vs Google Gemini vs Meta AI, but if you want a fast, smart AI sidekick to help with a variety of tasks, then it's well worth taking Claude 3.5 Sonnet for a spin in its browser version or iOS app.
One of Adobe Lightroom's most used editing tools, Content-aware Fill, just got a serious upgrade in the AI-powered shape of Generative Remove. The Adobe Firefly tool is branded “Lightroom's most powerful remove tool yet” and after a quick play ahead of its announcement, I'd have to agree.
Compared to Content-aware Fill, which will remain in Adobe's popular photo organizer and editor, Generative Remove is much more intelligent, plus it's non-destructive.
As you can see from the gif below, Generative Remove is used to remove unwanted objects in your image, plus it works a treat for retouching. You simply brush over the area you'd like to edit – whether that's removing a photo bomber or something as simple as creases in clothing – and then the new tool creates a selection of smart edits with the object removed (or retouch applied) for you to pick your favorite from.
If I were to use Lightroom's existing Content-aware Fill for the same image in the gif below and in the same way, or even for a much smaller selection, it would sample parts of the model's orange jacket and hair and place them in the selection. I'd then need to apply the tool repeatedly to remove these new unwanted details, and the new area would increasingly become an artifact-ridden mess.
Put simply, Lightroom's existing remove tool works okay for small selections but it regularly includes samples of parts of the image you don't want. Generative Remove is significantly faster and more effective for objects of all sizes than Content-aware Fill, plus it's non-destructive, creating a new layer that you can turn on and off.
From professionals wanting to speed up their workflow to casual users simply removing distant photo bombers with better results, Generative Remove is next-level Lightroom editing, and it gives you one less reason to use Adobe Photoshop. It is set to be a popular tool for photographers of all skill levels needing to make quick removal and retouching edits.
Generative Remove is available now as an early access feature across all Lightroom platforms: mobile, desktop, iPad, web and Classic.
Adobe also announced that its Lens Blur tool is being rolled out in full to Lightroom, with new automatic presets. As you can see in the gif above, presets include subtle, bubble, and geometric bokeh effects. For example, speckled and artificial light can be given a circular shape with the Lens Blur bubble effect.
Lens Blur is another AI tool, and it doesn't just apply a uniform-strength blur to the background; it uses 3D mapping to apply a different strength of blur based on how far away objects in the background are, for more believable results.
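If you're curious how that general idea works, here's a minimal sketch of depth-dependent blur – purely illustrative, not Adobe's actual implementation. It assumes you already have a depth map (Lightroom estimates its own and also shapes the bokeh, which this doesn't attempt), and the function name and parameters here are hypothetical:

```python
# Illustrative sketch of depth-dependent blur; not Adobe's Lens Blur.
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_dependent_blur(image, depth, max_sigma=8.0, levels=5):
    """image: float (H, W, 3) array; depth: (H, W) array in [0, 1], 0 = subject, 1 = far."""
    sigmas = np.linspace(0.0, max_sigma, levels)
    # Pre-compute progressively blurrier copies of the whole image
    # (sigma 0 leaves the image untouched, so the subject stays sharp).
    stack = np.stack([gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas])
    # For each pixel, pick the blur level nearest to its estimated distance.
    idx = np.rint(depth * (levels - 1)).astype(int)
    rows, cols = np.indices(depth.shape)
    return stack[idx, rows, cols]
```

Real tools blend smoothly between levels and simulate lens-shaped bokeh rather than a plain Gaussian blur, but the principle – stronger blur for more distant pixels – is the same.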
It's another non-destructive edit, too, meaning that you can add to or remove from the selection if you're not happy with the strength of blur applied, or if background objects get missed the first time around – for instance, it might mistake a lamp in the image above for a subject and not apply blur to it.
Having both the Generative Remove and Lens Blur AI tools to hand makes Lightroom more powerful than ever. Lens Blur is now generally available across the Lightroom ecosystem. Furthermore, there are other new tools added to Lightroom, and you can find out more on the Adobe website.
AI just keeps getting smarter: another significant upgrade has been pushed out for ChatGPT, its developer OpenAI has announced, specifically to the GPT-4 Turbo model available to those paying for ChatGPT Plus, Team, or Enterprise.
OpenAI says ChatGPT will now be better at writing, math, logical reasoning, and coding – and it has the charts to prove it. The release is labeled with the date April 9, and it replaces the GPT-4 Turbo model that was pushed out on January 25.
Judging by the graphs provided, the biggest jumps in capabilities are in mathematics and GPQA, or Graduate-Level Google-Proof Q&A – a benchmark based on multiple-choice questions in various scientific fields.
According to OpenAI, the new and improved ChatGPT is “more direct” and “less verbose” too, and will use “more conversational language”. All in all, a bit more human-like, then. Eventually, the improvements should trickle down to non-paying users too.
More up to date
For example, when writing with ChatGPT, responses will be more direct, less verbose, and use more conversational language. pic.twitter.com/PHxrmCtpyl – April 12, 2024
In an example given by OpenAI, AI-generated text for an SMS intended to RSVP to a dinner invite is half the length and much more to the point – with some of the less essential words and sentences chopped out for simplicity.
Another important upgrade is that the training data ChatGPT is based on now goes all the way up to December 2023, rather than April 2023 as with the previous model, which should help with topical questions and answers.
It's difficult to test AI chatbots from version to version, but in our own experiments with ChatGPT and GPT-4 Turbo we found it does now know about more recent events – like the iPhone 15 launch. As ChatGPT has never held or used an iPhone though, it's nowhere near being able to offer the information you'd get from our iPhone 15 review.
The momentum behind AI shows no signs of slowing down just yet: in the last week alone Meta has promised human-like cognition from upcoming models, while Google has made its impressive AI photo-editing tools available to more users.
Windows 11 could conceivably get what surely everyone would regard as an unwelcome addition, or at least a very controversial change: a potential new button for the taskbar that’s been uncovered in the innards of the desktop OS.
Apparently, Microsoft might just be mulling a ‘recommended’ button for the taskbar, and the theory is that it could surface various suggestions and thinly veiled adverts.
A new button is coming to the Windows 11 Taskbar right alongside system ones like Task View, Widgets, etc. It’s called “Recommended” & has all strings stripped from production, guess the UI team doesn’t want people to know. Concerned about recommendations becoming this integral 😬 pic.twitter.com/XnvPhcGhvP – April 9, 2024
The workings for such a button were discovered by well-known Microsoft leaker Albacore on X (formerly Twitter).
As Albacore makes clear, the button has had all its related strings (in the background) stripped from production builds, as if Microsoft’s interface team wants to keep this as far under the radar as possible.
As the leaker points out, the worry is that Microsoft is really thinking about making ‘suggestions,’ or nudges, recommendations, or whatever you want to call them, an integral part of the desktop, with a whole dedicated button on the taskbar.
Albacore notes that the description of the button is that it ‘controls visibility of recommendations on the taskbar’ and it’s filed under the term ‘taskbar sites,’ so the leaker theorizes that perhaps we could get website suggestions right on the taskbar, with the button’s icon changing to be the favicon of any given recommended site.
We’d further guess that maybe the idea would be to make these context-sensitive, so suggestions given would depend on what you’re doing in Windows 11 at the time – but that really is just guesswork.
Analysis: Paying twice for Windows 11 isn’t fair
As Albacore observes, we can hope that this might just be a piece of work from times gone by that has since been abandoned, but references to it are still hanging around in the background of Windows 11. It’s entirely possible nothing will come of this, in short, and even if Microsoft is currently exploring the idea, it might ditch the button before it even comes to testing.
Granted, even if a recommended taskbar button is realized, we’d assume that Windows 11 will come with the option to turn it off – but it’s still a worrying hint about the direction Microsoft is at least considering here with a future update. A dedicated button like this would be a huge move in the direction of what might be termed soft advertising (or nudging).
Sadly, a further recent development, highlighted by another leaker on X, PhantomOfEarth, is that the ‘Recommended’ section in Windows 11’s Start menu could be getting something called promoted apps.
Looks like the Start menu’s Recommended section will be getting app promotions, similar to suggested apps in Start in Windows 10. This can be toggled off from Settings (Show recommendations for tips, app promotions, and more). pic.twitter.com/zYYnTKs9qw – April 9, 2024
These would be apps Microsoft is actively promoting – make no bones about it, this is advertising, not badging or nudging – and again, it’s a dangerous move that very much runs the risk of annoying Windows 11 users. (Although it can be switched off – and remember, this is only in testing so far.)
Given all this, we very much get the feeling that advertising-focused recommendations along these lines are something Microsoft is seriously considering doing more of. And given the software giant’s history, that’s not surprising.
If you recall, recommended websites in the Start menu have long been a controversial topic – Microsoft previously toyed with the idea, abandoned it, but then brought it back in last year to the disbelief of many folks (ourselves included).
As we’ve discussed in-depth elsewhere, the pushy advertising around Microsoft’s Edge browser and Bing search has been taken to new and unacceptable levels in recent times.
How about we abandon this line of thinking entirely, Microsoft? Just stop with the incessant promotion of your own services, or indeed possibly third-party services or websites, within Windows 11. This is an operating system we, the users, pay for – so we shouldn’t have to suffer adverts in various parts of the Windows interface.
Either make Windows completely free and ad-supported, or charge for it, with no ads, suggestions, nudges, or other promotional tomfoolery to be seen anywhere in the OS. Or give us a choice of either route – but don’t make us pay twice for Windows 11, once with an initial lump sum fee to buy the OS, and then again with further ongoing monetization by way of a constant drip-feed of ads here, there and everywhere.
It looks like Google Drive could finally get a dark mode option for its web version, meaning perusing documents could become a lot easier on the eye for people who like their web pages muted rather than a searing white.
This information comes courtesy of 9to5Google, which reports that one of its Google accounts received an update that prompts users to try out a “New Dark mode” so that they can “enjoy Drive in the dark”. The option to trigger this dark mode is reportedly under the ‘Appearance’ option in the Settings menu of Drive, but I’ve not seen this in either my personal Drive or my workspace Drive.
However, from the images 9to5Google provided, it looks like the dark mode in Drive is rolling out bit by bit, and will be a fairly straightforward integration of the mode that one can find in Android, Chrome and other Google apps. No icons are changed in terms of design or color; rather, the background switches from white to black, with text flipping to white – all fairly standard.
There’s some difference in shading between the inner portion of Drive, where one will find documents and files, compared to the sidebar and search bar; the former is black, while the latter is slightly grey in tone.
Is this a huge deal? Not really, but for people who work late into the evening, the ability to switch from light mode to dark can be a blessing on tired eyes. And having a dark mode can offer a more pleasant experience for some people in general, regardless of the time of the day.
I’m definitely up for more dark mode options in Google services and beyond. Where once I thought dark mode was overhyped, I started using it on some of the best Android phones and my iPhone 15 Pro Max and haven't really looked back – it makes scrolling through various apps in bed more comfortable, though common sense would say you’re better off putting your phone down when in bed and picking up a book instead.
My hope is that by bringing dark mode to Drive, Google will better integrate dark options into more of its apps and services, especially Gmail, which has a dark mode but won’t apply it to actual emails in the web version, which is jarring. So fingers crossed for a more ubiquitous dark mode from Google.
YouTube is redesigning its smart TV app to increase interactivity between people and their favorite channels.
In a recent blog post, YouTube described how the updated UI shrinks the main video a bit to make room for an information column housing a video’s view count, number of likes, description, and comments. Yes, despite the internet’s advice, people do read the YouTube comments section. The current layout has the same column, but it obscures the right side of the screen. YouTube states in its announcement that the redesign allows users to enjoy content “without interrupting [or ruining] the viewing experience.”
Don’t worry about this becoming the new normal. The Verge states in its coverage that the full-screen view will remain; it won’t be supplanted by the refresh or removed as the default setting. You can switch to the revamped interface at any time from within the video player screen. It’s totally up to viewers how they want to curate their experience.
Varying content
What you see on the UI’s column can differ depending on the type of content being watched. In the announcement, YouTube demonstrates how the layout works by playing a video about beauty products. Below the comments, viewers can check out the specific products mentioned in the clip and buy them directly.
Shopping on YouTube TV may appear seamless; however, The Verge claims it’ll be a little awkward. Instead of buying items directly from a channel, you'll have to scan a QR code that shows up on the screen. From there, you will be taken to a web page where you can complete the transaction. We contacted YouTube to double-check, and a company representative confirmed that is how it’ll work.
Besides shopping, the far-right column will also display live scores and stats for sports games. It’ll be a part of the already existing “Views suite of features,” all of which can be found by triggering the correct on-screen filter.
The update will be released to all YouTube TV subscribers in the coming weeks. It won’t happen all at once, so keep an eye out for the patch when it arrives.
Be sure to check out TechRadar's recommendations for the best TVs for 2024 if you're looking to upgrade.
I finally look slightly less creepy in my Apple Vision Pro mixed reality headset. Oh, no, I don't mean I look less like an oddball when I wear it, but if you happen to call me on FaceTime, you'll probably find my custom Persona – digital Lance – a little less weird.
While the Apple Vision Pro hasn't been on the market very long and the $3,499 headset is not owned in iPhone numbers (think tens of thousands, not millions), this first big visionOS update is important.
I found it under Settings when I donned the headset for the first time in a week (yes, it's true, I don't find myself using the Vision Pro as often as I would my pocketable iPhone) and quickly accepted the update. It took around 15 minutes for the download and installation to complete.
visionOS 1.1 adds, among other things, enterprise-level Mobile Device Management (MDM) controls, closed captions and virtual keyboard improvements, enhanced Home View control, and the aforementioned Persona improvements.
I didn't test all of these features, but I couldn't wait to try out the updated Personas. Despite the update, Personas remains a “beta” feature. visionOS 1.1 improves the quality of Personas and adds a hands-free creation option.
Before we start, here's a look at my old Vision Pro Persona. Don't look away.
Personas are Vision Pro's digital versions of you that you can use in video conference calls on FaceTime and other supported platforms. The 3D image is not a video feed of your face. Instead, Vision Pro creates this digital simulacrum based on a Spatial Photography capture of your face. Even the glasses I have on my Persona are not real.
During my initial Vision Pro review, I followed Apple's in-headset instructions and held the Vision Pro in front of my face with the shiny glass front facing me. Vision Pro's voice guidance told me to slowly look left, right, up, and down, and to make a few facial expressions. All this lets the stereo cameras capture a 3D image map of my face.
Because there are also cameras inside the headset to track my eyes (and eyebrows), and a pair of cameras on the outside of the headset that point down at my face and hands, the Vision Pro can, based on how I move my face (and hands), manipulate my digital Persona like a puppet.
There's some agreement that Apple Vision Pro Personas look a lot like us but also ride the line between reality and the awful, uncanny valley. This update is ostensibly designed to help with that.
Apple, though, added a new wrinkle to the process. Now I can capture my Persona “hands-free,” which sounds great but means putting the Vision Pro on a table or shelf and then positioning yourself in front of the headset. Good luck finding a platform that's at the exact right height. I used a shelf in our home office but had to crouch down to get my face to where the Vision Pro could properly read it. On the other hand, I didn't have to hold the 600g headset up in front of my face. Hand capture still happens while you're wearing the headset.
It took a minute or so for Vision Pro to build my new Persona (see above). The result looks a lot like me and is, in my estimation, less creepy. It still matches my expressions and hand movements almost perfectly. Where my original Persona looked like it lacked a soul, this one has more warmth. I also noticed that the capture appears more expansive. My ears and bald head look a little more complete and I can see more of my clothing. I feel like a full-body scan and total Persona won't be far behind.
This by itself makes the visionOS 1.1 update worthwhile.
Other useful feature updates include the ability to remove system apps from the Home View. To do so, I looked at an app, in this case, Files, and pinched my thumb and forefinger together until the “Remove App” message appeared.
Apple also says it updated the virtual keyboard. In my initial review, I found this keyboard one of the weakest Vision Pro features. It's really hard to type accurately on this floating screen, and you can only use two fingers at a time. My accuracy was terrible. In the update, accuracy and the AI that guesses what you intended to type both appear somewhat improved.
Overall, it's nice to see Apple moving quickly to roll out features and updates to its powerful spatial computing platform. I'm not sure hands-free spatial scanning is truly useful, but I can report that my digital persona will no longer send you screaming from the room.
Microsoft has told us that it’s working on embedding artificial intelligence (AI) across a range of products, and it looks like it meant it, with the latest reports suggesting a more fleshed-out ‘AI Explorer’ feature for Windows 11.
Windows Central writes that AI Explorer will be the major new feature of an upcoming Windows 11 update, with Microsoft rumored to be working on a new AI assistance experience that’s described as an ‘advanced Copilot’ that will offer an embedded history and timeline feature.
Apparently, this will transform the activities you do on your PC into searchable moments. It’s said that this AI Explorer can be used in any app, enabling users to search conversations, documents, web pages, and images using natural language.
That promises a lot, implying you’ll be able to make requests like these examples that Windows Central gives:
“Find me that list of restaurants Jenna said she liked.”
“Find me that thing about dinosaurs.”
The advanced Copilot should then present everything it deems relevant – including every related word, phrase, image, and topic it can pull. It’s not clear if this means bringing up results from users' data stored locally on their PC, from the internet, or from a combination of the two, as we see in Windows 11's Search box. For privacy reasons, I personally would prefer it if AI Explorer kept to just searching local files stored on a device's hard drive, or at least gave us the option to exclude internet results.
The feature could also offer up suggestions for things you can do based on what you currently have on your screen. For instance, if you’re viewing a photo, you might see suggestions to remove the background in the Photos app.
When we expect more information
Rumors suggest that on March 21 there will be an announcement for the Surface Laptop 6 and Surface Pro 10, which are being hailed as Microsoft’s first real “AI PCs,” and will offer a range of features and upgrades powered by Microsoft’s next-gen AI tools. Sources say that these will go head-to-head with rivals like the iPad Pro and MacBook Pro in terms of efficiency and performance.
According to Neowin, we can look forward to the official launch of these PCs in April and June, but the AI features aren’t expected to be included right away. They’re forecast to be added in the second half of the year, so the first of these PCs to ship will be pretty much like existing Windows 11 PCs, just with some flashy hardware upgrades. It also seems like AI Explorer is specifically intended for these new machines, even if not right away, and that users of existing devices won’t be able to use it.
It sounds like we’ll have to continue to watch for more information from Microsoft, especially as it’s not clear what exactly to expect on March 21, but there’s a lot of hype and excitement that I hope it can live up to. Copilot’s present form is generally thought to be underwhelming and somewhat disappointing, so Microsoft has a lot to deliver if it wants to impress users and show them that it’s leading the pack with generative AI.
If you think that Windows 11’s File Explorer could be better, you’re not alone – and there’s a popular third-party alternative, the Files app. The Files app (which, despite its name, has no relation to Microsoft’s own File Explorer) just got an upgrade that makes it an even better tool for navigating your file systems, with the latest version of the app allowing users to navigate big folders more easily.
The Files app update 3.2 brings user interface (UI) improvements like a list view layout for files and folders, the capability to edit album covers of media files via folder properties, and support for higher quality thumbnails. Along with UI improvements, users can also expect many fixes and general improvements.
According to Windows Central, the Files app’s occasional instability while handling large folders was one of the biggest user complaints with it, and this update addresses that, too. The app should now be more functional when users work with bigger folders.
How the Files app measures up as a file explorer
Windows Central does state that it doesn’t think the Files app is quite ready to completely replace the default Windows File Explorer, but that “it can be a powerful and useful companion app.” It offers unique features that File Explorer itself doesn’t, and, to many users, it’s got a sleeker look. The app is available for both Windows 10 and Windows 11, but its performance can vary from system to system. Windows Central writes of its own investigation of the Files app’s performance, and it does report that the app has issues with performance and stability on some PCs. You can check the full changelog of what Files version 3.2 delivers if you’d like to know more.
Many users would like to see Windows’ old File Explorer include many of the Files app’s features, and maybe Microsoft is watching. It recently released its own proprietary PC Cleaner app, a system cleaner tool that offers many of the tools of popular paid third-party system cleaners for free. Also, Microsoft’s been on the receiving end of some heat from industry professionals and competitors, as well as from regulators in the European Union following the recent introduction of the Digital Markets Act (DMA). Offering tools like PC Cleaner and a souped-up File Explorer could be a way for it to win back some user trust and goodwill.
The existence of third-party apps like this is good for users twofold: it can motivate first-party developers to improve their products faster, and it gives users more choice over how they use their devices. The Files app looks like it sees regular updates and improvements, and it definitely sounds like it could be worth users’ while, given that it has no malware issues, provided you get good performance once it’s installed.
If you’d like to try out Files for yourself, bear in mind that it isn’t free: the app comes with a one-time charge of $8.99 / £7.49, although thankfully there aren’t any subscription fees. You can download it directly from the Microsoft Store.