Windows 11 just received a new update and there’s some good news for those who were experiencing weird and buggy behavior with their taskbar.
Namely that the frustrating taskbar bug – where it can vanish, before reappearing – is now fixed with the cumulative update for June 2024, as Microsoft makes clear in the support document relating to the upgrade.
Microsoft notes: “This update addresses a known issue that affects the taskbar. It might briefly glitch or not respond. It might also disappear and reappear.”
When the problem became known, Microsoft was swift to act, firing up a rollback for devices that had installed May’s preview update. This was implemented via a ‘Known Issue Rollback’, meaning Windows 11 users didn’t have to take any action or install another patch – the fix was put in place automatically (as Neowin spotted).
So, those who were worried about this bug carrying over to the June update – after all, May’s optional update is essentially the June cumulative update in testing form – needn’t fret. The problem is now fully resolved (or at least Microsoft says it is).
Useful new features are also part of the June update
What else is present in the new June update for Windows 11? It also delivers the long-awaited drag-and-drop functionality for File Explorer’s address bar, changes to Account Manager in the Start Menu, a refreshed Linked Devices page in the Settings app, built-in QR code generation for links, and many security-related tweaks.
We would always recommend that you make sure that you have Microsoft’s latest Patch Tuesday update installed, as these patches address the latest security risks and known exploitable vulnerabilities.
The Apple WWDC 2024 keynote is always one of the highlights of the tech calendar, and this year's edition was bigger than most. That's because, as widely predicted, Tim Cook used the occasion to reveal Apple Intelligence – arguably the biggest development in Apple-land since… well, the reveal of the Vision Pro last year.
As well as the big AI-related news, the nearly two-hour Apple event was absolutely packed with detail on everything from iOS 18 to the latest macOS to iPadOS, watchOS and tvOS.
You can check out our WWDC 2024 live blog for full info on everything that was announced, but if you want the highlights then here are the 13 biggest announcements from WWDC 2024.
1. Apple Intelligence is coming soon…
Let's start with the big one, then.
Apple Intelligence is Apple’s new family of AI features, which will thread its way through every Apple platform and even work with third-party apps. It'll be entirely free, and available on iOS 18, iPadOS 18 and macOS Sequoia via their respective betas, then fully rolled out later this year.
It approaches AI in a very Apple way, which means privacy first, with most features running on the device you’re using rather than in the cloud. When you need more power, Apple has a Private Cloud Compute option for complex requests, but Apple’s cloud still puts a bigger focus on privacy than any AI we’ve seen so far.
Among the many features on offer you'll get access to generative writing, generative image creation, and third-party API tools, in addition to the massive upgrade coming to Siri (see below). Apple was a little late to the AI party, but it'll be fully up to speed soon.
2. …but only on newer devices
Exciting though Apple Intelligence is, it will only be available on the iPhone 15 Pro, iPhone 15 Pro Max, and iPad and Mac models using the Apple M1 chip or later.
This is understandable, given that it needs powerful Apple silicon to work, but will be disappointing for anyone who owns a standard iPhone 15, any iPhone 14 or earlier model, or indeed an Intel-based MacBook. Expect sales of the rumored iPhone 16 range to benefit considerably…
3. Siri got a huge update – and now comes with added ChatGPT
Apple's voice assistant has been treading water for years, but Siri will finally get the makeover it so badly needs in iOS 18. That includes a visual refresh, with the assistant now a glowing light around the edges of your iPhone's screen.
But Siri has also been given a much-needed brain transplant. It'll have 'on-screen awareness' to make it work better with apps – and if it can't answer a question, it can hand the query over to ChatGPT (powered by GPT-4o) for free, cloud-based wisdom. The bad news? The new Siri is powered by Apple Intelligence, which means that, as stated above, you'll only get it on the latest iPhone 15 Pro models, or iPads and Macs that have at least an M1 chip. Still, it'll be a nice perk when we do upgrade.
4. Custom emojis with generative AI will ruin communication
Oh boy. If you thought emoji use was bad before, just wait for Apple’s new feature that enables you to just tell your iPhone/iPad/Mac what cool new emoji you want, and it’ll create it for you using generative AI – so there will truly be one for every occasion.
Some will be adorable, some will be surreal, some will be disturbing, obviously. Will we love this feature? Or, more likely, will it be the moment when… hmmm, how to describe this? 'Hey iPhone, make us an emoji of a man in a leather jacket jumping his jetski over a shark.'
5. iOS 18's new updates look pretty sweet
As expected, Apple revealed its next major software update, iOS 18, confirming that big changes are headed to core iPhone apps including Mail, Messages, Maps and Photos.
Mail, for instance, will soon be capable of categorizing your emails and providing easy-to-read digests, while the Photos app is being unified into a single view comprising a photo grid and a dates grid.
iOS 18 will soon let you react to messages using any emoji in the Messages app, and you'll be able to schedule messages to send at a convenient time in the future. Significant customization improvements are also coming to the Home Screen and Control Center.
6. macOS 15 has a name: Sequoia
We also found out the name of macOS 15: Sequoia. Despite the big reveal at the keynote, it wasn’t too much of a shock that Apple went for this. For a start, modern macOS releases have all been named after Californian landmarks or places – previous editions have been called Big Sur, Ventura, Monterey and Sonoma. Sequoia, named after a national park in the Sierra Nevada, continues this tradition.
Internet sleuths also spotted ahead of WWDC that Apple has trademarked a number of potential names: Redwood, Grizzly, Mammoth, Pacific, Rincon, Farallon, Miramar, Condor, Diablo, Shasta… and Sequoia. As well as its name, we also found out that it’ll be coming out ‘this Fall’ (so September or October 2024), with a developer preview available right now.
7. The Vision Pro is going global
As our Vision Pro review makes clear, Apple's mixed-reality headset is a special piece of kit that really has to be experienced. Thus far, it's only been available in the US – but that's changing now.
As of Thursday, June 13, customers in China, Hong Kong, Japan, and Singapore will be able to pre-order the Vision Pro, with devices shipping from Friday, June 28. Those in Australia, Canada, France, Germany, and the UK will have to wait a little longer, but will be able to pre-order from June 28, with devices available from July 12.
How much will it cost? Well, we know that in the UK it will start at £3,499, and based on the US price – which is also $3,499 – we'd expect it to retail at around AU$6,349.
8. visionOS 2 will turn your 2D images into Spatial Photos
Alongside a global launch for the hardware, Apple announced visionOS 2 on the software side. The standout feature is that the headset can now turn your flat pictures into spatial photos, using machine learning.
Spatial images have depth that makes them feel more like you’re reliving a memory than looking at a regular photo, and this is a huge win for those of you with overflowing iCloud libraries.
When the visionOS 2 update rolls out later this year, other features you’ll unlock include travel mode being able to work on trains (alongside planes), while your Mac virtual display will get a lot bigger, with the max size being like having two 4K displays sitting side by side. There will also be new hand-gesture controls, which should allow you to quickly navigate to the settings menu, home view, and other useful tools.
9. The iPad finally gets a Calculator app
The biggest cheer of the night came not for the Vision Pro or iOS 18, but for… the Calculator app in iPadOS 18. Yes, really.
In fairness, the iPad has never had a native Calculator app, with users instead having to make do with third-party options, a fact which has inspired more than a few memes at Apple's expense over the years.
The new Calculator app is more than just a scaled-up version of the iOS app, though. Extra features include a resizable window and a sidebar that lists recent calculations. But better still is the new Math Notes integration.
This works with the Apple Pencil, allowing you to write equations that will be solved immediately once you write an equals sign. You can then make changes to various elements of the equation and see how the results change in real-time, plus turn equations immediately into charts and more. It looks pretty impressive.
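To make the evaluate-on-equals idea concrete, here's a toy sketch in Python. This is not Apple's implementation, and it assumes the handwriting has already been recognized as a plain-text expression; it simply waits for the equals sign before computing a result:

```python
# Toy sketch of the "solve when the equals sign appears" idea – not
# Apple's implementation. Assumes handwriting recognition has already
# turned the pencil strokes into a plain-text expression.
from sympy import sympify  # pip install sympy

def maybe_solve(recognized_text: str):
    """Return 'expression = result' once the user has written '='."""
    stripped = recognized_text.strip()
    if not stripped.endswith("="):
        return None  # the user hasn't finished writing the equation yet
    expression = stripped.rstrip("=").strip()
    return f"{expression} = {sympify(expression)}"

print(maybe_solve("12 * (3 + 4)"))    # None – still waiting for '='
print(maybe_solve("12 * (3 + 4) ="))  # 12 * (3 + 4) = 84
```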
10. watchOS 11 gets smarter about your health
watchOS 11 also introduces a new Vitals app that can help users understand how well their body responds to and recovers from stress. Meanwhile, the new Training Load score uses an algorithm to generate a score based on how well a person is responding to training, harnessing metrics such as average and resting heart rates, combined with their age and weight data.
Apple hasn’t revamped watchOS that much, but the 11th iteration is a sensible evolution that should make wearing one of the best Apple Watches even better.
11. People will hear you way more clearly when calling from AirPods Pro 2
AirPods Pro 2 are getting a Voice Isolation feature, which promises to massively reduce wind noise and other loud sounds from the mic when you call someone using them. Given how good the noise cancellation on AirPods Pro 2 is, we’re looking forward to seeing how well this works. Other AirPods miss out, sadly – the feature needs the mighty power of the Apple H2 chip, which is currently unique to the AirPods Pro 2.
12. tvOS 18 gets InSight and smarter dialogue
Unsurprisingly, tvOS wasn't exactly the main focus of the WWDC 2024 keynote, but it did get a few new features to enhance your Apple TV experience.
The most interesting of the tvOS 18 updates looks like InSight, the company’s own take on the X-Ray feature used by Amazon Prime Video, which displays onscreen info about actors, characters, and background music in movies and shows.
The Apple TV 4K’s Enhance Dialogue feature, meanwhile, is getting an AI boost to make voices sound clearer across a range of devices, and subtitles will get a similar treatment, appearing automatically when you mute the TV or skip back through a program.
Last but not least, the Apple TV 4K will now be able to output images in the ultra-wide 21:9 aspect ratio format for better compatibility with 4K projectors.
13. Photos is getting its own Magic Eraser
Safe to say that iPhone owners have been jealous of Magic Eraser on the Google Pixel range, and fortunately Apple is finally fixing this – and it's not just on iOS, but will be within Photos on iPadOS and macOS as well.
With the new Clean Up feature, you'll be able to circle an object – or even a person – in the background to intelligently remove it. Furthermore, search is getting much smarter within the Photos app, making it easier for you to find the photos that you care about.
Google IO 2024 is approaching fast, with the big G's festival for Android 15, Wear OS 5, Android TV and more kicking off on May 14. And we now have an official schedule to give us some hints of the software (and maybe hardware) announcements in the pipeline.
The Google IO 2024 schedule (spotted by @MishaalRahman on X, formerly Twitter) naturally doesn't reveal any specifics, but it does confirm where we'll see some big new software upgrades.
The keynote, which will be well worth tuning into, will kick off at 10am PT / 5pm GMT on May 14 (which works out as 3am AEST in Australia). But the details of the follow-up sessions give us a taster of what will be shown off, including our first proper look at Wear OS 5.
That's confirmed in the 'Building for the future of Wear OS' session, which will help developers “discover the new features of Wear OS 5”. Considering the smartwatch platform appeared to be flirting with the Google Graveyard not long ago, that's good news. We'll presumably hear more about a release date at the event, and maybe even a Pixel Watch 3.
What else does the schedule reveal? Android 15 was always a shoo-in for this year's show, so it's no surprise that the OS will be covered alongside “generative AI, form factors” and more at Google IO 2024.
Thirdly, AI will naturally be a huge general theme, with Google Gemini a consistent thread across the event. Developers will discover “new ways to build immersive 3D maps” and how to make “next-gen AI apps with Gemini models”. Gemini will also power new apps for Google Chat and create new content from images and video, thanks to Google's multi-modal Gemini Pro Vision model.
Fans of Android Auto will also be pleased to hear that it'll likely get some upgrades, too, with one developer session titled “Android for Cars: new in-car experiences”. Likewise, Google TV and Android TV OS will get a mention, at the very least, with one session promising to show off “new user experience enhancements in Google TV and the latest additions to the next Android TV OS platform”.
Because Google IO 2024 is a developer conference, its sessions are all themed around software – but we'll almost certainly see lots of new hardware treats announced during the keynote, too.
This week, rumors about a refreshed Pixel Tablet (rather than a Pixel Tablet 2) suggested it could also make its bow at Google's conference. A Google Pixel Fold 2 is also on the cards, though we have also heard whispers of a Pixel 9 Pro Fold instead.
As always, we can expect some surprises too, like when Google teased its live-translation glasses at Google IO 2022, which then sadly disappeared in a cloud of vaporware. Let's hope its new ideas for this year's conference stick around a little longer.
It took a while, but Google has released the long-awaited upgrade to its Find My Device network. This may come as a surprise. The update was originally announced back in May 2023, but was soon delayed with no apparent launch date. Then, out of nowhere, Google released the software on April 8 without major fanfare. As a result, you may feel lost, but we can help you find your way.
Here's a list of the seven most important things you need to know about the Find My Device update. We cover what’s new in the update as well as the devices that are compatible with the network, because not everything works and there’s still work to be done.
1. It’s a big upgrade for Google’s old Find My Device network
The previous network was very limited in what it could do. It was only able to detect the odd Android smartphone or Wear OS smartwatch. However, that limitation is now gone, as Find My Device can sniff out other devices – most notably Bluetooth location trackers.
Gadgets also don’t need to be connected to the internet or have location services turned on, since the software can detect them so long as they’re within Bluetooth range. However, Find My Device won’t tell you exactly where the devices are. You’ll instead be given an approximate location on your on-screen map. You'll ultimately have to do the legwork yourself.
Find My Device functions similarly to Apple’s Find My network, so “location data is end-to-end encrypted,” meaning no one, not even Google, can take a peek.
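To illustrate the end-to-end encryption idea, here's a minimal sketch in Python (using the PyNaCl library) of how a passing 'finder' phone could encrypt a location report so that only the tracker's owner can read it. This shows the general technique only, not Google's actual protocol, and the key handling and report format are invented for the example:

```python
# Minimal sketch of an end-to-end encrypted location report – an
# illustration of the general technique, not Google's actual protocol.
# Requires PyNaCl: pip install pynacl
import json
from nacl.public import PrivateKey, SealedBox

# The owner generates a keypair when setting up the tracker; only the
# public key would ever be shared with the network (hypothetical step).
owner_key = PrivateKey.generate()

# A passing phone spots the tracker over Bluetooth and encrypts its own
# GPS fix against the owner's public key before uploading it.
report = json.dumps({"lat": 51.5014, "lon": -0.1419, "ts": 1712534400})
ciphertext = SealedBox(owner_key.public_key).encrypt(report.encode())

# The server and the finder only ever see ciphertext; the owner's device
# decrypts the report locally.
print(json.loads(SealedBox(owner_key).decrypt(ciphertext)))
```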
2. Google was waiting for Apple to add support to iPhones
The update was supposed to launch in July 2023, but it had to be delayed because of Apple. Google was worried about unwanted location trackers, and wanted Apple to introduce “similar protections for iOS.” Unfortunately, the iPhone maker decided to drag its feet when it came to adding unknown tracker alerts to its own phones.
The wait may soon be over, as the iOS 17.5 beta contains lines of code suggesting that the iPhone will soon get these anti-stalking measures. Soon, iOS devices might encourage users to disable unwanted Bluetooth trackers that aren’t certified for Apple’s Find My network. It’s unknown when this feature will roll out, as the code in the beta doesn’t actually do anything when enabled.
Given that the unwanted tracker detection code is already present in the iOS 17.5 beta, Apple's release may be imminent. Apple may also have given Google the green light to roll out the Find My Device upgrade ahead of time, in preparation for its own software launch.
3. It will roll out globally
Google states the new Find My Device will roll out to all Android devices around the world, starting in the US and Canada. A company representative told us other countries will receive the same update within the coming months, although they couldn’t give us an exact date.
Android devices do need to meet a couple of requirements to support the network. Luckily, they’re not super strict. All you need is a smartphone running Android 9 or later with Bluetooth capabilities.
If you own either a Pixel 8 or Pixel 8 Pro, you’ll get an exclusive feature: the ability to find a phone through the network even if it’s powered down. Google reps said these models have special hardware that keeps power flowing to the Bluetooth chip when they’re off. Google is working with other manufacturers on bringing this feature to other premium Android devices.
4. You’ll receive unwanted tracker alerts
Apple AirTags are meant to be attached to frequently lost items like house keys or luggage so you can find them easily. Unfortunately, several bad eggs have utilized them as an inexpensive way to stalk targets. Google eventually updated Android to give users a way to detect unwanted AirTags.
For nearly a year, the OS could only seek out AirTags, but now with the upgrade, Android phones can locate Bluetooth trackers from third-party brands such as Tile, Chipolo, and Pebblebee. It is, by far, the single most important feature in the update, as it'll help ensure your privacy and safety.
You won’t be able to find out who placed a tracker on you. According to a post on the company’s Security blog, only the owner can view that information.
5. Chipolo and Pebblebee are launching new trackers for it soon
Speaking of Chipolo and Pebblebee, the two brands have announced new products that will take full advantage of the revamped network. Google reps confirmed to us they’ll be “compatible with unknown tracker alerts across Android and iOS”.
On May 27th, we’ll see the introduction of the Chipolo ONE Point item tracker as well as the Chipolo CARD Point wallet finder. You’ll be able to find the location of whatever item they’re attached to via the Find My Device app. The pair will also sport speakers that ring out a loud noise to let you know where they are. What’s more, Chipolo’s products have a long battery life: Chipolo says the CARD finder lasts as long as two years on a single charge.
Pebblebee is achieving something similar with its Tag, Card, and Clip trackers. They’re small, lightweight, and attachable to larger items. Plus, the trio all have a loud buzzer for easy locating. All three are available for pre-order right now, although no shipping date was given.
6. It’ll work nicely with your Nest products
For smart home users, you’ll be able to connect the Find My Device app to a Google Nest device to find lost items. An on-screen animation will show a sequence of images displaying all of the Nest hardware in your home as the network attempts to find said missing item. Be aware the tech won’t give you an exact location.
A short video in the official announcement shows there'll be a message stating where the item was last seen, at what time, and whether there was another smart home device next to it. Next to the text will be a refresh option in case the lost item doesn’t show up.
Below the message will be a set of tools to help you locate it. You can play a sound from the tracker’s speakers, share the device, or mark it as lost.
7. Headphones are invited to the tracking party too
Believe it or not, some insidious individuals have used earbuds and headphones to stalk people. To help combat this, Google has equipped Find My Device with a way to detect a select number of earbuds. The list of supporting hardware is not large as it’ll only be able to locate three specific models. They are the JBL Tour Pro 2, the JBL Tour One M2, and the high-end Sony WH-1000XM5. Apple AirPods are not on the list, although support for these could come out at a later time.
Quite the extensive list, as you can see, but it's all important information to know. Everything here works together to keep you and your devices safe.
OpenAI CEO Sam Altman has revealed what the future might hold for ChatGPT, the artificial intelligence (AI) chatbot that's taken the world by storm, in a wide-ranging interview. Speaking to Lex Fridman, an MIT artificial intelligence researcher and podcaster, Altman talks about plans for GPT-4 and GPT-5, as well as his very temporary ousting as CEO, and Elon Musk’s ongoing lawsuit.
Now, I say GPT-5, but that’s just the unofficial name being used to refer to it, as the model is still in development and even Altman himself alludes to not knowing conclusively what it’ll end up being called. He does give this somewhat cryptic quote about the nature of OpenAI’s upcoming release:
“… what’s the one big unlock? Is it a bigger computer? Is it a new secret? Is it something else? It’s all of these things together.”
He then follows that by stating that he and his colleagues think that what OpenAI does really well is “multiply 200 medium-sized things together into one giant thing.” He specifically confirms to Fridman that this applies “Especially on the technical side.” When Altman and Fridman talk about the leap from GPT-4 to GPT-5, Altman does say he’s excited to see the next GPT iteration “be smarter.”
What's on the horizon for OpenAI
Fridman asks Altman directly to “blink twice” if we can expect GPT-5 this year, which Altman declines to do. Instead, he explains that OpenAI will be releasing other important things first, specifically the new model (currently unnamed) that Altman spoke about so poetically. This piqued my interest, and I wonder whether those releases are related to anything we’ve seen (and tried) so far, or something new altogether. I would recommend watching the entire interview, as it’s an interesting glimpse into the mind of one of the people leading the charge and shaping what the next generation of technology, specifically ChatGPT, will look like.
Overall, we can’t conclude much, and this interview suggests that what OpenAI is working on is pretty important and kept tightly under wraps – and that Altman likes speaking in riddles. That’s somewhat amusing, but I think people would like to know how large the advancement in AI we’re about to see is. I think Altman does have some awareness of people’s anxieties about the fact that we are very much in an era of a widespread AI revolution, and he does at least recognise that society needs time to adapt and process the introduction of a technological force like AI.
He seems like he’s aware on some level of the potential that AI and the very concept of artificial general intelligence (AGI) will probably overhaul almost every aspect of our lives and the world, and that gives me some reassurance. Altman and OpenAI want our attention and right now, they’ve got it – and it sounds like they’re cooking up something very special to keep it.
Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.
Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to CoPilot Pro and ChatGPT Plus.
But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).
This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily-digestible explainer below…
1. Gemini replaces Google Bard and Duet AI
In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.
Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.
But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.
To sum up, there are three main ways to access Google Gemini: the new Gemini app on Android, the Google app on iOS, and the Gemini website on desktop.
2. There's a new Gemini app for Android
As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.
There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.
The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.
Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…
3. You may want to stick with Google Assistant (for now)
The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.
The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.
Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.
Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.
4. Gemini is a new way to quiz Google's other apps
Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more.
As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.
This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps.
Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.
5. The free version of Gemini has limitations
The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced.
This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative, less accurate, unable to handle multi-step questions, can't really code and has more limited data-handling powers.
This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.
Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One…
6. Gemini Advanced is tempting for Google One users
The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which also includes 2TB of cloud storage.
This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for one anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.
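As a quick back-of-envelope check of that math, using the US list prices quoted above:

```python
# Rough arithmetic only, using the US list prices quoted above.
ai_premium_plan = 19.99   # Google One AI Premium (Gemini Advanced + 2TB storage)
storage_only_2tb = 9.99   # standalone 2TB Google One plan
extra_for_ai = ai_premium_plan - storage_only_2tb
print(f"Effective cost of the AI features: ${extra_for_ai:.2f} per month")
# -> Effective cost of the AI features: $10.00 per month
```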
There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.
This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, more sticky too).
7. The Gemini app could take a little while to reach the UK and EU
While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.
Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”
While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.
Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.
Windows 11 users who are installing the latest update are having some serious issues, by many accounts.
Windows Latest reported that there are bugs in the preview update – so yes, this is an optional update, not something you have to install – that are causing major problems with Windows 11’s interface in one way or another.
For starters, with patch KB5034204, some users are apparently experiencing a glitch where File Explorer – the app that handles your folders, files, and the desktop itself – becomes unresponsive. This can lead to the whole desktop going blank (all folders and icons disappearing) for a while, before returning to normal, we’re told. Others are reporting File Explorer crashing while shutting down their PC.
Windows Latest further details reports of icons like the Recycle Bin vanishing, taskbar icons not working, and even the Windows 11 taskbar itself going missing, as complained about on Reddit (plus this is a problem the tech site encountered itself).
The other issue folks seem to be experiencing with KB5034204 is that the update fails to install. There are complaints on Microsoft’s Feedback Hub that the installation process reaches 100%, so looks like it has finished, but then crashes out with a message mentioning missing files. Stop code errors (like ‘0x8007000d’) are also in evidence with these installation mishaps.
Analysis: Out of the frying pan…
Clearly, we need to take into account that this is a preview update, meaning that it’s still officially in testing, and optional patches like this aren’t installed unless you specifically ask for them. As with any pre-release software, you can expect problems, in other words.
Even so, you might want an optional update because it provides a fix for a bug you’re suffering with, and in the case of KB5034204, it resolves a couple of notable issues disrupting video chats and streaming audio (and a misbehaving Start menu, too, plus more besides).
However, in this case, you might swap one problem for another when installing this optional update, and possibly a worse glitch (the wonkiness caused with the Windows 11 interface outlined above seems pretty nasty).
That said, there is a solution (kind of) for the missing taskbar at least, which is to press the Windows key + X – apparently, that sees the bar come back, but its behavior may still be odd going by the various reports around this particular bug.
It’s disappointing to see installation failures popping up again with this preview update, mainly because this was a flaw in evidence with the January cumulative update. It seems that Microsoft hasn’t resolved this yet, then, and the fear is that it might still be present in the February update for Windows 11 (which this preview is an advance version of, as you may realize).
Windows 11’s Snipping Tool is set to get a handy feature to embellish screenshots, or at least it seems that way.
Leaker PhantomOfEarth discovered the new abilities in the app by tinkering with bits and pieces in version 11.2312.33.0 of Snipping Tool. As you can see in the tweet below, the functionality allows the user to draw shapes (and fill them with color) and lines.
“Coming soon to Snipping Tool: you will be able to add shapes such as circles and arrows to images you are editing!” pic.twitter.com/JaEGsSERhQ (January 17, 2024)
That means you can highlight parts of screenshots by pointing with arrows – for an instructional step-by-step tutorial you’ve made with screen grabs, for example – or add different shapes as needed.
Note that this is not in testing yet, because as noted, the leaker needed to play with the app’s configuration to get it going. However, the hidden functionality does seem to be working fine, more or less, so it’s likely that a rollout to Windows 11 testers isn’t far off.
Analysis: A feature drive with core apps
While you could furnish your screenshots from Snipping Tool with these kinds of extras simply by opening the image in Paint, it’s handy to have this feature on tap to directly work on a grab without needing to go to a second app.
Building out some of the basic Windows 11 apps is very much becoming a theme for Microsoft of late. For example, recently Snipping Tool has been testing a ‘combined capture bar’ (for easily switching between capturing screenshots or video clips), and the ability to lift text straight from screenshots, which is really nifty in some scenarios.
Elsewhere, core apps like Paint and Notepad are getting an infusion of AI (with Cocreator and a rumored Cowriter addition), and there’s been a lot of work in other respects with Notepad such as adding tabs.
We think these initiatives are a good line of attack for Microsoft, although there are always folks who believe that simple apps like Snipping Tool or Notepad should be kept basic, and advanced functionality is in danger of cluttering up these streamlined utilities. We get where that sentiment comes from, but we don’t think Microsoft is pushing those boundaries yet.
We've had quite the wait for the Apple Vision Pro, considering it was unveiled back in June at Apple's annual WWDC event. Yesterday we finally got the news that the Vision Pro will be going on sale on Friday, February 2, with preorders open on Friday, January 19 – and some other new bits of information have now emerged, alongside its first video ad (below).
As Apple goes into full sales mode for this pricey mixed reality headset, it's answering some of the remaining questions we had about the device, and giving us a better idea of what it's capable of. Considering one of these will cost you $3,499 (about £2,750 / AU$5,225) and up, you're no doubt going to want all of the details you can get.
1. Apple thinks it deserves to be in a sci-fi movie
Take a look at this brand new advert for the Apple Vision Pro and see how many famous movies you can name. There's a definite sci-fi angle here, with films like Back to the Future and Star Wars included, and Apple clearly wants to emphasize the futuristic nature of the device (and make strapping something to your face seem cool rather than nerdy).
If you've got a good memory then you might remember that one of the first adverts for the iPhone also made use of short clips cut from a multitude of films, featuring stars such as Marilyn Monroe, Michael Douglas, and Steve McQueen. Some 16 years on, Apple is once again using the power of the movies to push a next-gen piece of hardware.
2. The battery won't last for the whole of Oppenheimer
Speaking of movies, you're going to need a recharge if you want to watch all of Oppenheimer on the Apple Vision Pro. Christopher Nolan's epic film runs for three hours and one minute, whereas the Vision Pro product page (via MacRumors) puts battery life at 2.5 hours for watching 2D videos.
That's when you're watching a video in the Apple TV app, and in one of the virtual environments that the Vision Pro is able to conjure up. Interestingly, the product page text saying that the device could run indefinitely as long as it was plugged into a power source has now been quietly removed.
3. The software is still a work in progress
Considering the high price of the Apple Vision Pro, and talk of limited availability, this doesn't really feel like a mainstream device that Apple is expecting everyone to go out and buy. It's certainly no iPhone or Apple Watch – though a cheaper Vision Pro, rumored to be in the pipeline, could certainly change that dynamic somewhat.
With that in mind, the software still seems to be a work in progress. As 9to5Mac spotted in the official Vision Pro press release, the Persona feature is going to have a beta label attached for the time being – that's where you're represented in video calls by a 3D digital avatar that doesn't have a bulky mixed reality headset strapped on.
4. Here's what you'll be getting in the box
As per the official press release from Apple, if you put down the money for a Vision Pro then you'll get two different bands to choose from and wrap around your head: they are the Solo Knit Band and the Dual Loop Band, though it's not immediately clear what the differences are between them.
Also in the box we've got a light seal, two light seal cushions, what's described as an “Apple Vision Pro Cover” for the front of the headset, an external battery pack, a USB-C charging cable, a USB-C power adapter, and the accessory that we've all been wanting to see included – an official Apple polishing cloth.
5. Apple could release an app to help you fit the headset
When it comes to fitting the Apple Vision Pro snugly to your head, we think that Apple might encourage buyers to head to a physical store so that they can be helped out by an expert. However, it would seem that Apple also has plans for making sure you get the best possible fit at home.
As spotted by Patently Apple, a new patent filed by Apple mentions a “fit guidance” system inside an iPhone app. It will apparently work with “head-mountable devices” – very much like the Vision Pro – and looks designed to ensure that the user experience isn't spoiled by having the headset badly fitted.
6. There'll be plenty of content to watch
Another little nugget from the Apple Vision Pro press release is that users will be able to access “more than 150 3D titles with incredible depth”, all through the Apple TV app. Apple is also introducing a new Immersive Video format, which promises 180-degree, three-dimensional videos in 8K quality.
This 3D video could end up being one of the most compelling reasons to buy an Apple Vision Pro – we were certainly impressed when we got to try it out for ourselves, and you can even record your own spatial video for playing back on the headset if you've got an iPhone 15 Pro or an iPhone 15 Pro Max.
Meta’s Connect developer conferences have been fairly humble these past couple of years as the company shifted to online events due to the pandemic. But for 2023, the tech giant returned to an in-person event and took some big swings.
The star of the show was undoubtedly the Quest 3. It features improved hardware running on the Snapdragon XR2 Gen 2 SoC (system on a chip), a depth-mapping upgrade, and greater support for video games. The reveal was certainly impressive. However, as the conference went on, it felt like the spotlight shifted to all the AI announcements.
We’ve known about some of the AI models Meta has been developing for a while now, like its revamped chatbot designed to take on GPT-4. But as it turns out, there was a lot more going on behind the scenes, as the company showed off a slew of AI features coming to its messaging apps.
There is a lot to cover, so if you want to know about a specific topic, feel free to skip ahead to the relevant section. Or you can read the whole thing as it happened.
Virtual Reality
1. Meta Quest 3
$499.99
Available for pre-order
Launches October 10
We finally get a look at the Meta Quest 3 VR headset after months of leaks. Compared to the Quest 2, this new model is 40 percent thinner, thanks to pancake lenses allowing for a slimmer design, according to company CEO Mark Zuckerberg. The displays output 4K-class resolution (2,064 x 2,208 pixels) per eye for the highest quality possible. The speakers are getting an upgrade too: they now have a “40 percent louder volume range than Meta Quest 2”.
All this will be powered by the Qualcomm Snapdragon XR2 Gen 2 chipset mentioned earlier, which is said to be capable of twice “the graphical performance.”
Also, the headset is paired with two Touch Plus controllers, now boasting better haptic feedback for more immersive gaming. The Quest 3 is currently available for pre-order on Meta’s official website. Prices start at $499.99 for the 128GB model, while the 512GB headset is $649.99. It ships out on October 10.
2. Better gaming
Xbox Cloud Gaming coming in December
No longer need a PC
Some titles will be in mixed reality
A large portion of Zuckerberg’s presentation was dedicated to gaming as Meta wants gamers to adopt its headset for a fresh, new experience. To enable this, Xbox Cloud Gaming will be accessible on the Quest 3 this December. This means you can play Halo Infinite or Minecraft on an immersive virtual screen. And the best part is you no longer need to connect to a gaming PC to run your favorite titles. Thanks to the Snapdragon chip, the headset is now powerful enough to run the latest games.
For greater interactivity, some titles like BAM! can be played on a table in your house through a mixed reality environment. The Quest 3 will display the board game in front of you while you still see the room around you.
3. Immersive environments
Will automatically map your room
Virtual objects appear
Can switch between immersive and blended spaces
Mixed reality is made possible due to the Quest 3’s “full-color passthrough capability and a depth sensor”. The device will scan a room, taking note of the objects in it in order to set up a mixed-reality space. This is all done automatically, by the way. Through this, virtual objects will appear in your house.
Besides video games, the mixed reality spaces can be used to establish your own immersive workout or meditation area. For basketball or MMA fans, you can get ring-side seats where you can watch your favorite teams or fighters duke it out as if you’re there. Double-tapping the headset on the side changes the view from an immersive perspective to a wide-angle shot where you can see everything.
Generative AI
4. Meta AI assistant
Pulls real-time info from Microsoft's Bing
Will be available on WhatsApp, Instagram, and more
Can access the internet
Mark Zuckerberg revealed that Meta has entered a partnership with Microsoft, letting its new in-app assistant, Meta AI, pull real-time information from Bing search. It works in much the same way as other chatbots: you can ask quick questions or engage it in some light conversation.
What’s interesting is that it’ll be available on Facebook, Instagram, Messenger, and WhatsApp, with access to the internet for displaying real-time information. That access can backfire, as the AI may still hallucinate or come up with false information; to combat this, Meta states it has carefully trained its AI to stay accurate.
It’s unknown when the assistant will launch officially, although we did ask. We should mention it will be available in beta on the upcoming second-generation Ray-Ban smart glasses, which launch in October.
5. Multiple personalities
AI Assistant can have a persona
These personas can offer specific advice
Or be a source of entertainment
It seems Meta AI will have split personalities, as it'll be possible to have it emulate a certain persona. Each one is based on a famous public figure. For example, Victor the fitness coach is based on basketball star Dwyane Wade. Seemingly, each persona will appear with a video of the celebrity in the corner; the video is connected to the AI and will emote according to the text.
The personas do get a little wacky. Rapper Snoop Dogg gave his likeness to be the Dungeon Master model guiding people through a choose-your-own-adventure text game. Others have a more practical use like the chef AI giving cooking advice.
6. Generating images
Emu can generate high quality images
Can be accessed through Instagram and WhatsApp
Can generate stickers in three seconds
Emu, or Expressive Media Universe, is Meta’s new image generation engine. Like others of its kind, Emu is capable of pumping out high-quality images matching a specific text prompt. However, it will do so in five seconds flat – or so Mark Zuckerberg claims. What’s unique about this engine is it will power content generation on Meta’s other apps like Instagram and WhatsApp.
On the two platforms, Emu will allow users to create their own stickers for group chats in about three seconds. Generating images will require you to enter a forward slash and then a prompt such as “/image a sailboat with infinite sails.” This technology is being used on Instagram to generate unique backgrounds and new filters.
7. AI Studio
User will be able to make their own AI
Sandbox kit will make it easy to create models
Sandbox launches next year
Meta is opening the door for people to come in and make their own AI via the AI Studio platform. Within the coming weeks, developers can get their hands on a new API that they can use to build their very own artificial personality. Non-programmers will get the opportunity to do the same through a company-provided sandbox. However, it’ll be a while until that sees the light of day, as it won’t roll out until early 2024.
The tech giant explains that with this tech you can create your own NPCs (non-player characters) for Horizon Worlds.
Smart glasses
8. Next-gen Ray-Bans
$299
Available in 15 countries
Launches October 17
Near the end of his presentation, Mark Zuckerberg announced the next generation of Ray-Ban smart glasses, now sporting better visual quality, better audio, and a more lightweight body. In the corner of the frames sits a 12MP ultrawide camera capable of recording 1080p video. The glasses have 32GB of storage, allowing you to store over 100 videos or 500 photos, according to Meta.
What’s more, it comes with a snazzy-looking leather charging case similar to the kind you get with a normal pair of Ray-Bans. With the case, the Ray-Ban smart glasses can last up to 36 hours on a single charge.
It’s currently available for pre-order for $299 in either Wayfarer brown or Headliner black. It launches October 17 in 15 countries, including “the US, Canada, Australia, and throughout Europe.”
9. Livestreaming
Can connect to Instagram for livestreaming
Touch controls activate certain features
Meta is giving its next-gen smart glasses the ability to livestream directly to Instagram and Facebook. In the demonstration, a new glasses icon appears in the app’s video recording section. Turning on the icon and double-tapping the side of the glasses will connect the device to the app so viewers can see what you’re seeing.
Additionally, tapping and holding the side of the frame lets you hear the latest comments out loud through the glasses’ internal speakers. That way, streamers can stay in touch with their community.
This feature will be available when the updated Ray-Bans launch next month.
And that’s pretty much the entire event. As you can see, it was stacked. If you want to know more, be sure to check out TechRadar’s hands-on review of the Ray-Ban smart glasses.