Google I/O 2024 will take place on May 14 – here’s what to expect

The big day has been set: Google I/O 2024 will kick off on Tuesday, May 14 at 10am PDT (1pm EDT / 6pm BST) and continue into the following day.

Assuming history repeats itself, the keynote will be hosted by CEO Sundar Pichai at the Shoreline Amphitheatre in Mountain View, California, where the event has been held for the past two years. It'll take place in front of a live audience, and of course everyone else will be able to watch as it unfolds via livestream.

No one outside Google knows what will be revealed at Google I/O 2024. The tech giant dropped the news out of the blue after posting an interactive puzzle game on its website. But despite the limited information, we can speculate about what we might see at the event, because the company has been dropping hints over the past few months.

Potential Gemini updates 

The most obvious pick here is artificial intelligence. Even though we’re only about a quarter of the way in, 2024 has been a big year for Google AI. We saw the launch of the Gemini models, the brand’s very own family of LLMs, as well as the rebranding of several other AIs under the Gemini moniker. Expect to see multiple demonstrations of what the tech will be able to do in the near future. We could also find out more about Gemma, the lightweight open-model counterpart to big brother Gemini.

It’s possible Pichai, or one of the hosts, will talk about improving the AI’s performance. If you’re not aware, Gemini has had some issues lately regarding, shall we say, inaccurate depictions of ethnic groups. Plus, hallucinations remain a problem.

Google Gemini AI

(Image credit: Google)

New hardware and Android 15's debut

When it comes to hardware, Google I/O 2024 will most likely see the debut of the midrange Pixel 8a. I/O 2023 saw the reveal of the Pixel 7a, so it makes sense that the company will repeat the trick with its successor.

Recent leaks claim the smartphone will run on the Tensor G3 chipset and that the Pixel 8a will be a little bigger than the previous generation. Certain aspects of the Pixel 8a's potential design do concern us, though, like the large bezels around the display.

We should also expect to see the full debut of Android 15. Mid-February saw the launch of the first Android 15 developer preview, giving the world its first opportunity to get hands-on with the upcoming OS. Very little is known about it so far, but we are expecting lock screen widgets to make their long-awaited return, plus the ability to save pairs of apps, among other things.

Google Pixel 8 review back angled case

(Image credit: Future | Alex Walker-Todd)

We're almost certain to see new AI developments, the Pixel 8a and Android 15 at Google I/O 2024, but elsewhere we're very much in speculation territory. 

For instance, we could see the reveal of new hardware like the Pixel Watch 3, but don’t hold your breath. As our sister site Tom's Guide points out, the Pixel Watch 2 wasn’t announced at I/O 2023; it was instead unveiled during the Made by Google event in October.

Same goes for the Pixel Tablet 2. The company is probably holding onto that for another day. If anything, I/O 2024 will feature smaller changes to other Google products. New Workspace tools, new Android 14 features, things of that nature. Nothing too crazy. It’s going to be Gemini’s day in the sun.

Online registration for the event is open and free for everyone. It lets you stay up to date on the schedule and the content that will be shown. Be aware that registering requires you to create a Google developer profile, though.

The event is still two months away. In the meantime, check out TechRadar's list of the best Pixel phones for 2024.


Apple Vision Pro update makes Personas less creepy and can take the creation process out of your hands

I finally look slightly less creepy in my Apple Vision Pro mixed reality headset. Oh, no, I don't mean I look less like an oddball when I wear it, but if you happen to call me on FaceTime, you'll probably find my custom Persona – digital Lance – a little less weird.

While Apple Vision Pro hasn't been on the market very long and the $3,499 headset is not owned in iPhone numbers (think tens of thousands, not millions), this first big visionOS update is important.

I found it under Settings when I donned the headset for the first time in a week (yes, it's true, I don't find myself using the Vision Pro as often as I would my pocketable iPhone) and quickly accepted the update. It took around 15 minutes for the download and installation to complete.

visionOS 1.1 adds, among other things, enterprise-level Mobile Device Management (MDM) controls, closed captions and virtual keyboard improvements, enhanced Home View control, and the aforementioned Persona improvements.

I didn't test all of these features, but I couldn't wait to try out the updated Personas. Despite the update, Personas remain a “beta” feature. visionOS 1.1 improves the quality of Personas and adds a hands-free creation option.

Before we start, here's a look at my old Vision Pro Persona. Don't look away.

My original Persona (Image credit: Future)

Personas are Vision Pro's digital versions of you that you can use in video conference calls on FaceTime and other supported platforms. The 3D image is not a video feed of your face. Instead, Vision Pro creates this digital simulacrum based on a Spatial Photography capture of your face. Even the glasses I have on my Persona are not real.

During my initial Vision Pro review, I followed Apple's in-headset instructions and held the Vision Pro in front of my face with the shiny glass front facing me. Vision Pro's voice guidance told me to slowly look left, right, up, and down, and to make a few facial expressions. All this lets the stereo cameras capture a 3D image map of my face.

Because there are also cameras inside the headset to track my eyes (and eyebrows) and a pair of cameras on the outside of the headset that point down at my face and hands, the Vision Pro can, based on how I move my face (and hands), manipulate my digital Persona like a puppet.

There's some agreement that Apple Vision Pro Personas look a lot like us but also ride the line between reality and the awful, uncanny valley. This update is ostensibly designed to help with that.

Apple Vision Pro 1-1 update

Scanning my face for my new Persona using the hands-free mode. (Image credit: Future)

Apple, though, added a new wrinkle to the process. Now I could capture my Persona “hands-free”, which sounds great but means putting the Vision Pro on a table or shelf and then positioning yourself in front of the headset. Good luck finding a platform that's at the exact right height. I used a shelf in our home office but had to crouch down to get my face to where Vision Pro could properly read it. On the other hand, I didn't have to hold the 600g headset up in front of my face. Hand capture still happens while you're wearing the headset.

My new visionOS 1.1 hands-free Persona (Image credit: Future)

It took a minute or so for Vision Pro to build my new Persona (see above). The result looks a lot like me and is, in my estimation, less creepy. It still matches my expressions and hand movements almost perfectly. Where my original Persona looked like it lacked a soul, this one has more warmth. I also noticed that the capture appears more expansive. My ears and bald head look a little more complete and I can see more of my clothing. I feel like a full-body scan and total Persona won't be far behind.

This by itself makes the visionOS 1.1 update worthwhile.

visionOS 1.1 lets you remove system apps from the Home View (Image credit: Future)

Other useful feature updates include the ability to remove system apps from the Home View. To do so, I looked at an app, in this case, Files, and pinched my thumb and forefinger together until the “Remove App” message appeared.

Apple also says it updated the virtual keyboard. In my initial review, I found this keyboard to be one of the weakest Vision Pro features. It's really hard to type accurately on the floating screen, and you can only use two fingers at a time. My accuracy was terrible. In the update, accuracy and the AI that guesses what you intended to type both appear somewhat improved.

Overall, it's nice to see Apple moving quickly to roll out features and updates to its powerful spatial computing platform. I'm not sure hands-free spatial scanning is truly useful, but I can report that my digital persona will no longer send you screaming from the room.


Google Gemini’s new Calendar capabilities take it one step closer to being your ultimate personal assistant

Google’s new family of generative artificial intelligence (AI) models, Gemini, can now access events scheduled in Google Calendar on Android phones.

According to 9to5Google, Calendar events were on the “things to fix ASAP” list from Jack Krawczyk, Google’s Senior Director of Product Management for Gemini Experiences – a list of what Google would be working on to make Gemini a better-equipped digital assistant.

Users who have the Gemini app on an Android device can now expect Gemini to respond to voice or text prompts like “Show me my calendar” and “Do I have any upcoming calendar events?” When 9to5Google tried this the week before, Gemini responded that it couldn’t fulfill those types of requests – which was particularly noticeable, as such requests are commonplace with rival (non-AI) digital assistants such as Siri or Google Assistant. However, when the same prompts were attempted this week, Gemini opened the Google Calendar app and fulfilled the requests. It seems that if you’d like to enter a new event using Gemini, you need to tell it something like “Add an event to my calendar,” and it should then prompt you to fill out the details using voice commands.

Google Calendar

(Image credit: Shutterstock)

Going all in on Gemini

Google is clearly making progress in setting up Gemini as its proprietary all-in-one AI offering (including as a digital assistant, eventually replacing Google Assistant). It has quite a few steps to take before it manages that, with users asking for features like the ability to play music or edit their shopping lists via Gemini. Another significant hurdle for Gemini to clear if it wants to become popular is that it’s only available in the United States for now.

The race to become the best AI assistant has gotten a little more intense recently between Microsoft with Copilot, Google with Gemini, and Amazon with Alexa. Google did recently make some pretty big strides in compressing its larger Gemini models so they can run on mobile devices. These more complex models sound like they could give Gemini’s capabilities a major boost. Google Assistant is already widely recognized, and this is another feather in Google’s cap. I feel hesitant about placing a bet on any single one of these digital AI assistants, but if Google continues at this pace with Gemini, I think its chances are pretty good.


TikTok is now on Apple Vision Pro, ready to take over your view and eat up your gestures

TikTok has had a big impact on the world of music since it was launched back in 2016, and now it’s set to make its presence felt in the world of VR with a new native app for the Apple Vision Pro. Is there anything that TikTok can’t do?

In January, Ahmad Zahran, Product Leader at TikTok, revealed that a Vision Pro app was in the works, saying his team had “designed a new TikTok experience for the Apple Vision Pro”. Its reimagined interface takes you out of TikTok in Safari – which used to be the only way to access the platform on the Vision Pro – and into a new app version that’s designed for the Vision Pro’s visionOS platform and takes full advantage of the headset’s visual layout. 

Similar to the design of its iOS and Android apps, TikTok for visionOS has a vertical layout and includes the usual ‘Like’, ‘Comment’, ‘Share’, and ‘Favorite’ icons. What sets TikTok’s visionOS app apart from its iOS and Android versions is its expanded interface designed for the Vision Pro’s widescreen view.

TikTok user interface on Apple Vision Pro

(Image credit: TikTok)

When you tap the icons in the navigation bar, they appear as floating panes to the right of your ‘For You’ page without interrupting the main video display, giving you a better view of comment sections and creator profiles. Better yet, the app is also compatible with Vision Pro’s Shared Space tool, allowing you to move TikTok to a different space in your headset view so that you can open other apps.

If you really want to reap the benefits of using TikTok in the Vision Pro, you can immerse yourself even further by viewing content in the headset’s integrated virtual environments – so you could enjoy your favorite clips on the surface of the Moon if that’s your thing. 

If you thought TikTok was ubiquitous and immersive now, just wait – it’s already far too easy to get lost in the endless feed you’re presented with on your phone, never mind having it take over the majority of your central view in a headset.

There is one thing missing from the TikTok Vision Pro app: the ability to capture and create new videos. 

TikTok has also beaten Netflix and YouTube to the punch by arriving on the Vision Pro. While Netflix has no plans to launch a Vision Pro app right now, and YouTube’s own app is also absent, the recently released third-party app Juno lets you browse YouTube videos on Apple’s ‘latest and greatest device’.



Here’s how Apple is planning to take on ChatGPT

Apple may be lagging behind when it comes to generative AI tools such as ChatGPT and Google Bard, but it seems determined to catch up as soon as possible – and we just got a better idea of exactly how it's going to do that.

According to The New York Times, Apple is hoping to strike deals with news publishers to get access to their archives of content. AI models developed by Apple could then be trained on the vast amounts of written material in those archives.

The report says that “multi-year deals” worth “at least $50 million” are on the table, although it sounds as though none of the negotiations has reached a conclusion yet. Apple, as you would expect, has refused to comment.

As per the NYT, the heavyweight publishers involved in the talks include Condé Nast (responsible for outlets such as Vogue and The New Yorker), IAC (which runs People, The Daily Beast and Better Homes and Gardens), and NBC News.

Copy rights and wrongs

These deal rumors highlight a core part of how Large Language Models (LLMs) like ChatGPT's GPT-4 and Bard's Gemini work: they analyze huge amounts of text in order to learn how to produce convincing sentences of their own.

AI companies have been rather circumspect about where they get the data their models are trained on, but a vast web scraping operation is no doubt involved somewhere. In other words, if you've written something that's on the internet, it's probably been used to help train an AI.

The likes of OpenAI have promised to defend businesses that use their AI models against copyright claims – a sure sign that these developers of artificial intelligence engines know they're not on the firmest ground when it comes to intellectual property issues.

To Apple's credit, it seems the company is attempting to compensate writers and publishers for the use of their articles, rather than just taking first and asking permission later. Expect to hear more from Apple on AI over the course of 2024.


Students take note: Windows 11 update reportedly has a bug that’s taking down Wi-Fi at universities

Windows 11 just received a new cumulative update, but apparently Microsoft’s round of patching for December introduces a big problem for some students.

Windows Latest highlights reports from a number of students who are readers of the tech site – and universities themselves – about patch KB5033375 breaking Wi-Fi networks on campus.

Apparently, this isn’t happening to everyone by any means, but it is a serious glitch for some Windows 11 users who can’t get online on their own laptops. As Brunel University London (UK), one of the affected unis, informs us, this isn’t happening with official university hardware, but with BYOD notebooks (possibly because admins have already side-stepped the issue on managed machines).

One theory from a system admin at a university, as Windows Latest points out, is that there may be a compatibility issue at play here (involving the Qualcomm QCA61x4a wireless adapter, and maybe others).

Another establishment to warn its students about the December update is the University of New Haven (Connecticut, US), which advises: “A recent Windows update released on 12/12/2023 has caused users to not be able to connect to the wireless networks. This update is known as KB5033375.”

Other reports have surfaced on Reddit, with students in European countries affected and the issue seemingly extending to other Qualcomm wireless adapters.


Analysis: Update removal seems to be the only way forward, for now

In fairness to the December update, it does contain some useful fixes, including the solution to a longstanding problem with File Explorer randomly popping up on the desktop.

However, if you’re at university, any potential plus points here are likely to be outweighed by the danger of not being able to get on Wi-Fi, which is a nasty problem indeed.

A commonality here seems to be Qualcomm components, and the above-mentioned Qualcomm QCA61x4a wireless adapter is a commonly used piece of hardware seen in notebooks such as the Microsoft Surface Laptop 3, Lenovo Yoga models, and many other laptops besides.

This problem also affects some business users, but for students, the only realistic way of resolving the bug is to uninstall the update, as the universities in question are recommending. (To do this, go to Windows Update in Settings and click to view your Update History – that shows all the updates installed, and you can remove KB5033375 from there.)
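If you'd rather script the removal – say, across several machines – here's a minimal sketch in Python that shells out to Windows' built-in wusa.exe utility to uninstall the same patch. Treat it as an illustration rather than an official fix: it assumes you're running it from an elevated (administrator) prompt, and a reboot may still be needed afterwards.

import subprocess

# Uninstall the problematic KB5033375 cumulative update using Windows'
# built-in Windows Update Standalone Installer (wusa.exe).
# Requires an elevated (administrator) prompt; a reboot may still be
# needed for the removal to fully take effect.
subprocess.run(
    ["wusa.exe", "/uninstall", "/kb:5033375", "/quiet", "/norestart"],
    check=True,
)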

Hopefully Microsoft is looking into this one, and we’ve contacted the software giant to check if there’s an investigation underway. We’ll update this article if we hear anything back as to what’s going on here.


AI might take a ‘winter break’ as GPT-4 Turbo apparently learns from us to wind down for the Holidays

It seems that GPT-4 Turbo – the most recent incarnation of the large language model (LLM) from OpenAI – winds down for the winter, just as many people are doing as December rolls onwards.

We all get those end-of-year Holiday season chill vibes (probably), and indeed that appears to be why GPT-4 Turbo – which Microsoft’s Copilot AI will soon be upgraded to – is acting in this manner.

As Wccftech highlighted, the interesting observation on the AI’s behavior was made by an LLM enthusiast, Rob Lynch, on X (formerly Twitter).


The claim is that GPT-4 Turbo produces shorter responses – to a statistically significant extent – when the AI believes that it’s December, as opposed to May (with the testing done by changing the date in the system prompt).

So, the tentative conclusion is that GPT-4 Turbo learns this behavior from us – an idea advanced by Ethan Mollick (an Associate Professor at the Wharton School of the University of Pennsylvania who specializes in AI).


Apparently GPT-4 Turbo is about 5% less productive if the AI thinks it’s the Holiday season. 
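If you're curious how such a test works, here's a minimal sketch using the official OpenAI Python library. The model name, prompt, and sample size below are our own illustrative assumptions, not Lynch's exact methodology, and you'd need far more samples (plus a proper statistical test) before drawing any real conclusion.

from statistics import mean

from openai import OpenAI  # official OpenAI Python library; requires an API key

client = OpenAI()

def average_reply_length(month: str, prompt: str, samples: int = 10) -> float:
    """Average completion length when the system prompt claims a given month."""
    lengths = []
    for _ in range(samples):
        completion = client.chat.completions.create(
            model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model
            messages=[
                {"role": "system", "content": f"Today's date is {month} 15, 2023."},
                {"role": "user", "content": prompt},
            ],
        )
        lengths.append(len(completion.choices[0].message.content))
    return mean(lengths)

task = "Write a step-by-step guide to backing up a laptop."
print("May average:", average_reply_length("May", task))
print("December average:", average_reply_length("December", task))

The idea is simply to hold everything constant except the date the model believes it is, then see whether the December answers come back measurably shorter.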


Analysis: Winter break hypothesis

This is known as the ‘AI winter break hypothesis’, and it’s an area worth exploring further.

What it goes to show is how an AI can pick up unintended influences that we wouldn’t dream of considering – although some researchers obviously did consider this one, and then tested it. Still, you get what we mean – and there’s a whole lot of worry around these kinds of unexpected developments.

As AI progresses, its influences, and the direction that the tech takes itself in, need careful watching over, hence all the talk of safeguards for AI being vital.

We’re rushing ahead with developing AI – or rather, the likes of OpenAI (GPT), Microsoft (Copilot), and Google (Bard) certainly are – caught up in a tech arms race, with most of the focus on driving progress as hard as possible and safeguards treated more as an afterthought. There’s an obvious danger therein, which one word sums up nicely: Skynet.

At any rate, regarding this specific experiment, it’s just one piece of evidence that the winter break theory is true for GPT-4 Turbo, and Lynch has urged others to get in touch if they can reproduce the results – and we do have one report of a successful reproduction so far. Still, that’s not enough for a concrete conclusion yet – watch this space, we guess.

As mentioned above, Microsoft is currently upgrading its Copilot AI from GPT-4 to GPT-4 Turbo, which promises to be more accurate and to offer higher-quality responses in general. Google, meanwhile, is far from standing still with its rival Bard AI, which is powered by its new LLM, Gemini.


Get ready to take your Bing AI chat from the desktop to mobile, without starting over

Microsoft is working on yet another sizable update to Bing AI, with this round going to mobile. The latest batch comes just weeks after a previous announcement of various desktop improvements, and we have a lot to cover.

Starting off with the Bing app itself, users will be able to add a Bing Chat widget to their “iOS or Android home screen”. This gives you direct access to the AI, with the option to either type your query into the text window or select the microphone icon to ask the question verbally. You can start fresh with a new chat or continue with an old one, as Microsoft is enabling the frequently requested “continuous conversations across platforms”. So now a conversation held with Bing on the desktop can continue on mobile devices and vice versa.

The Bing app update also sees the AI gaining support for more countries and languages, which opens it up to more people around the world. Unfortunately, a full list of the newly supported countries and languages wasn’t included in the post (although we did ask). Microsoft also claims it “improved the quality for non-English chats”, but the company didn’t provide any details on the level of improvement.

Expanding support

Moving on to the second app, SwiftKey will gain a Compose feature to help you write texts “according to the parameters you suggest”. These parameters include the subject matter, tone, length, and format, with the final one being useful for drafting emails. Of course, you can edit those drafts. Two new tones are being added to SwiftKey as well – Witty and Funny – bringing the total to six. So, if you want to have Bing create some eye-rolling dad jokes, you can (just be sure you use this power wisely). To top it all off, the AI-powered Translator on Android will be migrating over to iOS “within the next week” or so.

The Edge browser app is getting Contextual Chat, allowing users to ask Bing a question based on the content they’re viewing. The example given is that you can ask the AI which wine would pair best with a recipe you’re looking at, or have it write up a summary of an article. Learning will also be made a bit easier thanks to Selected Text Actions: highlighting a piece of text will open a conversation with Bing, where it will then explain that topic in detail, complete with “cited sources”.

And last but not least, every single group chat in Skype will have access to the generative AI. All you have to do is tag it by entering “@Bing” into a discussion. 

Bing AI update on mobile apps

(Image credit: Windows)

Availability

The release dates of all these features are all over the place, which is why we haven’t mentioned them until now. The Skype update, the SwiftKey Compose tool, and the Bing widget are releasing this week (the week of May 14, 2023). Next week, we’ll see continuous conversations alongside the Translator tool. Everything else is unknown beyond a vague “soon”.

We asked Microsoft if it could provide us with dates for the remaining features, plus a list of the newly supported countries and languages, and will update this story if we hear back.

While we have you, check out TechRadar’s list of the best AI tools for 2023 to see what the technology is capable of. It’s not just assistants or content generators.


Apple thinks it has the tools to take your SMB to the next level

Apple has announced that Apple Business Essentials, which launched in beta last year, is now available to all small businesses in the US.

The iPhone maker’s new service brings mobile device management, 24/7 Apple support and cloud storage from iCloud together into flexible subscription plans.

Apple Business Essentials is designed to support SMBs throughout the entire device management life cycle, from device setup to device upgrades, while also providing strong security, prioritized support, data storage and cloud backup. It begins with simple employee onboarding, which allows a small business to easily configure, deploy and manage the company’s products from anywhere.

Susan Prescott, Apple’s VP of enterprise and education marketing, provided further insight into the company’s complete solution for SMBs in a press release, saying:

“Apple has a deep and decades-long commitment to helping small businesses thrive. From dedicated business teams in our stores to the App Store Small Business Program, our goal is to help each company grow, compete, and succeed. We look forward to bringing Apple Business Essentials to even more small businesses to simplify device management, storage, support, and repairs. Using this new service leads to invaluable time savings for customers — including those without dedicated IT staff — that they can invest back into their business.”

Apple Business Essentials

One of the most useful features in Apple Business Essentials is Collections, which allows groups of apps to be delivered to employees or teams, while settings such as VPN configurations, Wi-Fi passwords and more can be automatically pushed to devices.

To get started, employees simply need to sign in to their work account on their iPhone, iPad or Mac using a Managed Apple ID. Once this is done, they will have access to everything they need to be productive including the new Apple Business Essentials app from where they can download their organization’s work apps.

Managed Apple IDs for employees can be created by federating with Microsoft Azure Active Directory and, later this spring, with Google Workspace identity services. This allows employees to log in to their business laptops using a single business username and password.

Apple Business Essentials also works with both company-provided and personal devices, and with Apple’s User Enrollment feature, employees’ personal information stays private and cryptographically separated from work data.

In addition to Apple Business Essentials, Apple has announced the launch of AppleCare+ for Business Essentials, which provides organizations with 24/7 access to phone support and up to two device repairs per plan per year, by individual, group or device. Employees can initiate repairs directly from the Apple Business Essentials app, and an Apple-trained technician will come onsite in as little as four hours to get their devices back up and running.

Apple Business Essentials, with up to 2TB of iCloud storage, starts at $2.99 per month after a two-month free trial, while plans for AppleCare+ for Business Essentials start at $9.99 per month.
