Excited about Apple Intelligence? Apple’s software chief Craig Federighi certainly is, and he’s explained why it’ll set a cutting-edge standard for AI security and privacy

Reactions to Apple Intelligence, which Apple unveiled at WWDC 2024, have ranged from curious to positive to underwhelmed. Whatever your views on the technology itself, though, a big talking point has been Apple’s emphasis on privacy – a contrast with some companies that have been offering generative AI products for some time.

Apple is putting privacy front and center with its AI offering, and has been keen to talk about how Apple Intelligence – which will be integrated across iOS 18, iPadOS 18, and macOS Sequoia – will differ from its competitors by taking a fresh approach to handling user information.

Craig Federighi, Apple’s Senior Vice President of Software Engineering, and the main presenter of the WWDC keynote, has been sharing more details about Apple Intelligence, and the company’s privacy-first approach.

Speaking to Fast Company, Federighi explained more about Apple’s overall AI ambitions, confirming that Apple is in agreement with other big tech companies that generative AI is the next big thing – as big a thing as the internet or microprocessors were when they first came about – and that we’re at the beginning of generative AI’s evolution. 

WWDC

(Image credit: Apple)

Apple's commitment to AI privacy

Federighi told Fast Company that Apple is aiming to “establish an entirely different bar” to other AI services and products when it comes to privacy. He reinforced the messaging in the WWDC keynote that the personal aspect of Apple Intelligence is foundational to it and that users’ information will be under their control. He also reiterated that Apple wouldn’t be able to access your information, even while its data centers are processing it. 

The practical measures that Apple is taking to achieve this begin with its lineup of Apple M-series processors, which it claims will be able to run and process many AI tasks on-device, meaning your data won’t have to leave your system. For times when that local processing power is insufficient, the task at hand will be sent to dedicated custom-built Apple servers utilizing Private Cloud Compute (PCC), offering far more grunt for requests that need it – while being more secure than other cloud products in the same vein, Apple claims.

In practice, this means your device will send only the minimum information required to process a request, and Apple claims its servers are designed so that it’s impossible for them to store your data: after a request is processed and the result returned to your device, the information is ‘cryptographically destroyed’ and is never seen by anyone at Apple.
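Apple hasn’t shared client code for any of this, but the flow it describes can be sketched at a high level. Below is a minimal, hypothetical illustration of the routing logic – every name here is a stand-in, not one of Apple’s APIs:

```python
# Hypothetical sketch of the flow Apple describes: handle the request
# on-device where possible, otherwise send only a minimal payload to
# Private Cloud Compute (PCC). All names are illustrative stand-ins.

from dataclasses import dataclass, field

@dataclass
class AIRequest:
    prompt: str
    personal_context: dict = field(default_factory=dict)  # never leaves the device

def runs_on_device(request: AIRequest) -> bool:
    # Placeholder heuristic standing in for Apple's real capability check.
    return len(request.prompt) < 500

def run_local_model(request: AIRequest) -> str:
    # Stub for the on-device model running on Apple silicon.
    return f"[on-device answer to: {request.prompt}]"

def send_to_pcc(payload: dict) -> str:
    # Stub for the encrypted call to a PCC server; per Apple, the server
    # cryptographically destroys the request data once it has responded.
    return f"[PCC answer to: {payload['prompt']}]"

def handle(request: AIRequest) -> str:
    if runs_on_device(request):
        return run_local_model(request)   # data stays on your system
    payload = {"prompt": request.prompt}  # only the minimum required info is sent
    return send_to_pcc(payload)

print(handle(AIRequest("Summarize my notes from today.")))
```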

Apple has published an in-depth security research blog post going into more detail about PCC which, as noted at WWDC 2024, will be open to independent security researchers, who can access Apple Intelligence servers in order to verify Apple’s privacy and security claims.

Apple wants AI to feel like a natural, almost unnoticeable part of its software, and the tech giant is clearly keen to win the trust of those who use its products and to differentiate its take on AI compared with that of rivals. 

WWDC presentation

(Image credit: Apple)

More about ChatGPT and Apple Intelligence in China

Federighi also talked about Apple’s new partnership with OpenAI and the integration of ChatGPT into its operating systems. The aim is to give users access to industry-leading models, while reassuring them that ChatGPT isn’t what powers Apple Intelligence: that is driven exclusively by Apple’s own large language models (LLMs) and remains entirely separate on Apple’s platforms, though you will be able to enlist ChatGPT for more complex requests.

ChatGPT is only ever invoked at the user’s request and with their permission – before any request is sent to ChatGPT, you’ll have to explicitly confirm that you want to do so. Apple teamed up with OpenAI to give users this option because, according to Federighi, GPT-4o is “currently the best LLM out there for broad world knowledge.”
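Apple hasn’t described how that gate is implemented, but conceptually it’s a simple consent check in front of the hand-off. A hypothetical sketch (illustrative names only, not Apple’s actual APIs):

```python
# Hypothetical sketch of the explicit-consent gate described above:
# nothing is sent to ChatGPT unless the user confirms it first.

def call_chatgpt(prompt: str) -> str:
    # Stub standing in for the actual ChatGPT hand-off.
    return f"[ChatGPT response to: {prompt}]"

def user_confirms(prompt: str) -> bool:
    answer = input(f'Send "{prompt}" to ChatGPT? [y/N] ')
    return answer.strip().lower() == "y"

def handle_complex_request(prompt: str) -> str:
    if user_confirms(prompt):
        return call_chatgpt(prompt)
    return "Kept on Apple Intelligence; nothing was sent to ChatGPT."
```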

Apple is also considering expanding this concept to include other LLM makers in the future so that you might be able to choose from a variety of LLMs for your more demanding requests. 

Federighi also talked about Apple’s plans for Apple Intelligence in China – the company’s second-biggest market – and how it’s working to comply with regulations in the country so it can bring its most cutting-edge capabilities to all customers. This process is underway, but may take a while, as Federighi observed: “We don’t have timing to announce right now, but it’s certainly something we want to do.”

We’ll have to see how Apple Intelligence performs in practice, and if Apple’s privacy-first approach pays off. Apple has a strong track record when it comes to designing products and services that integrate so seamlessly that they become a part of our everyday lives, and it might very well be on track to continue building that reputation with Apple Intelligence.


Google explains why AI Overviews couldn’t understand a joke and told users to eat one rock a day – and promises it’ll get better

If you’ve been keeping up with the latest developments in the area of generative AI, you may have seen that Google has stepped up the rollout of its ‘AI Overviews’ section in Google Search to all of the US.

At Google I/O 2024, held on May 14, Google confidently presented AI Overviews as the next big thing in Search, one that it expected to wow users. But when the feature finally began rolling out the following week, it received a less than enthusiastic response, mainly due to AI Overviews returning peculiar and outright wrong information. Now, Google has responded by explaining what happened and why AI Overviews performed the way it did (according to Google, at least).

The feature was intended to bring more complex and better-verbalized answers to user queries, synthesizing a pool of relevant information and distilling it into a few convenient paragraphs. This summary would then be followed by the familiar list of blue links, with brief descriptions of each website.

Unfortunately for Google, screenshots of AI Overviews providing strange, nonsensical, and downright wrong information started circulating on social media shortly after the rollout. Google has since pulled the feature and published a post on its ‘Keyword’ blog explaining why AI Overviews behaved this way – while being quick to point out that many of these screenshots were faked.

What AI Overviews were intended to be

Keynote speech at Google I/O 2024

(Image credit: Future)

In the blog post, Google first explains that AI Overviews was designed to collect and present information that you would otherwise have to dig for across multiple searches, and to prominently include links crediting where the information comes from, so you could easily follow up from the summary.

According to Google, this isn’t just its large language models (LLMs) assembling convincing-sounding responses from existing training data. AI Overviews is powered by a custom language model integrated with Google’s core web ranking systems, which carry out the searches and feed relevant, high-quality information into the summary. Accuracy is one of the cornerstones Google prides itself on in search, the company notes, saying it built AI Overviews to show information sourced only from the web results it deems the best.

This means that AI Overviews is generally supposed to hallucinate less than other LLM products, and if things go wrong, it’s probably for a reason Google also faces in regular search – the company cites “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”
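Google doesn’t publish AI Overviews’ internals, but the grounding step it describes follows a familiar retrieval-augmented pattern: the model only summarizes text pulled from top-ranked results, and the summary carries links back to those sources. A minimal sketch, with stand-in functions for both the ranking system and the language model:

```python
# Minimal sketch of retrieval-grounded summarization in the style Google
# describes. Both helper functions are stand-ins, not Google's systems.

def top_ranked_results(query: str, k: int = 5) -> list[dict]:
    # Stand-in for Google's core web ranking systems.
    return [{"url": f"https://example.com/result-{i}",
             "text": f"High-quality snippet {i} about {query}."}
            for i in range(k)]

def summarize(query: str, sources: list[dict]) -> str:
    # Stand-in for the custom language model: it sees only the retrieved
    # snippets, rather than answering freely from its training data.
    context = "\n".join(s["text"] for s in sources)
    return f"Summary for '{query}', grounded in:\n{context}"

def ai_overview(query: str) -> dict:
    sources = top_ranked_results(query)
    return {
        "summary": summarize(query, sources),
        "links": [s["url"] for s in sources],  # prominent source credits
    }

print(ai_overview("how to revive a wilting spider plant"))
```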

What actually happened during the rollout


Google goes on to state that AI Overviews was optimized for accuracy and tested extensively before its wider rollout, but despite these seemingly robust testing efforts, Google does admit that’s not the same as having millions of people trying out the feature with a flood of novel searches. It also points out that some people were trying to provoke its search engine into producing nonsensical AI Overviews by carrying out ridiculous searches. 

I find this part of Google’s explanation a bit odd: I’d imagine that when building a feature like AI Overviews, the company would appreciate that folks are likely to try to break it, or send it off the rails somehow, and that it should therefore be designed to take silly or nonsensical searches in its stride.

At any rate, Google then goes on to call out fake screenshots of some of the nonsensical and humorous AI Overviews that made their way around the web, which is fair enough, I think. It reminds us that we shouldn’t believe everything we see online, of course – although the faked screenshots looked pretty convincing unless you scrutinized them closely (and all of this underscores the need to double-check AI-generated content anyway).

Google does admit, though, that sometimes AI Overviews did produce some odd, inaccurate, or unhelpful responses. It elaborates by explaining that there are multiple reasons why these happened, and that this whole episode has highlighted specific areas where AI Overviews could be improved.

The tech company further observes that these questionable AI Overviews tended to appear for queries that are rarely made. A Threads user, @crumbler, posted an AI Overviews screenshot that went viral after they asked Google: “how many rocks should i eat?” This returned an AI Overview recommending eating at least one small rock per day. Google’s explanation is that before this screenshot circulated online, the question had rarely been asked in Search (which is believable enough).

A screenshot of an AI Overview recommending that humans should eat one small rock a day

(Image credit: Google/@crumbler on Threads)

Google goes on to explain that there isn’t a lot of quality source material to answer that question seriously, either, calling such instances a “data void” or “information gap.” Additionally, in the case of the query above, some of the only content available was satirical in nature, and it was linked in earnest as one of the only websites that addressed the query.

Other nonsensical and silly AI Overviews pulled details from sarcastic or humorous content sources, as well as troll posts on discussion forums.

Google's next steps and the future of AI Overviews

When explaining what it’s doing to fix and improve AI Overviews, or any part of its Search results, Google notes that it doesn’t go through Search results pages one by one. Instead, the company tries to implement updates that affect whole sets of queries, including possible future queries. Google claims that it’s been able to identify patterns when analyzing the instances where AI Overviews got things wrong, and that it’s put in a whole set of new measures to continue to improve the feature.

You can check out the full list in Google’s post, but the measures include better detection of nonsensical queries designed to provoke a weird AI Overview, and the search giant is also looking to limit the inclusion of satirical or humorous content.
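Google doesn’t detail how these detection measures work, but conceptually they amount to gating the feature on query and source quality. A toy sketch – the classifier, blocklist, and threshold below are all invented for illustration:

```python
# Toy sketch of the gating Google describes: skip the AI Overview for
# nonsense-bait queries and for topics without enough credible sources.
# The helper logic and thresholds are invented for illustration.

SATIRE_DOMAINS = {"theonion.com"}  # example entry only

def looks_nonsensical(query: str) -> bool:
    # Stand-in for a trained classifier that flags provocation attempts.
    return "how many rocks should i eat" in query.lower()

def credible_sources(urls: list[str]) -> list[str]:
    return [u for u in urls if not any(d in u for d in SATIRE_DOMAINS)]

def should_show_overview(query: str, urls: list[str]) -> bool:
    if looks_nonsensical(query):
        return False  # fall back to the plain list of blue links
    # Require multiple credible sources to avoid "data voids".
    return len(credible_sources(urls)) >= 2
```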

Along with the new measures to improve AI Overviews, Google states that it’s been monitoring user feedback and external reports, and that it’s taken action on a small number of summaries that violate Google’s content policies. This happens pretty rarely – in less than one in seven million unique queries, according to Google – and it’s being addressed.

The final reason Google gives for why AI Overviews performed this way is just the sheer scale of the billions of queries that are performed in Search every day. I can’t say I fault Google for that, and I would hope it ramps up the testing it does on AI Overviews even as the feature continues to be developed.

As for AI Overviews not understanding sarcasm, this sounds like a cop-out at first, but sarcasm and humor are nuances of human communication that I can imagine are hard to account for. Comedy is a whole art form in itself, and this is going to be a very thorny and difficult area to navigate. So, I can understand that this is a major undertaking; but if Google wants to maintain a reputation for accuracy while pushing out this new feature, it’s something that’ll need to be dealt with.

We’ll just have to see how Google’s AI Overviews perform when they are reintroduced – and you can bet there’ll be lots of people watching keenly (and firing up yet more ridiculous searches in an effort to get that viral screenshot).


Meta Quest 3 Lite leak suggests it’ll pack the Quest 3’s brain into the Quest 2’s body

Meta may still be keeping schtum about the Meta Quest 3 Lite (or the Quest 3s, as some rumors are calling it), but that hasn’t stopped leaks from seeping out into the public sphere. The latest info dump tells us seemingly everything about the budget-friendly hardware’s technical specifications.

These latest details come via @Lunayian on Twitter, who claims to “have seen multiple devkits and spoken to several people familiar with the device.” They also include an infographic that outlines the details they “feel comfortable sharing.”


In many ways this Meta Quest 3 alternative shares a lot of similarities with the Quest 3 itself. Chiefly, it boasts a Snapdragon XR2 Gen 2 chipset from Qualcomm, the same tracking-ring-less controllers, and the same two 4MP RGB passthrough cameras for full-color mixed reality.

But you would notice some downgrades borrowed from the Quest 2. These include the screen resolution, which is just 1,832 x 1,920 pixels per eye rather than the Quest 3’s 2,064 x 2,208 pixels; a bulky Fresnel lens system instead of the Quest 3’s slimmer pancake lenses; and, rather than gradual IPD (interpupillary distance) adjustment, a return to the Quest 2’s three set positions.

Basically, this leak suggests the Quest 3 Lite has the Quest 3’s brain, and the Quest 2’s body.

The Oculus Quest 2 headset sat on top of its box and next to its controllers

The Quest 2’s bulk could make a comeback (Image credit: Shutterstock / agencies)

One key detail we’re still missing is the price. 

According to previous leaks the Quest 3 Lite will be cheaper than the Meta Quest 3 – something supported by the specs leaked here – but it’s unclear exactly how much it will cost. 

Adopting the Oculus Quest 2’s launch price of $299 / £299 / AU$479 seems most likely, but given the Quest 3 Lite offers most of the Quest 3’s upgrades, we wouldn’t be shocked if the Lite landed somewhere around $399 / £399 / AU$639 – in between the Quest 2 and Quest 3 launch prices (the Quest 3 costs $499 / £479 / AU$799).

One thing we can say with some confidence is that the Quest 2’s current $199.99 / £199.99 / AU$359.99 price is almost certainly far too low for this rumored upcoming model – so if you’re after a super-cheap VR headset, the Quest 2 might be your best bet while it’s available. Although, given we’re starting to see more and more Quest 3 exclusives, it might not be the best long-term buy.

Wait before you buy a Quest 2

As we always recommend, you should take this rumor with a pinch of salt. Until Meta announces the Quest 3 Lite, Quest 3s, or whatever it chooses to call it, we don’t know when or if this budget-friendly VR headset will launch.

But it seems very likely that something is on the way – and I have a feeling we might see it soon, as Meta usually hosts a June gaming showcase, which could be the perfect place to announce this new device.

If you’re looking to buy one of the best VR headsets, I'd recommend waiting – unless you’re dead set on getting a Meta Quest 3. That’s doubly true if the headset you have your sights set on is the Quest 2 as this Lite model looks set to beat it in the most important ways and hopefully won’t break the bank either.


Google explains how Gemini’s AI image generation went wrong, and how it’ll fix it

A few weeks ago Google launched a new image generation tool for Gemini (the suite of AI tools formerly known as Bard and Duet) which allowed users to generate all sorts of images from simple text prompts. Unfortunately, Google’s AI tool repeatedly missed the mark and generated inaccurate and even offensive images that led a lot of us to wonder – how did the bot get things so wrong? Well, the company has finally released a statement explaining what went wrong, and how it plans to fix Gemini. 

The official blog post addressing the issue states that when designing the text-to-image feature for Gemini, the team behind Gemini wanted to “ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people.” The post further explains that users probably don’t want to keep seeing people of just one ethnicity or other prominent characteristic. 

So, to offer a pretty basic explanation of what’s been going on: Gemini was throwing up images of people of color when prompted to generate images of white historical figures, giving users ‘diverse Nazis’, or simply ignoring the part of a prompt where you’d specified exactly what you were looking for. Gemini’s image capabilities are currently on hold, but when the feature was live you could specify exactly who you were trying to generate – Google uses the example “a white veterinarian with a dog” – and Gemini would seemingly ignore the first half of that prompt, generating veterinarians of all races except the one you asked for.

Google went on to explain that this was the outcome of two crucial failings. First, Gemini’s tuning to ensure it showed a range of people failed to account for cases that clearly should not show a range. Second, in trying to make a more conscious, less biased generative AI, Google admits the “model became way more cautious than we intended and refused to answer certain prompts entirely – wrongly interpreting some very anodyne prompts as sensitive.”

So, what's next?

At the time of writing, the ability to generate images of people on Gemini has been paused while the Gemini team works to fix the inaccuracies and carry out further testing. The blog post notes that AI ‘hallucinations’ are nothing new when it comes to complex deep learning models – even Bard and ChatGPT had some questionable tantrums as the creators of those bots worked out the kinks. 

The post ends with a promise from Google to keep working on Gemini’s AI-powered people generation until everything is sorted, with the note that while the team can’t promise it won’t ever generate “embarrassing, inaccurate or offensive results”, action is being taken to make sure it happens as little as possible. 

All in all, this whole episode puts into perspective that AI is only as smart as we make it. Our editor-in-chief Lance Ulanoff succinctly noted that “When an AI doesn't know history, you can't blame the AI.” With how quickly artificial intelligence has swooped in and crammed itself into various facets of our daily lives – whether we want it or not – it’s easy to forget that the public proliferation of AI started just 18 months ago. As impressive as the tools currently available to us are, we’re ultimately still in the early days of artificial intelligence. 

We can’t rain on Google Gemini’s parade just because its mistakes were more visually striking than, say, ChatGPT’s recent gibberish-filled meltdown. Google’s temporary pause and reworking will ultimately lead to a better product, and sooner or later we’ll see the tool as it was meant to be.


Assistant with Bard video may show how it’ll work and when it could land on your Pixel

New footage of Google’s Assistant with Bard has leaked, demonstrating how the digital AI helper could work at launch.

Nail Sadykov posted the video on X (the platform formerly known as Twitter) after discovering the feature in the Pixel Tips app. Apparently, Google accidentally spilled the beans on its own tech, so it’s probably safe to say this is legitimate. It looks like something you’d see in one of the company’s Keyword posts explaining a feature in detail, except there’s no audio.

There will, based on the clip, be two ways to activate Assistant with Bard: either by tapping the Bard app and saying “Hey Google”, or by pressing and holding the power button. A multimodal input box rises from the bottom, where you can type a text prompt, upload photos, or speak a command. The demo only shows the second method: someone takes a picture of a wilting plant and then verbally asks for advice on how to save it.


A few seconds later, Assistant with Bard manages to correctly identify the plant in the image (it’s a spider plant, by the way) and generates a wall of text explaining what can be done to revitalize it. It even links to several YouTube videos at the end.

Assistant with Bard is something of a badly kept secret. It was originally announced back in October 2023 but has since seen multiple leaks, the biggest of which occurred in early January, revealing much of the user experience as well as “various in-development features.” What’s been missing up to this point is news on whether or not Assistant with Bard will have any sort of limitations. As it turns out, there may be a few restrictions.

Assistant Limitations

Mishaal Rahman, another industry insider, dove into Pixel Tips searching for more information on the update. He claims Assistant with Bard will only appear on single-screen Pixel smartphones powered by a Tensor chip. This includes the Pixel 6, Pixel 7, and Pixel 8 lines. Older models will not receive the upgrade and neither will the Pixel Tablet, Pixel Fold, or the “rumored Pixel Fold 2”.

Additionally, mobile devices must be running the Android 14 QPR2 beta “or the upcoming stable QPR2 release”, although it’s most likely going to be the latter. Rahman states he found a publication date in the Pixel Tips app hinting at a March 2024 release. It’s worth pointing out that March is also the expected launch window for Android 14 QPR2 and the next Feature Drop for Pixel phones.

There’s no word on whether other Android devices will receive Assistant with Bard; it seems it’ll be exclusive to Pixel for the moment. We could see the update arrive elsewhere, although considering that key brands like Samsung prefer having their own AI, an Assistant with Bard expansion seems unlikely. But we could be wrong.

Until we learn more, check out TechRadar's list of the best Pixel phones for 2024.


Google Gemini is its most powerful AI brain so far – and it’ll change the way you use Google

Google has announced the new Gemini artificial intelligence (AI) model, an AI system that will power a host of the company’s products, from the Google Bard chatbot to its Pixel phones. The company calls Gemini “the most capable and general model we’ve ever built,” claiming it would make AI “more helpful for everyone.”

Gemini will come in three 'sizes': Ultra, Pro and Nano, with each one designed for different uses. All of them will be multimodal, meaning they’ll be able to handle a wide range of inputs, with Google saying that Gemini can take text, code, audio, images and video as prompts.
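Gemini would also become accessible to developers via Google’s google-generativeai Python SDK. As a hedged sketch of what a multimodal prompt looks like there – the API key and image path are placeholders, and developer access wasn’t part of this announcement:

```python
# Sketch of a multimodal Gemini prompt using Google's google-generativeai
# Python SDK (pip install google-generativeai pillow). The API key and
# image file below are placeholders.

import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-pro-vision")  # the multimodal variant
drawing = Image.open("drawing.png")

# Text and an image combined in a single prompt.
response = model.generate_content(["What is being drawn here?", drawing])
print(response.text)
```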

While Gemini Ultra is designed for extremely demanding use cases such as in data centers, Gemini Nano will fit in your smartphone, raising the prospect of the best Android smartphones gaining a significant AI advantage.

With all of this new power, Google insists that it conducted “rigorous testing” to identify and prevent harmful results arising from people’s use of Gemini. That was challenging, the company said, because the multimodal nature of Gemini means two seemingly innocuous inputs (such as text and an image) can be combined to create something offensive or dangerous.

Coming to all your services and devices

Google has been under pressure to catch up with OpenAI’s ChatGPT and its advanced AI capabilities. Just a few days ago, in fact, news was circulating that Google had delayed its Gemini announcement until next year due to its apparent poor performance in a variety of languages. 

Now, it turns out that news was either wrong or Google is pressing ahead despite Gemini’s rumored imperfections. On this point, it’s notable that Gemini will only work in English at first.

What does Gemini mean for you? Well, if you use a Pixel 8 Pro phone, Google says it can now run Gemini Nano, bringing all of its AI capabilities to your pocket. According to a Google blog post, Gemini is found in two new Pixel 8 Pro features: Smart Reply in Gboard, which suggests message replies to you, and Summarize in Recorder, which can sum up your recorded conversations and presentations.

The Google Bard chatbot has also been updated to run Gemini, which the company says is “the biggest upgrade to Bard since it launched.” As well as that, Google says that “Gemini will be available in more of our products and services like Search, Ads, Chrome and Duet AI” in the coming months.

As part of the announcement, Google revealed a slate of Gemini demonstrations. These show the AI guessing what a user was drawing, playing music to match a drawing, and more.

Gemini vs ChatGPT

Google Gemini revealed at Google I/O 2023

(Image credit: Google)

It’s no secret that OpenAI’s ChatGPT has been the most dominant AI tool for months now, and Google wants to end that with Gemini. The company has made some pretty bold claims about its abilities, too.

For instance, Google says that Gemini Ultra’s performance exceeds current state-of-the-art results in “30 of the 32 widely-used academic benchmarks” used in large language model (LLM) research and development. In other words, Google thinks it eclipses GPT-4 in nearly every way.

Compared to the GPT-4 LLM that powers ChatGPT, Gemini came out on top in seven out of eight text-based benchmarks, Google claims. As for multimodal tests, Gemini won in all 10 benchmarks, as per Google’s comparison.

Does this mean there’s a new AI champion? That remains to be seen, and we’ll have to wait for more real-world testing from independent users. Still, what is clear is that Google is taking the AI fight very seriously. The ball is very much in OpenAI’s (and Microsoft's) court now.


Meta was late to the AI party – and now it’ll never beat ChatGPT

Meta – the tech titan formerly known as Facebook – desperately wants to take pole position at the forefront of AI research, but things aren’t exactly going to plan.

As reported by Gizmochina, Meta lost a third of its AI research staff in 2022, many of whom cited burnout or lack of faith in the company’s leadership as their reasons for departing. An internal survey from earlier this year showed that just 26% of employees expressed confidence in Meta’s direction as a business.

CEO Mark Zuckerberg hired French computer scientist and roboticist Yann LeCun to lead Meta’s AI efforts back in 2013, but in more recent times Meta has visibly struggled to keep up with the rapid pace of AI expansion demonstrated by competing platforms like ChatGPT and Google Bard. LeCun was notably not among the invitees to the White House’s recent Companies at the Frontier of Artificial Intelligence Innovation summit.

That’s not to say that Meta is failing completely in the AI sphere; recent reveals like a powerful AI music creator and a speech-generation tool too dangerous to release to the public show that the Facebook owner isn’t exactly sitting on its hands when it comes to AI development. So why is it still lagging behind?

Abstract artwork promoting Meta's new Voicebox AI.

Meta’s AI ‘Voicebox’ tool is almost terrifyingly powerful – so terrifying, in fact, that Meta isn’t releasing it to the public (Image credit: Meta)

Change of direction

The clue’s in the name: remember back in 2021, when the then-ubiquitous Facebook underwent a total rebrand to become ‘Meta’? At the time, it was supposed to herald a new era of technology, led by our reptilian overlord Zuckerberg. Enter the metaverse, he said, where your wildest dreams can come true.

Two years down the line, it’s become pretty clear that his Ready Player Zuck fantasies aren’t going to materialize; at least, not for quite a while. AI, on the other hand, really is the new technology frontier – but Meta’s previous obsession with the metaverse has left it on the back foot in the AI gold rush.

Even though Meta has now shifted to AI as its prime area of investment and has maintained an AI research department for years, it’s fair to say that the Facebook owner failed to capitalize on the AI boom late last year. According to Gizmochina, employees have been urging management to shift focus back towards generative AI, which fell by the wayside in favor of the company’s metaverse push.

Meta commentary

A female-presenting person works at her desk in Meta's Horizons VR

Meta’s virtual Horizon workspace was never going to take off, let’s be honest (Image credit: Meta)

Perhaps Meta is simply spread too thin. Back in February, Zuckerberg described 2023 as the company’s “year of efficiency” – a thin cover for Meta’s mass layoffs and project closures back in November 2022, which have seen internal morale fall to an all-time low. Meta is still trying to push ahead in the VR market with products like the Meta Quest Pro, and recently announced it would be releasing a Twitter rival, supposedly called ‘Threads’.

In any case, it seems that Meta might have simply missed the boat. ChatGPT and Microsoft’s Bing AI are already making huge waves in the wider public sphere, along with the best AI art generators such as Midjourney.

It’s hard to see where Meta’s AI projects will fit in the current lineup; perhaps Zuckerberg should just stick to social media instead. Or maybe we'll see Meta pull another hasty name-change to become 'MetAI' or something equally ridiculous. The possibilities are endless!


Grammarly’s ChatGPT upgrade won’t just improve your writing, it’ll do it for you

Grammarly will soon no longer just recommend ways for you to improve your writing, it’ll do the writing for you.

The writing assistant Grammarly already uses AI in several ways to help it act as a clever tool. Not only can it pick up common grammar and spelling mistakes, but it can also recommend ways to better structure your sentences, and can even tell you the tone your writing portrays (with adjectives like Formal, Confident, Accusatory, and Egocentric).

Come April, Grammarly will be taking its help a step further with the introduction of GrammarlyGo.

Built on OpenAI’s GPT-3 large language models (OpenAI is the team behind ChatGPT), GrammarlyGo will be able to perform a slew of different functions. If you have a document that’s already been written, GrammarlyGo will be able to edit it to convey a different tone, or change its length to make your writing clearer or more succinct. Alternatively, if you’re experiencing writer’s block, its ideation tools will supposedly help unlock your creativity by creating brainstorms and outlines based on prompts you provide.

The press release announcement says it won’t stop at outlines, either. GrammarlyGo will be able to compose whole documents for you, and it can even generate replies to emails based on the context of the conversation.

(Image credit: Grammarly)
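Grammarly hasn’t shared how its integration works under the hood, but a tone-and-length rewrite built on OpenAI’s models looks, in spirit, something like the sketch below. This uses OpenAI’s public Python SDK purely as an illustration – it is not Grammarly’s actual code:

```python
# Hedged sketch of a GPT-powered rewrite in the spirit of GrammarlyGo,
# using OpenAI's public Python SDK (pip install openai). Not Grammarly's
# actual integration; it expects OPENAI_API_KEY in the environment.

from openai import OpenAI

client = OpenAI()

def rewrite(text: str, tone: str = "confident", length: str = "shorter") -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text in a {tone} tone, "
                        f"making it {length} than the original."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(rewrite("We might possibly want to maybe consider shipping this soon."))
```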

We haven’t yet had a chance to try GrammarlyGo for ourselves, but we expect it’ll perform similarly to other ChatGPT alternatives we’ve tested. Specifically, we imagine it’ll show a lot of promise, but its compositions will almost certainly need to be proofread and tweaked by a human – especially while it’s still in beta. Even when given prompts to work with, we’ve found that AI writing bots can struggle to generate content that sounds authoritative. Sure, they can produce 400 words about, say, VR headsets, but the writing is often full of chaff and sprinkled with buzzwords, rather than feeling like it’s written by someone who understands the topic.

GrammarlyGo’s beta will launch in April (we don’t have an exact date yet) and will be available to all Grammarly Premium, Grammarly Business, and Grammarly for Education subscribers. It’ll also be accessible to people using the free version of Grammarly in the US, UK, Australia, Canada, and New Zealand.

It’s not just writing that OpenAI’s tech is helping to improve. Spotify has launched an AI DJ that can talk to you while mixing your favorite tracks, and Microsoft has incorporated ChatGPT into its search engine to create the impressive Bing Chat tool.


It’ll soon be easier to track down all your lost Google Workspace docs

Tracking down that elusive Google Docs or Sheets file could soon get a lot simpler thanks to a new search upgrade.

Google has revealed it’s adding a new setting to its search history tool specifically designed to find files created in its Google Workspace office software suite.

The new addition will hopefully be able to track down and display those hard-to-find files directly in your search history, removing a common headache for workers everywhere.

Google My Activity

The change is coming to the Google My Activity page, which contains all the details of your recent searches across both the web and Google’s own apps, such as YouTube.

Going forward, search data from Workspace apps will be contained in a new setting, which will allow users to see suggestions from their own search history.

Past searches can be rerun if necessary, and will cover the likes of Gmail, Google Drive, Calendar, and Currents, along with standalone services such as Google Cloud and Google Sites.

Google says it doesn’t use any of this data for targeted advertising, and deletes all search history data after 18 months (although auto-deletion can be set to 3, 18, or 36 months). Users can also amend, expand, or restrict the amount of data collected on them at any time.

The new setting will begin rolling out on March 29, and will be set to on by default. Users can disable it by heading to My Activity page > Other Google activity > Google Workspace search history.

The news comes shortly after Google unveiled a new look for Gmail that aims to combine several of the most popular Workspace apps in one window.

The approach looks to provide users with a one-stop shop for all their communication needs – whether via email, video conferencing, or just good old-fashioned instant messaging – without them having to open up extra tabs or windows.


Via 9to5Google
