Anthropic has upgraded its Claude generative AI assistant to be more useful in the office. Claude Pro and Claude Team subscribers can now better organize and track their work with the AI assistant thanks to the new Projects and Artifacts features.
Projects act as both a repository for task-related data and a place to interact with it. Users can upload all their documents, code, and other relevant material in one place; each project within Claude.ai includes a 200K context window, equivalent to a 500-page book. Users can then ask Claude about that material, and even set custom instructions governing how it should respond – its tone, say, or the context of who is asking and what they might need. The idea is to avoid what Anthropic calls “cold starts,” where users must start from scratch each time they engage with the AI assistant. With a knowledge base to draw from, Claude can respond to queries more quickly and accurately.
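Anthropic hasn't published code for Projects themselves, but the underlying pattern – persistent custom instructions plus uploaded reference material sitting in the context window – can be sketched with the company's public Messages API. The model name, file, and prompts below are illustrative assumptions, not the Projects feature itself:

```python
# A minimal sketch of the "custom instructions + knowledge base" idea behind
# Projects, using Anthropic's public Messages API rather than the Projects UI.
# The model name, file name, and prompts are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A hypothetical document a user might upload to a project
project_docs = open("quarterly_report.txt").read()

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    # Custom instructions: tone and audience, applied to every query
    system=(
        "You are assisting the finance team. Answer concisely and point to "
        "the relevant section of the provided report.\n\n"
        f"Reference material:\n{project_docs}"
    ),
    messages=[{"role": "user", "content": "Summarize Q2 revenue trends."}],
)
print(response.content[0].text)
```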
Office Artifacts
The Artifacts feature is something of a counterpart to Projects: where Projects store a wide range of content, Artifacts produce it. Users can ask Claude to generate text, code, and other 'artifacts.' Claude shares the output in a dedicated window alongside the chat, like a preview of what it's composing. This setup lets users see and interact with the generated content in real time, making immediate feedback and adjustments possible. An additional upgrade lets users share the best parts of their conversations with Claude with their team in a shared project activity feed.
Both Projects and Artifacts are powered by Claude 3.5 Sonnet, Anthropic’s latest AI model. According to the company, Claude 3.5 Sonnet outperforms recently announced models like GPT-4o and Google's Gemini 1.5 on a variety of benchmarks.
“Our vision for Claude has always been to create AI systems that work alongside people and meaningfully enhance their workflows,” Anthropic explained in a blog post. “With this new functionality, Claude can enable idea generation, more strategic decision-making, and exceptional results.”
Google is updating its NotebookLM writing assistant with improved performance and new features, among other things. It now runs on the company’s Gemini 1.5 Pro model, the same AI that powers the Gemini Advanced chatbot.
Thanks to this functionality, the assistant is more contextually aware, allowing you to ask “questions about [the] images, charts, and diagrams in your source” in order to gain a better understanding.
Although Gemini 1.5 Pro is part of NotebookLM, it's unknown if this means the AI will accept longer text prompts or create more detailed answers. After all, Google's AI can handle context windows of up to a million tokens. We reached out to the tech giant to ask if people can expect to see support for bigger prompts.
Responses will mostly stay the same, although they’ll now include inline citations in the form of encircled numbers. Clicking on one of these citations takes you directly to the supporting passage inside a source document. That way, you can double-check the material to see if NotebookLM got things right.
AI hallucinations continue to be a problem for the tech, so it’s important that people are able to fact-check outputs. When it comes to images, opening the citation causes the source picture to appear in a small window next to the text.
Upgraded sourcing
Support for information sources is expanding to include “Google Slides and web URLs” alongside PDFs, text files, and Google Docs. A new feature called Notebook Guide is being added, too. This lets you reorganize the data you’ve entered into a specific format, like a series of FAQs or a Study Guide. It could be quite handy.
The update includes other changes that aren’t mentioned in the initial announcement. For instance, you can now have up to 50 sources per project, and each one can be up to 500,000 words long, according to The Verge. Prior to this, users could only have five sources at once, so it’s a big step up.
Raiza Martin, who is a senior product manager at Google Labs, also told the publication that “NotebookLM is a closed system.” This means the AI won’t perform any web searches beyond what you, the user, give it in a prompt. Every response it generates pertains only to the information it has on hand.
NotebookLM’s latest update is live now and is rolling out to “over 200 countries and territories around the world.” You can head over to the AI’s official website to try out the new features. But do keep in mind that NotebookLM is still considered experimental, and you may run into some quirks. The Verge, for instance, said the URL source function didn’t work in its demo. However, in our experience, the tool worked just fine.
Be sure to check out TechRadar's list of the best business laptops for 2024, if you plan on using the assistant at work.
OpenAI's high-profile run-in with Scarlett Johansson is turning into a sci-fi story to rival the movie Her, and now it's taken another turn, with OpenAI sharing documents and an updated blog post suggesting that the 'Sky' voice in the ChatGPT app wasn't a deliberate attempt to copy the actress's voice.
OpenAI preemptively pulled its 'Sky' voice option in the ChatGPT app on May 19, just before Scarlett Johansson publicly expressed her “disbelief” at how “eerily similar” it sounded to her own (in a statement shared with NPR). The actress also revealed that OpenAI CEO Sam Altman had previously approached her twice to license her voice for the app, and that she'd declined on both occasions.
But now OpenAI is on the defensive, sharing documents with The Washington Post suggesting that its casting process for the various voices in the ChatGPT app was kept entirely separate from its reported approaches to Johansson.
The documents, recordings and interviews with people involved in the process suggest that “an actress was hired to create the Sky voice months before Altman contacted Johansson”, according to The Washington Post.
The agent of the actress chosen for the Sky voice also apparently confirmed that “neither Johansson nor the movie ‘Her’ were ever mentioned by OpenAI” during the process, nor was the actress's natural speaking voice tweaked to sound more like Johansson.
OpenAI's lead for AI model behavior, Joanne Jang, also shared more details with The Washington Post on how the voices were cast. Jang stated that she “kept a tight tent” around the AI voices project and that Altman was “not intimately involved” in the decision-making process, as he was “on his world tour during much of the casting process”.
With Johansson now reportedly lawyering up in her battle with OpenAI, the case looks likely to continue for some time.
Interestingly, the case isn't completely without precedent, despite the involvement of new tech. As noted by Mitch Glazier (chief executive of the Recording Industry Association of America), there was a similar case in the 1980s involving Bette Midler and the Ford Motor Company.
After Midler declined Ford's request to use her voice in a series of ads, Ford hired an impersonator instead – which resulted in a legal battle that Midler ultimately won, after a US court found that her voice was distinctive and should be protected against unauthorized use.
OpenAI is now seemingly distancing itself from suggestions that it deliberately did something similar with Johansson in its ChatGPT app, highlighting that its casting process started before Altman's apparent approaches to the actress.
This all follows an update to OpenAI's blog post, which included a statement from CEO Sam Altman claiming: “The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”
But Altman's post on X (formerly Twitter) just before OpenAI's launch of GPT-4o, which simply stated “her”, doesn't help distance the company from suggestions that it was attempting to recreate the famous movie in some form, regardless of how explicit that was in its casting process.
Rumors started circulating earlier this year claiming Amazon was working on improving Alexa by giving it new generative AI features. Since then, we hadn’t heard much about it until very recently, when CNBC spoke to people familiar with the project’s development. The new reporting provides insight into what the company aims to do with the upgraded Alexa, how much it may cost, and why Amazon is doing this.
CNBC’s sources were pretty tight-lipped. They didn’t reveal exactly what the AI will be able to do, but they did mention the tech giant’s goals. Amazon wants its developers to create something “that holds up amid the new AI competition,” referring to the likes of ChatGPT. Company CEO Andy Jassy was reportedly “underwhelmed” with the modern-day Alexa, and he isn’t the only one who wants the assistant to do more. The dev team is reportedly worried the model currently amounts to little more than an “expensive alarm clock.”
To facilitate the new direction, Amazon reorganized major portions of its business within the Alexa team, shifting focus toward achieving artificial general intelligence (AGI).
AGI is a concept most familiar from science fiction: the idea that an AI model may one day match or surpass the intelligence of a human being. Despite that lofty goal, Amazon seems to be starting small by creating its own chatbot with generative capabilities.
The sources state, “Amazon will use its own large language model, Titan, in the Alexa upgrade.” Titan is only available to businesses as part of Amazon Bedrock. It can generate text, create images, summarize documents, and more for enterprise users, similar to other AIs. Following this train of thought, the new Alexa could offer the same features to regular, non-enterprise users.
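Since Titan is already exposed to enterprise users through Bedrock, developers can call it today with the AWS SDK. Here's a rough sketch of what that looks like, assuming the boto3 Bedrock runtime client, the Titan Text Express model ID, and suitable AWS credentials – this is the existing enterprise route, not the rumored Alexa upgrade itself:

```python
# Rough sketch of calling a Titan text model through Amazon Bedrock via boto3.
# The model ID, region, and prompt are illustrative assumptions; requires AWS
# credentials with Bedrock access.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Summarize this meeting note in two sentences: ...",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
})

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=body,
)
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```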
Potential costs
Previous reports have said Amazon plans to charge people for access to the supercharged Alexa; however, the cost and plan structure were unknown. Now, according to this new report, Amazon is planning to launch the Alexa upgrade as a subscription service completely separate from Prime, meaning people will have to pay extra to try out the AI.
Apparently, there’s been debate over exactly how much to charge, and Amazon has yet to nail down the monthly fee. One of the sources told CNBC that “a $20 price point was floated” at one point, while someone else suggested dropping costs down to “single-digit dollar [amounts].” In other words, less than $10, which would allow the brand to undercut rivals: OpenAI, for example, charges $20 a month for its Plus plan.
There is no word on when Alexa’s update will launch or even be formally announced. But if and when it does come out, it might be the first chatbot accessible through an Amazon smart speaker like the Echo Pop.
We did reach out to the company to see if it wanted to make a statement about CNBC’s report. We’ll update this story if we hear back.
Almost everyone in tech is investing heavily in artificial intelligence right now, and Google is among those most committed to an AI future. Project Astra, unveiled at Google I/O 2024, is a big part of that – and it could end up being one of Google's most important AI tools.
Astra is being billed as “a universal AI agent that is helpful in everyday life”. It's essentially a blend of Google Assistant and Google Gemini, with added features and supercharged capabilities for a natural, conversational experience.
Here, we're going to explain everything you need to know about Project Astra – how it works, what it can do, when you can get it, and how it might shape the future.
What is Project Astra?
In some ways, Project Astra isn't any different to the AI chatbots we've already got: you ask a question about what's in a picture, or about how to do something, or request some creative text to be generated, and Astra gets on with it.
What elevates this particular AI project is its multimodal functionality (the way text, images, video, and audio can all be combined), the speed that the bot works at, and how conversational it is. Google's aim, as we've already mentioned, is to create “a universal AI agent” that can do anything and understand everything.
Think about the HAL 9000 computer in Kubrick's 2001: A Space Odyssey, or the Samantha assistant in the movie Her: talking to them is like talking to a human being, and there isn't much they can't do. (Both those AIs eventually got too big for their creators to control, but let's ignore that for the time being.)
Project Astra has been built to understand context and to take actions, to be able to work in real time, and to remember conversations from the past. From the demos we've seen so far, it works on phones and on smart glasses, and is powered by the Google Gemini AI models – so it may eventually be part of the Gemini app, rather than something that's separate and standalone.
When is Project Astra coming out?
Project Astra is in its early stages: this isn't something that's going to be available to the masses for a few months at least. That said, Google says that “some of these agent capabilities will come to Google products like the Gemini app later this year”, so it looks as though elements of Astra will appear gradually in Google's apps as we go through 2024.
When we were given some hands-on time with Project Astra at I/O 2024, these sessions were limited to four minutes each – so that gives you some idea of how far away this is from being something that anyone, anywhere can make use of. What's more, the Astra kit didn't look particularly portable, and the Google reps were careful to refer to it as a prototype.
Taking all that together, we get the impression that some of the Project Astra tricks we've seen demoed might appear in the Google Gemini app sooner rather than later. At the same time, the full Astra experience – perhaps involving some dedicated hardware – is probably not going to be rolling out until 2025 at the earliest.
Now that Google has shared what Project Astra is and what it's capable of, it's likely that we're going to hear a whole lot more about it in the months ahead. Bear in mind that ChatGPT and Dall-E developer OpenAI is busy pushing out major upgrades of its own, and Google isn't going to want to be left behind.
What can I do with Project Astra?
One of Google's demos shows Astra running on a phone, using its camera input and talking naturally to a user: it's asked to flag up something in view that can play sounds, and correctly identifies a speaker. When an arrow is drawn on screen, Astra then recognizes and talks about the speaker component highlighted by the arrow.
In another demo, we see Astra correctly identifying world landmarks from drawings in a sketchbook. It's also able to remember the order of objects in a list, identify a neighborhood from an image, understand the purpose of sections of code that are shown to it, and solve math problems that are written out.
There's a lot of emphasis on recognizing objects, drawings, text, and more through a camera system – while at the same time understanding human speech and generating appropriate responses. This is the multimodal part of Project Astra in action, which makes it a step up from what we already have – with improvements in caching, recording, and processing key to the real time responsiveness.
In our hands-on time with Project Astra, we were able to get it to tell a story based on objects that we showed to the camera – and adapt the story as we went on. Further down the line, it's not difficult to imagine Astra applying these smarts as you explore a city on vacation or work through a physics problem on a whiteboard, or providing detailed information about what's being shown in a sports game.
Which devices will include Project Astra?
In the demonstrations of Project Astra that Google has shown off so far, the AI is running on an unidentified smartphone and an unidentified pair of smart glasses – suggesting that we might not have heard the last of Google Glass yet.
Google has also hinted that Project Astra is going to be coming to devices with other form factors. We've already mentioned the Her movie, and it's well within the realms of possibility that we might eventually see the Astra bot built into wireless earbuds (assuming they have a strong enough Wi-Fi connection).
In the hands-on area that was set up at Google I/O 2024, Astra was powered through a large camera, and could only work with a specific set of objects as props. Clearly, any device that runs Astra's impressive features is going to need a lot of on-board processing power, or a very quick connection to the cloud, in order to keep up the real-time conversation that's core to the AI.
As time goes on and technology improves, though, these limitations should slowly begin to be overcome. The next time we hear something major about Project Astra could be around the time of the launch of the Google Pixel 9 in the last few months of 2024; Google will no doubt want to make this the most AI-capable smartphone yet.
So no, OpenAI didn’t roll out a search engine competitor to take on Google at its May 13, 2024 Spring Update event. Instead, OpenAI unveiled GPT-4 Omni (or GPT-4o for short) with human-like conversational capabilities, and it's seriously impressive.
Beyond making this version of ChatGPT faster and free to more folks, GPT-4o expands how you can interact with it, including having natural conversations via the mobile or desktop app. Considering it's arriving on iPhone, Android, and desktop apps, it might pave the way for the assistant we've always wanted (or feared).
OpenAI's ChatGPT-4o is more emotional and human-like
GPT-4o has taken a significant step towards understanding human communication in that you can converse in something approaching a natural manner. It comes complete with all the messiness of real-world tendencies like interrupting, understanding tone, and even realizing it's made a mistake.
During the first live demo, the presenter asked for feedback on his breathing technique. He breathed heavily into his phone, and ChatGPT responded with the witty quip, “You’re not a vacuum cleaner.” It advised on a slower technique, demonstrating its ability to understand and respond to human nuances.
So yes, ChatGPT has a sense of humor, but it also changes the tone of its responses, complete with different inflections while conveying a “thought”. As in human conversations, you can cut the assistant off and correct it, making it react or stop speaking. You can even ask it to speak in a certain tone, style, or robotic voice. It can even provide translations.
In a live demonstration suggested by a user on X (formerly Twitter), two presenters on stage – one speaking English and one speaking Italian – had a conversation with GPT-4o handling translation. It could quickly deliver the translation from Italian to English and then seamlessly translate the English response back into Italian.
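The live demo used voice, but the same interpreter behavior can be approximated in text with OpenAI's chat completions API. Here's a minimal sketch, with the prompts as illustrative assumptions:

```python
# Text-only sketch of the English/Italian interpreter demo using OpenAI's
# chat completions API; the keynote used live voice, which isn't shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are an interpreter. Translate English input into "
                       "Italian and Italian input into English, nothing else.",
        },
        {"role": "user", "content": "Ciao, come posso aiutarti oggi?"},
    ],
)
print(reply.choices[0].message.content)  # e.g. "Hi, how can I help you today?"
```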
It’s not just voice understanding with GPT-4o, though; it can also understand visuals like a written-out linear equation and then guide you through how to solve it, as well as look at a live selfie and provide a description. That could be what you're wearing or your emotions.
In this demo, GPT-4o said the presenter looked happy and cheerful. It’s not without quirks, though. At one point, ChatGPT said it saw the image of the equation before it was even written out, referring back to a previous visual of just a wooden tabletop.
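The vision side is exposed through the same API by sending an image alongside a text prompt. A minimal sketch, with a placeholder image URL standing in for the whiteboard shot:

```python
# Minimal sketch of asking GPT-4o about an image via the chat completions API.
# The image URL is a placeholder assumption.
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What equation is written here, and how would I solve it?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/whiteboard.jpg"}},
        ],
    }],
)
print(reply.choices[0].message.content)
```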
Throughout the demo, ChatGPT worked quickly and didn't really struggle to understand the problem or ask about it. GPT-4o is also more natural than typing in a query, as you can speak naturally to your phone and get a desired response – not one that tells you to Google it.
A little like “Samantha” in “Her”
If you’re thinking about Her or another futuristic-dystopian film with an AI, you’re not the only one. Speaking with ChatGPT in such a natural way is essentially the Her moment for OpenAI. Considering it will be rolling out to the mobile app and as a desktop app for free, many people may soon have their own Her moments.
The impressive demos across speech and visuals feel like they may only be scratching the surface of what's possible. How well GPT-4o performs day to day in various environments remains to be seen, and once it's available, TechRadar will be putting it to the test. Still, after this peek, it's clear that GPT-4o is preparing to take on the best Google and Apple have to offer in their eagerly anticipated AI reveals.
The outlook on GPT-4o
By announcing this the day before Google I/O kicks off, and just a few weeks after new AI gadgets like the Rabbit R1 hit the scene, OpenAI is giving us a taste of the truly useful AI experiences we actually want. If the rumored partnership with Apple comes to fruition, Siri could be supercharged, and Google will almost certainly show off its latest AI tricks at I/O on May 14, 2024. But will they be enough?
We wish OpenAI had shown off a few more live demos of the latest ChatGPT-4o in what turned out to be a jam-packed, less-than-30-minute keynote. Luckily, it will be rolling out to users in the coming week, and you won’t have to pay to try it out.
Microsoft’s Copilot AI could soon help Windows 11 users deal with texting on their Android smartphone (and much more besides in the future).
Windows Latest noticed that there’s a new plug-in for Copilot (the recently introduced add-ons that bring extra functionality to the AI assistant), which is reportedly rolling out to more people this week. It’s called the ‘Phone’ plug-in – which is succinct and very much to the point.
As you might guess, the plug-in works by leveraging the Phone Link app that connects your mobile to your Windows 11 PC and offers all sorts of nifty features therein.
So, you need to have the Phone Link app up and running before you can install the Copilot Phone plug-in. Once that’s done, Windows Latest explains that the abilities you’ll gain include using Copilot to read and send text messages on your Android device (via the PC, of course), or to look up contact information.
Right now, the plug-in doesn’t work entirely properly, mind you, but doubtless Microsoft will be ironing out any problems. When Windows Latest tried to initiate a phone call, the plug-in didn’t facilitate this, but it did provide the correct contact info so they could make the call themselves.
The fact that this functionality looks very basic right now means Google will hardly be losing any sleep – and besides, this isn’t a direct rival for the Gemini AI app anyway, as it’s designed to help you manage your Android device from your PC desktop.
Expect far greater powers to come in the future
Microsoft has previously teased the kind of powers Copilot will eventually have when it comes to hooking up your Windows 11 PC and Android phone together. For example, the AI will be able to sift through texts on your phone and extract relevant information (like the time of a dinner reservation, if you’ve made arrangements via text).
Eventually, this plug-in could be really handy, but right now, it’s still in a very early working state as noted.
While it’s for Android only for the time being, the Phone plug-in for Copilot should come to iOS as well, as Microsoft caters for iPhones with Phone Link (albeit in a more limited fashion). That isn’t confirmed, but we can’t imagine Microsoft will leave iPhone owners completely out in the cold when it comes to AI features like this.
Clippy is back on the desktop, in a fashion, with the iconic assistant (originally in Office 97) coming to Windows 11 in order to debloat the OS – via a third-party utility, we should swiftly add.
The idea to resurrect Clippy comes from German software developer Belmin Hasanovic, who has drafted in the assistant – an animated paperclip icon, as you may recall – for their open-source Winpilot utility.
Beyond the new assistant, Winpilot offers a range of features mainly revolving around removing various bits of bloat from Windows 11, from getting rid of default apps you might not want to more advanced tweaking.
Winpilot can also handle tightening up privacy settings, and stripping out Copilot functionality from Windows 11 if you’re really not keen on the desktop-based AI assistant (that Microsoft very definitely is keen on).
Where does Clippy come into this? It’s the Winpilot assistant that offers help, tips, and suggested options, appearing on top of the app’s interface. Clippy also lets you know what Winpilot has done to your system when it comes to debloating measures and the like.
Analysis: Clippy cheekiness
Naturally, this is all very tongue in cheek, and there’s some (intentional, we presume) irony in the fact that a debloating utility includes what is – let’s face it – an element of bloat. Having used the Winpilot tool, Tom’s observes that the Clippy speech bubbles obscure some of the utility’s actual interface at times, which is surely something the developer needs to address.
The dev describes Clippy as the “manic cousin of Microsoft Copilot” and it does come out with various jokes and somewhat colorful language – so it is entertaining in some respects. But as noted, it seems like there’s work to be done in reining Clippy in.
If this tool looks at all familiar to you, it might be because it’s been around for some time. Winpilot is the new name for the old app, BloatyNosy, which in turn was previously known as ThisIsWin11. As the old BloatyNosy name suggested, this is about removing bloat and ensuring privacy, but as ever with third-party utilities, we’d be cautious about using them.
The less far-reaching tweaks Winpilot can make are likely to be fine, but when it comes to pulling out bits of the core interface of Windows 11, like Copilot, you need to tread carefully and be wary of unintended side effects – particularly with future Windows 11 updates, where changes Microsoft makes could break things in respect of Winpilot’s tweaking (a scenario that’s certainly not unimaginable).
Big fans of Clippy out there (you know who you are) may also be interested to learn that the assistant has been brought to the Windows 11 desktop before – not as a feature to get rid of Copilot, but rather, to replace the AI.
Google’s new generative AI model, Gemini, is coming to Android tablets. Gemini AI has been observed running on a Google Pixel Tablet, confirming that Gemini can exist on a device alongside Google Assistant… for the time being, at least. Currently, Google Gemini is available to run on Android phones, and it’s expected that it will eventually replace Google Assistant, Google’s current virtual assistant that’s used for voice commands.
When Gemini is installed on Android phones, users are prompted to choose between using Gemini and Google Assistant. It’s unknown whether this restriction will apply to tablets when Gemini finally arrives for them – though at the moment, it appears not.
A discovery in Google Search's code
The news was brought to us via 9to5Google, which did an in-depth report on the latest beta version (15.12) of the Google Search app in the Google Play Store and discovered code referring to using Gemini AI on a “tablet,” along with the features it would offer.
The code also shows that the Google app will host Gemini AI on tablets, rather than the standalone Gemini app that currently exists for Android phones. That said, Google might be planning a separate Gemini app for tablets and possibly other devices, especially if its plans to phase out Google Assistant are still in place.
9to5Google also warns that, as this is still a beta version of the Google Search app, Google could change its mind and not roll out these features.
Where does Google Assistant stand?
When 9to5Google activated Gemini on a Pixel Tablet, it found that Google Assistant and Gemini would function simultaneously. Gemini for Android tablets is yet to be finalized, so Google might implement a restriction similar to the one on phones, preventing Gemini and Google Assistant from running at the same time. In testing, with both installed and activated, the voice command “Hey Google” brought up Google Assistant instead of Gemini.
This in turn contradicted screenshots of the setup screen showing that Gemini will take precedence over Google Assistant if users choose to use it.
The two digital assistants don’t have the same features yet and we know that the Pixel Tablet was designed to act as a smart display that uses Google Assistant when docked. Because Google Assistant will be used when someone asks Gemini to do something it’s unable to do, we may see the two assistants running in parallel for the time being, until Gemini has all of Google Assistant's capabilities, such as smart home features.
Meanwhile, Android Authority reports that the Gemini experience on the Pixel Tablet is akin to that on the Pixel Fold, and predicts that Google’s tablets will be the first Android tablets to gain Gemini capabilities. This makes sense, as Google may want to use Gemini exclusivity to encourage more people to buy Pixel tablets in the future. The Android tablet market is highly competitive, and advanced AI capabilities may help Pixel tablets stand out.
Google’s new family of artificial intelligence (AI) generative models, Gemini, will soon be able to access events scheduled in Google Calendar on Android phones.
According to 9to5Google, Calendar events were on the “things to fix ASAP” list of Jack Krawczyk, Google’s Senior Director of Product Management for Gemini Experiences – a list of what Google would be working to add to Gemini to make it a better-equipped digital assistant.
Users who have the Gemini app on an Android device can now expect it to respond to voice or text prompts like “Show me my calendar” and “Do I have any upcoming calendar events?” When 9to5Google tried this the week before, Gemini responded that it couldn’t fulfill those types of requests and queries – a notable gap, given that such requests are commonplace with rival (non-AI) digital assistants such as Siri or Google Assistant. When the same prompts were attempted this week, however, Gemini opened the Google Calendar app and fulfilled them. If users want to enter a new event using Gemini, it seems they need to say something like “Add an event to my calendar,” after which Gemini prompts them to fill out the details using voice commands.
Going all in on Gemini
Google is clearly making progress in setting up Gemini as its proprietary all-in-one AI offering (including as a digital assistant that will replace Google Assistant in the future). It has quite a few steps to take before it manages that, with users asking for features like the ability to play music or edit their shopping lists via Gemini. Another significant hurdle for Gemini to clear if it wants to become popular is that it’s only available in the United States for now.
The race to build the best AI assistant has gotten a little more intense recently between Microsoft with Copilot, Google with Gemini, and Amazon with Alexa. Google recently made some big strides in compressing its larger Gemini models so they can run on mobile devices, and these more capable models sound like they could give Gemini a major boost. Google Assistant is already widely recognized, and this is another feather in Google’s cap. I’m hesitant to place a bet on any single one of these digital AI assistants, but if Google continues at this pace with Gemini, I think its chances are pretty good.