Google’s AI-powered NotebookLM is now available to help organize your life

Google’s AI-powered writing assistant, NotebookLM, is leaving its experimental phase behind and launching as an official service with multiple performance upgrades.

First revealed during I/O 2023 as Project Tailwind, the tool’s primary purpose is to help organize your messy notes by creating a summary detailing what you wrote down. It will even highlight important topics and come up with several questions to help people gain a better understanding of the material. The big update coinciding with the release is that NotebookLM will now run on Gemini Pro, which the tech giant says is its “best [AI] model” for handling a wide variety of tasks. Google claims the model will enhance the service’s reasoning skills as well as improve its ability to understand the documents it scans.
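
NotebookLM itself doesn’t expose a public API, but as a rough illustration of the summarize-and-question behavior described above, here’s a minimal sketch using Google’s public google-generativeai Python SDK with the Gemini Pro model. The prompt wording, sample notes, and environment variable are assumptions for the example, not anything Google has published about NotebookLM’s internals.

```python
# Illustrative sketch only: NotebookLM has no public API, so this uses the
# separate google-generativeai SDK and Gemini Pro to mimic the kind of
# summarize-highlight-question workflow described above.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # assumed env var
model = genai.GenerativeModel("gemini-pro")

notes = (
    "Launch slipped to Q2. Need two more engineers on the importer. "
    "Revisit pricing tiers before the public beta."
)

prompt = (
    "Summarize these notes, list the important topics, and suggest three "
    "questions that would help someone understand the material better:\n\n"
    + notes
)

response = model.generate_content(prompt)
print(response.text)  # summary, key topics, and suggested questions
```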

What’s more, Google took the feedback from NotebookLM’s five-month testing period and has added 15 new features aimed at improving the user experience.

Highlight features

The company highlights five specific features in its announcement, the first being a “new noteboard space”. In this area, you’ll be able to take quotes from the AI chat or excerpts from your notes and pin them at the top for easier viewing. Next, citations in a response will take you directly to the source, “letting you see [a] quote in its original context.”

Highlighting text in said source will now suggest two separate actions. You can have the AI instantly “summarize the text” into a separate note or ask it to define words or phrases, which can be helpful if the topic is full of tough concepts. Down at the bottom, users will see a wider array of follow-up actions, from suggestions on how to improve your prose to related ideas you can add to your writing. NotebookLM will also recommend specific formats for your content, shaping it into an email layout or a script outline, among other things.

NotebookLM sample

(Image credit: Future)

The full list can be found on Google's Help website. Other notable features include an increased word count for sources (they can now be 200,000 words total), the ability to share notebooks with others much like Google Docs, and support for PDFs.

Coming soon

There are more updates on the way. Starting the week of December 11, NotebookLM will gain an additional seven features. They include a Critique function, where you can ask the AI for constructive feedback, plus a way to combine all your notes into one big page.

NotebookLM is available only in the United States, to users ages 18 and up, on desktop and mobile devices. When you visit, you’ll see some examples to help you get started with the service. It’s worth mentioning that despite this being an official launch, Google still regards NotebookLM as “experimental” technology, so it won’t be perfect. There’s no word on whether there are plans for an international release, although we did ask, and we’ll update this story if we hear back.

While we have you, check out TechRadar's roundup of the best AI writers for 2023.


I used TripAdvisor’s new AI-powered hotel summary to plan my next vacation

TripAdvisor is now using AI to summarize vast swathes of user hotel reviews from its site to provide tailored itinerary suggestions based on your preferred amenities or features. 

This differs from TripAdvisor's typical aggregation by using AI to match your personal preferences against specific details about each hotel. With the still-in-beta system, you can submit a destination, dates, and the kind of experience you’re looking for, and TripAdvisor will plan a full-day itinerary with suggested hotels.

TripAdvisor

(Image credit: Future)

Let's travel the AI way

I tested this out by planning a trip to Spokane, WA. I was looking for a hotel that was central to all of the main attractions, yet close to outdoor activities. TripAdvisor was able to point out details such as which hotels had better windows or A/C units based on the time of year I would be traveling.

It was also able to weigh in, via text-field options, on what kinds of food were available within a half-mile walk. As a foodie, I feel like that’s an incredible detail, especially since I tend to travel with friends who have celiac disease or other dietary restrictions.

TripAdvisor

(Image credit: Future)

Knowing this hotel summary works for smaller cities, like Spokane, I was curious to see how it worked in a larger city with an abundance of options.

Let’s try this with a staycation in Manhattan! I mentioned to the AI that I didn’t like fluorescent lights. Admittedly, that’s a very strange request for a hotel, but TripAdvisor's AI was able to find reviews that focused on lighting.

Knowing Manhattan prices, the options given were pretty exorbitant for my budget, so I asked for hotels under $300/night. It returned hotels averaging $230/night. Overall, the tool seems responsive to feedback.

TripAdvisor

(Image credit: Future)

The AI was able to refine my search for more affordable places that still had access to some of the attractions I wanted to see, such as Lincoln Center or Central Park West.

While this is great for basic, cursory searches, I still personally like to have more control over my results. I would love, for example, to be able to see more than three hotel options.

Overall, this looks like a decent addition to TripAdvisor's growing lineup of AI-powered features.


Google Photos now shows you an AI-powered highlights reel of your life

The Google Photos app is getting a redesigned, AI-powered Memories feature to help you find your life's highlights among the clutter of your everyday snaps and videos.

The Memories carousel, which Google says is used by half a billion people every month, was introduced at the top of the Android and iOS apps four years ago. It automatically picks out what it considers to be your most important photos and videos, but Google is now making it a more central part of the app.

From today in the US (with other regions to follow in the “coming months”), the Memories feature is moving to the bottom of the app's navigation bar and getting some useful new tricks. One of these is the ability to “co-author” Memories albums with friends and family, in a similar way to shared albums.

This feature sounds particularly handy for big events like weddings, as you'll be able to invite friends or family to collaborate on specific Memories and add their photos and videos to help fill in the gaps. You can also save any Memories that are shared with you to your Google Photos account.

Google is also promising to add a couple of other new features to Memories soon. If you're struggling to think of a title for your collection of snaps (which we can't imagine is a major issue) then you'll be able to use generative AI to come up with some suggested names like “a desert adventure”. This is apparently an experimental feature, and only currently available to “select accounts in the US”.

Perhaps more useful will be the option of sharing your Memories as videos, which means you'll be able to send them to friends and family who aren't on Google Photos in messaging apps like WhatsApp. Google says this is “coming soon”, but unfortunately hasn't yet given a rough timescale. Knowing Google, that could be anything from three months to three years, but we'll update this article when we hear something more specific.

Google upgrades the photo album

An Android phone on an orange background showing a photo of a kitten being shared in the Google Photos Memories feature

You can now share Memories albums with other Google Photos users in the updated version of the app (above). (Image credit: Google)

While these are only minor tweaks to the Google Photos app, they do show that Google increasingly sees its cloud photo service as a social app.

The ability to “co-author” Memories albums is something that'll no doubt be used by millions for things like weddings, vacations, pets, and celebrations. And as Google Photos isn't used by everyone, the incoming option to share Memories as videos to WhatsApp groups and other messaging apps should also prove popular.

On the other hand, these AI-powered photo albums have also sparked controversy with their sometimes insensitive surfacing of unwanted memories and photos. Google says that its updated Memories view lets you quickly add or remove specific photos or videos, or hide particular memories, to help with this.

On the whole, the Memories feature is certainly an upgrade to having to pass around a physical photo album, and its AI powers will no doubt improve rapidly if half a billion people continue to use it every month. If it works as well as it does in the demos, it could effectively be an automated highlights reel of your life.


Photoshop has a new AI-powered fix for your compositional mistakes

Photoshop has matched its growing number of AI-powered rivals with a new feature that lets you expand images with a few clicks.

Like the existing Generative Fill, the new Generative Expand feature – available to Photoshop (Beta) users – lets you extend an existing image in any direction, with the software filling in the extra space either automatically or based on the guidance of your prompts.

If you already know how to use Generative Fill in Photoshop, then you'll quickly be able to pick up Generative Expand. It's based on the same AI generative tech, but instead works with the Crop tool (rather than the Marquee or Lasso tools).

In fact, it's a bit like using the Crop tool in reverse. Rather than cropping images smaller, the feature lets you fix compositional mistakes or effectively switch to a wider-angle view. You just select the Crop tool from the toolbar, go to the Fill menu in the Options bar, and select Generative Expand.

Once you've done that, it's a case of dragging the corner and side handles to the size you want, then clicking the 'Generative Expand' button. You can leave Photoshop to guess what it should add to the image (which it's very good at, based on our experiences with Generative Fill) or add a text prompt to give it a helping hand.
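
Adobe hasn't exposed Generative Expand through a public scripting interface, but the 'grow the canvas, then fill the new pixels' step behind any outpainting tool is easy to picture. Here's a minimal, hedged sketch in Python using Pillow (not Adobe code) that prepares an expanded canvas and a mask of the newly added area; the file names and padding size are placeholders.

```python
# Conceptual sketch only - NOT Adobe's implementation. It prepares the two
# inputs any outpainting step needs: an enlarged canvas with the original
# image centered on it, and a mask marking the new pixels to be generated.
from PIL import Image

original = Image.open("photo.jpg")   # placeholder file name
pad = 256                            # pixels to add on each side

expanded = Image.new("RGB", (original.width + 2 * pad,
                             original.height + 2 * pad))
expanded.paste(original, (pad, pad))  # original sits in the center

# White = area a generative model should invent; black = keep untouched.
mask = Image.new("L", expanded.size, color=255)
mask.paste(0, (pad, pad, pad + original.width, pad + original.height))

expanded.save("expanded_canvas.png")
mask.save("outpaint_mask.png")
# A real pipeline would now hand these (plus an optional text prompt)
# to an inpainting/outpainting model to fill the white region.
```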

From the demos so far, it looks like another very useful AI feature, with photo-realistic results. Like its rivals – for example, Midjourney's 'outpainting' feature – there are potential mistakes and artifacts, but Photoshop also gives you three variations of your expanded image to choose from.

Adobe now has a full guide to using Generative Expand – and if you aren't already using Photoshop (Beta), you can download it by going to your Creative Cloud download app and clicking 'Beta apps' on the left-hand side.

Playing AI catchup

A whole range of apps have been offering image expansion, or 'outpainting' as Midjourney calls it, for a while now – so it may look like Photoshop is a bit behind the curve here.

But with the launch of Firefly – Adobe's family of tools to take on Midjourney and Dall-E – the company has shown it's keen to tread carefully in the world of AI image generation. This is understandable given the emergence of class-action lawsuits against the likes of Midjourney from artists, who claim some models are based on copyrighted works.

Adobe has been keen to stress that Firefly-powered tools like Generative Expand and Fill have been trained on Adobe Stock images, openly-licensed content, or public domain content where the copyright has expired. 

This means that pros who use the likes of Photoshop can use its generative AI tools without any concerns about copyright infringement, even if those tools are a bit slower to roll out than those of some rivals.

Google is also happy with Adobe's more ethical approach to AI, with the search giant announcing back in May that it would be integrating Firefly image generation into its Bard chatbot. So far, that still hasn't rolled out, so we've asked Adobe for an update on when that will happen and will update this article when we hear back. 


YouTube video translation is getting an AI-powered dubbing tool upgrade

YouTube is going to help its creators reach an international audience as the platform plans on introducing a new AI-powered dubbing tool for translating videos into other languages.

Announced at VidCon 2023, the goal of this latest endeavor is to provide a quick and easy way for creators to translate their content “at no cost” into languages they don’t speak. This can help out smaller channels, as they may not have the resources to hire a human translator. To make this all possible, Amjad Hanif, vice president of Creator Products at YouTube, revealed the tool will utilize Aloud, a Google-created dubbing AI, and that the platform will bring over the team behind it from Area 120, a division of the parent company that frequently works on experimental tech.

Easy translation

The way the translation system works, according to the official Aloud website, is that the AI will first transcribe a video into a script. You then edit the transcription to get rid of any errors, make clarifications, or highlight text “where timing is critical.” From there, you give the edited script back to Aloud, which will automatically translate and dub your video into the language of your choice. Once done, you can publish the newly dubbed content by uploading the new audio tracks onto the original video.
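
Aloud has no public developer API, so the snippet below is purely an illustrative sketch of the transcribe, review, translate, and upload flow described above; every function in it is a hypothetical stub rather than anything Google ships.

```python
# Purely illustrative - Aloud has no public developer API, so every function
# here is a hypothetical stub that simply mirrors the workflow described
# above: transcribe, review the script, translate, then attach the new track.

def transcribe(video_path: str) -> str:
    return "placeholder transcript"              # Aloud transcribes the video

def review_and_fix(transcript: str) -> str:
    return transcript                            # creator fixes errors and
                                                 # flags timing-critical lines

def translate(transcript: str, language: str) -> str:
    return f"[{language}] {transcript}"          # Aloud translates the script

def attach_dubbed_track(video_path: str, translated_script: str) -> str:
    return f"{video_path} + dubbed audio track"  # synthesized speech uploaded
                                                 # onto the original video

def dub_video(video_path: str, language: str) -> str:
    script = review_and_fix(transcribe(video_path))
    return attach_dubbed_track(video_path, translate(script, language))

print(dub_video("my_video.mp4", "Spanish"))
```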

A Google representative told us “creators do not have to [actually] understand any of the languages that they are dubbing into.” Aloud will handle all of the heavy lifting surrounding complex tasks like “translation, timing, and speech synthesis.” Again, all you have to do is double-check the transcription. 

Future changes

It’s unknown when the Aloud update will launch. However, YouTube is already working on expanding the AI beyond what’s currently possible. Right now, Aloud can only translate English content into either Spanish or Portuguese, but there are plans to expand into other languages, from Hindi to Indonesian, plus support for different dialects.

Later down the line, the platform will introduce a variety of features such as “voice preservation, better emotion transfer, and even lip reanimation” to improve enunciation. Additionally, YouTube is going to build in some safeguards ensuring only the creators can “dub their own content”.

The same Google representative from earlier also told us the platform is testing the Aloud AI with “hundreds of [YouTube] creators” with plans to add more over time. As of June 2023, over 10,000 videos have been dubbed in over 70 languages. 

You can join the early access program by filling out the official Google Docs form. If you want to know what an Aloud dub sounds like, watch the channel trailer for the Amoeba Sisters on YouTube. Click the gear icon, go to Audio Track, then select Spanish. The robotic voice you’ll hear is what the AI creates.


Photoshop’s new AI-powered tricks will fix your biggest mistakes

Adobe Photoshop has been the gold standard in photo editing for over three decades, so it was only a matter of time until it embraced the tricks seen in the best AI art generators – and it's now done just that in the form of a new tool called Generative Fill.

The new tool, which lets you extend images or add objects to them using text prompts, certainly isn't the first AI-powered feature we've seen in Photoshop. Generative Fill is a user-friendly development of existing Adobe tools like Content-Aware Fill, but it's also one of the most significant new Photoshop features we’ve seen for years.

That’s because it leans on the power of Adobe Firefly, the company’s new generative AI engine, to help you fix big compositional mistakes or completely reinvent an image’s contents. In Adobe’s demos, images in portrait orientation are instantly turned into ones in landscape – with Photoshop simply inventing the sides of the photo based on the original image.

While some of those examples are quite subtle, others have a very obvious art aesthetic. For example, a photo of a corgi is turned into one with clearly fake bubbles and a van in the background.

Adobe clearly sees Generative Fill as a tool for both beginners and pros, but the new text-to-image prompt box is certainly a useful touch for those who don’t know Photoshop’s existing tools. You can use this to add small details to an image or completely change its background – in another demo, a deer is moved from its forest background to a city thanks to the prompt ‘wet alley at night’.

Of course, none of this will be new to fans of Midjourney or Dall-E, which have helped spark this year’s boom in text-to-image generation. 

But Adobe is keen to stress that AI tools like Generative Fill have only been trained on Adobe Stock images, openly-licensed content, and public domain content where the copyright has expired. This means they can be used commercially without the threat of class-action lawsuits from artists who claim some AI models have stolen their work.

While Generative Fill is only rolling out to the full Photoshop app in the “second half of 2023”, there are a couple of ways you can try it out now. First, it’s available in Photoshop’s desktop beta app, which you can get by going to the Creative Cloud desktop app, choosing Beta apps in the left sidebar, then installing it from there.

The feature is also available as a module within the web-only Adobe Firefly, which was also recently added to Google Bard. To use Firefly in Bard, you can simply write your image request (for example, 'make an image of a unicorn and a cake at a kid's party') and it'll do the rest. What a time to be alive. 


Analysis: Photoshop battles its new AI rivals

A corgi dog running through a puddle below bubbles

Some of the effects created by the Firefly-powered Generative Fill are more clearly AI-generated. (Image credit: Adobe)

Like Google, Adobe is a giant incumbent that's under attack from AI upstarts like OpenAI and Midjourney. While Firefly and Photoshop's Generative Fill aren't doing things we haven't seen before, they are doing them in a measured way that sidesteps any copyright issues and helps maintain its reputation.

Photoshop's embrace of generative AI also brings these tools fully into the mainstream. The image editor may not be the dominant force it was before the likes of Canva, Affinity Photo and GIMP arrived to offer more affordable alternatives, but it remains one of the best photo editors around and certainly one of the most widely used.

From Adobe's early demos, it looks like Generative Fill is in its early days and produces mixed results, depending on your tastes. In some images, the effects are subtle and realistic, while in others – particularly images where large parts are entirely AI-generated – the results are clearly artificial and may not date very well.

Still, the arrival of generative AI alongside other new features like the Remove Tool – another development of Photoshop's existing ability to let you eliminate unwanted objects – is only a good thing for those who aren't familiar with the app's sometimes arcane interface.

And it's another step towards the AI tools, like DragGAN, that will completely change photography as we know it.        


This AI-powered Photoshop rival is the end of photography as we know it

Photoshop has been steadily adding AI-powered tools to its menus in recent years, but an incredible new demo from an independent research team shows where the best photo editors are heading next.

DragGAN may not be a fully-fledged consumer product yet, but the research paper (picked up on Twitter by AI researchers @_akhaliq and @icreatelife) shows the kinds of reality-warping photo manipulation that's going to be possible very soon. This AI-powered tech will again challenge our definition of what a photo actually is.

While we've seen similar photo editing effects before – most notably in Photoshop tools like Perspective Warp – the DragGAN demo takes the idea and user interface to a new level. As the examples below show, DragGAN lets you precisely manipulate photos to change their subject's expressions, body positions and even minor details like reflections.

The results aren't always perfect, but they are impressive – and that's because DragGAN (whose name is a combination of 'drag' and 'generative adversarial network') actually generates new pixels based on the surrounding context and where you place the 'drag' points.
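
To make the 'generates new pixels based on the surrounding context and drag points' idea a little more concrete, here is a deliberately tiny, hedged sketch of a drag-style latent optimization loop in PyTorch. It is not the authors' code: a real implementation works on StyleGAN2 feature maps with motion supervision and robust point tracking, whereas this toy uses a stand-in linear 'generator' and a crude tracking step purely to show the shape of the loop.

```python
# Toy sketch of the drag-point optimization idea from the DragGAN paper -
# NOT the authors' implementation. A stand-in linear "generator" replaces
# StyleGAN2, and the handle is tracked crudely, just to show the loop shape.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
generator = torch.nn.Sequential(            # maps a latent to a 16x16 feature map
    torch.nn.Linear(64, 16 * 16),
    torch.nn.Unflatten(1, (1, 16, 16)),
)

latent = torch.randn(1, 64, requires_grad=True)
opt = torch.optim.Adam([latent], lr=0.05)

handle = torch.tensor([4.0, 4.0])           # (row, col) the user grabbed
target = torch.tensor([10.0, 10.0])         # (row, col) the user dragged it to

def feature_at(fmap, point):
    """Bilinearly sample the feature map at a (row, col) position."""
    h, w = fmap.shape[-2:]
    grid = point.flip(0) / torch.tensor([w - 1.0, h - 1.0]) * 2 - 1
    return F.grid_sample(fmap, grid.view(1, 1, 1, 2), align_corners=True)

for step in range(50):
    fmap = generator(latent)
    direction = F.normalize(target - handle, dim=0)
    # Motion supervision: the feature one small step towards the target should
    # match the current (detached) feature under the handle, nudging the
    # generated content in that direction.
    loss = F.mse_loss(feature_at(fmap, handle + direction),
                      feature_at(fmap.detach(), handle))
    opt.zero_grad()
    loss.backward()
    opt.step()
    handle = handle + direction             # crude stand-in for point tracking
    if torch.norm(target - handle) < 1.0:   # stop when the handle arrives
        break

print("final handle position:", handle)
```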

Photoshop's neural filters, particularly those available in the app's beta version, have dabbled in similar effects for a while, for example giving you sliders for 'happy' and 'anger' expressions when tweaking portrait images. DxO software like PhotoLab also has U Point technology, which lets you point at the part of a photo you'd like to make local adjustments to.

But the power of the DragGAN demo is that it combines both concepts in a pretty user-friendly way, letting you pick the part of a photo you want to change and then completely changing your subject's pose, expression and more with very realistic results. 

When a refined version of this technology ultimately lands on smartphones, imperfect photos will be a thing of the past – as will the idea of a photo being a record of a real moment captured in time.

DragGAN offers more granular controls, too. If you don't want to change the entire photo, you can apply a mask to a particular area – for example, your dog's head – and the algorithm will only affect that selected area. That level of control should also help reduce artifacts and errors.

The research team has also promised that in the near future it plans “to extend point-based editing to 3D generative models.” Until then, expect to see this kind of reality-warping photo editing improve at a rapid pace in some of the best Photoshop alternatives soon. 


Analysis: The next Photoshop-style revolution

A woman sitting on a beach in an early version of Photoshop

An early demo of the first version of Photoshop, showing the iconic ‘Jennifer in Paradise’ photo being edited. (Image credit: Adobe)

These AI-powered photo editing tricks echo the earliest demos of Photoshop over 35 years ago – and will likely have the same level of impact, both culturally and on the democratization of photo editing.

In 1987, Adobe Photoshop co-creator John Knoll took the photo above – one of the most significant of the last century – on a beach in Tahiti and used it to demo the incredible tools that would appear in the world's most famous photo editing app.

Now we're seeing some similarly momentous demos of image-manipulating tools, from Google's Magic Eraser and Face Unblur to Photoshop's new Remove Tool, which lets you remove unwanted objects in your snaps.

But this DragGAN demo, while only at the research paper phase, does take the whole concept of 'photo retouching' up a notch. It's reforming, rather than retouching, the contents of our photos, using the original expression or pose simply as a starting point for something completely different.

Photographers may argue that this is more digital art than 'drawing with light' (the phrase that gives photography its name). But just like the original Photoshop, these AI-powered tools will change photography as we know it – whether we want to embrace them or not. 


Microsoft could be working on an AI-powered Windows to rival Chrome OS

Microsoft is reportedly working on a new version of its ever-successful Windows operating system – but we’re not talking about Windows 12, no sir. Instead, this is ‘CorePC’, a new project from Microsoft designed to take on Google’s ultra-efficient Chrome OS.

That's according to the good folks at our sister site Windows Central, whose sources claim the idea is to create a modular iteration of Windows, which Microsoft could then tweak and customize into different ‘editions’ that better suit specific hardware. This new version of Windows would, hopefully, be less resource-intensive than previous releases.

CorePC (bear in mind this is a codename, and will likely not be the name of the finished OS) is rumored to also have one more trick up its sleeve: AI. Of course it’s AI – we shouldn’t be shocked, given Microsoft’s current hyperfixation on shoving popular chatbot ChatGPT into everything from the Microsoft 365 suite to the Bing search engine. Details are thin on what exactly artificial intelligence will bring to the table here, but it’s claimed to be a focus of the CorePC project.

Opinion: This could actually be really good – if Microsoft stays the course

Though this is no more than a rumor at this stage, it makes a lot of sense. For starters, this wouldn't be the first time Microsoft had experimented with building a lightweight version of Windows. 

The Windows 10X program, for instance, was supposed to be a stripped-back version of Windows 10 that cut down on features in favor of faster operation and better system security. Unfortunately for us, it was eventually canceled in 2021 and the OS never made it to our devices. There was also Windows Lite, a 2018 effort to build a lightweight Windows, which also never really saw the ‘lite’ of day.

I genuinely hope that CorePC doesn’t meet the same fate; the idea of a low-system-requirement version of Windows is an attractive one right now, with Chrome OS slowly encroaching in the budget hardware space. Hell, half of the products on our best cheap laptops list are Chromebooks at this point, and I’m a lifelong Windows devotee – I even owned a Windows phone back in the heady days of 2015 (this one, for anyone interested).

If the CorePC project specifically has the aim of creating a modernized version of Windows that can be easily adjusted to run smoothly on any device, that would be welcome. While I don’t think it will lead to the glorious return of Windows phones (a man can dream though, right?), it’d be great to see Chromebook-esque Windows laptops and tablets.

What exactly can we expect from CorePC?

Digging into the details a bit, it seems that Microsoft has an internal version of CorePC Windows already in testing. It’s barebones, running only the Edge browser with Bing AI, the Microsoft 365 suite, and Android apps – similar to how Chrome OS got access to apps from the Google Play Store back in 2016. This version of Windows is intended for super-affordable PCs and laptops designed to be used in educational environments.

That might not sound very exciting, but here’s the good part: this test build supposedly uses as much as 75% less storage space than Windows 11 and uses a split-partition install process that allows for faster updates, safer system resets, and better security thanks to dedicated read-only partitions the user (or any third-party apps) can’t access. It’s unclear at this point whether this new version runs on a conventional 64-bit structure or if it’s a more limited ARM-based build.

Considering that Windows 11 already uses between 20 and 30 gigabytes of storage space and Windows 12 looks to be jacking up the system requirements even further, the idea of a super-compact Windows edition is quite attractive – especially for use cases in education and enterprise spaces, where security is vital and a limited feature set won’t be a hurdle to everyday usage.

We’ve already seen Windows 11 scaled down for low-end hardware in the unofficial ‘Tiny11’ OS, so it’s not entirely surprising that Microsoft is seemingly working on an official version. Though there’s no projected release date, speculation points to 2024 so the release can coincide with the expected launch of Windows 12. In any case, I've got my fingers crossed!


Zoom expands AI-powered tools for salespeople

As more and more businesses continue to return to offices, Zoom is looking to cement its position as one of the best enterprise apps out there. 

At its recent Work Transformation Summit, the video conferencing giant unveiled Zoom IQ for Sales, an AI-powered service for salespeople that is able to quickly analyze sales meetings and produce insights.

Zoom IQ is an add-on for Zoom Meetings and aims to help organisations become more efficient by highlighting important insights that might otherwise be missed. The service is integrated in Zoom (naturally), Salesforce, and other enterprise services. 

A platform for work 

“Zoom is always searching for ways to help our customers elevate their end customers’ experience and Zoom IQ for Sales is the latest development in that journey,” Zoom's Josh Dulberger told TechCrunch.

“Zoom IQ for Sales … [can] identify opportunities, assess risks and ultimately enable and improve sales team performance. It uses natural language processing models to process post-meeting transcripts and deal progress data, generating insights for sales reps and managers.”
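
Zoom hasn't said exactly which models power this, but as a rough, hedged illustration of the kind of transcript processing described in that quote, here's a minimal sketch using the open-source Hugging Face transformers summarization pipeline on a made-up sales-call transcript. The transcript text and the overall setup are assumptions for the example, not Zoom's actual pipeline.

```python
# Illustrative sketch only - this is NOT Zoom IQ's pipeline. It just shows
# the general idea of running an NLP model over a post-meeting transcript
# to surface a short digest a sales rep or manager could scan.
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default model on first run

transcript = (
    "Prospect: Our contract with the current vendor ends in March. "
    "Rep: We can have onboarding done in two weeks. "
    "Prospect: Budget approval still needs sign-off from finance, and "
    "the security review is the main risk on our side."
)

digest = summarizer(transcript, max_length=60, min_length=15, do_sample=False)
print(digest[0]["summary_text"])  # a compact recap to skim after the meeting
```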

Zoom IQ for Sales is available now as an add-on for Zoom Meetings customers, with the company adding that support for Zoom Phone is also coming soon.

While Zoom was an early winner from the pandemic, its fortunes since have been mixed. The company's stock, which rocketed up and to the right in 2020, has come back to earth with a thud. 

After reaching a high of $559 in October 2020, Zoom now trades at $114, a not unrespectable figure but a far cry from where it has been.

Efforts like IQ for Sales are Zoom's attempt at expanding its usefulness to organisations beyond purely video calls – and compete with Microsoft Teams.

After being a little slow off the block, Microsoft went into overdrive to build and promote Teams, which seamlessly integrated with the broader Microsoft 365 suite. 

Microsoft's ploy worked: Teams now has over 270 million monthly active users, according to the company, up from around 250 million in July 2021. 

Whether Zoom can catch up remains to be seen, but tools like IQ for Sales are a very good starting point. 
