It’s no secret that Apple is working on generative AI. No one knows what it’ll all entail, but a new leak from AppleInsider offers some insight. The publication recently spoke to “people familiar with the matter,” claiming Apple is working on an “AI-powered summarization [tool] and greatly enhanced audio transcription” for multiple operating systems.
The report states these should bring “significant improvements” to staple iOS apps like Notes and Voice Memos. The latter is slated to “be among the first to receive upgraded capabilities,” namely the aforementioned transcriptions. They’ll take up a big portion of the app’s interface, replacing the graphical representation of audio recordings. AppleInsider states it functions similarly to Live Voicemail on iPhone, with a speech bubble triggering the transcription and the text appearing right on the screen.
New summarizing tools
Alongside Voice Memos, the Notes app will apparently also get some substantial upgrades. It’ll gain the ability to record audio and provide transcriptions for those recordings, just like Voice Memos. Unique to Notes, though, is the summarization tool, which will provide “a basic text summary” of all the important points in a given note.
Safari and Messages will also receive their own summarization features, although they’ll function differently. The browser will get a tool that creates short breakdowns for web pages, while in Messages, the AI provides a recap of all your texts. It’s unknown if the Safari update will be exclusive to iPhone or if the macOS version will get the same capability, but there’s a chance it could.
Apple, at the time of this writing, is reportedly testing these features for an upcoming release in iOS 18 later this year. According to the report, there are plans to update the corresponding apps with the launch of macOS 15 and iPadOS 18, both of which are expected to come out in 2024.
Extra power needed
It’s important to mention that there are conflicting reports on how these AI models will work. AppleInsider claims certain tools will run “entirely on-device” to protect user privacy. However, a Bloomberg report says some of the AI features in iOS 18 will instead be powered by a cloud server equipped with Apple's M2 Ultra chip, the same hardware found in 2023’s Mac Studio.
The reason for the cloud support is that “complicated jobs” like summarizing articles require extra computing power. iPhones, by themselves, may not be able to run everything on-device.
Regardless of how the company implements its software, it could help Apple catch up to its AI rivals. Samsung’s Galaxy S24 has many of these AI features already. Plus, Microsoft’s OneNote app can summarize information thanks to Copilot. Of course, take all these details with a grain of salt. Apple could always change things at the last minute.
Be sure to check out TechRadar's list of the best iPhones for 2024 to see which ones “reign supreme”.
If you’re a Windows 11 user on a PC, you’ll soon be able to use your Android smartphone (or tablet) as a webcam. This feature is currently being made available to Windows Insiders, Microsoft’s official community for professionals and Windows enthusiasts who would like early access to new Windows versions and features to test and offer feedback ahead of a wider rollout.
In an official Windows Insider Blog post, Microsoft explains that it’s begun a gradual rollout of the feature, which enables users who have a suitable Android device, such as a tablet or phone, to use it as a webcam in any application on their PC that uses video. If you’d like to try this new feature or get access to whatever else Microsoft has up its sleeve that it would like users to test, it’s free to sign up for the Windows Insider Program – you just have to make sure you have a suitable PC that can run Windows 10 or Windows 11.
Once you install the latest preview build, you’ll also have to ensure that the mobile device you want to use as a webcam is running Android 9.0 or later. You also have to install the Link to Windows app on your mobile device.
This is really good news for users who don’t have a dedicated webcam or are unhappy with the quality of the built-in webcam of their laptop. Many modern smartphones come with cameras that can offer better quality than a lot of webcams – and this feature allows them to be used wirelessly, which makes them far more convenient as well. On top of using your phone as your webcam, you can also switch between its front and back cameras, pause your webcam stream, and activate your mobile device’s available camera effects.
How to set up your Android phone as your webcam
Once you’ve made sure you have all the necessary specifications, updates, and apps, you’ll need to set the feature up on the device you’d like to stream to. You can do this by navigating to the following settings in Windows 11:
Settings > Bluetooth & devices > Mobile devices
Select “Manage devices” and turn on the setting that allows the Android device you’d like to use as a webcam to be accessed by your PC. Your PC will then be prompted to receive a Cross Device Experience Host update via the Microsoft Store, which you should allow, as it’s necessary for the feature to work.
It will likely prove to be very useful, offering users more versatility and options for appearing in video calls. With many of us now working from home, either full-time or as part of a hybrid working week, picking the best webcam for your needs is more important than ever. This upcoming feature could make that search even easier if all you need is a modern Android smartphone.
ChatGPT maker OpenAI has now unveiled Sora, its artificial intelligence engine for converting text prompts into video. Think Dall-E (also developed by OpenAI), but for movies rather than static images.
It's still very early days for Sora, but the AI model is already generating a lot of buzz on social media, with multiple clips doing the rounds – clips that look as if they've been put together by a team of actors and filmmakers.
Here we'll explain everything you need to know about OpenAI Sora: what it's capable of, how it works, and when you might be able to use it yourself. The era of AI text-prompt filmmaking has now arrived.
OpenAI Sora release date and price
In February 2024, OpenAI Sora was made available to “red teamers” – that's people whose job it is to test the security and stability of a product. OpenAI has also now invited a select number of visual artists, designers, and movie makers to test out the video generation capabilities and provide feedback.
“We're sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon,” says OpenAI.
In other words, the rest of us can't use it yet. For the time being there's no indication as to when Sora might become available to the wider public, or how much we'll have to pay to access it.
We can make some rough guesses about timescale based on what happened with ChatGPT. Before that AI chatbot was released to the public in November 2022, it was preceded by InstructGPT earlier that year. Also, OpenAI's DevDay typically takes place annually in November.
It's certainly possible, then, that Sora could follow a similar pattern and launch to the public at a similar time in 2024. But this is currently just speculation and we'll update this page as soon as we get any clearer indication about a Sora release date.
As for price, we similarly don't have any hints of how much Sora might cost. As a guide, ChatGPT Plus – which offers access to the newest Large Language Models (LLMs) and Dall-E – currently costs $20 (about £16 / AU$30) per month.
But Sora also demands significantly more compute power than, for example, generating a single image with Dall-E, and the process also takes longer. So it still isn't clear exactly how well Sora, which is effectively a research paper, might convert into an affordable consumer product.
What is OpenAI Sora?
You may well be familiar with generative AI models – such as Google Gemini for text and Dall-E for images – which can produce new content based on vast amounts of training data. If you ask ChatGPT to write you a poem, for example, what you get back will be based on lots and lots of poems that the AI has already absorbed and analyzed.
OpenAI Sora is a similar idea, but for video clips. You give it a text prompt, like “woman walking down a city street at night” or “car driving through a forest” and you get back a video. As with AI image models, you can get very specific when it comes to saying what should be included in the clip and the style of the footage you want to see.
To get a better idea of how this works, check out some of the example videos posted by OpenAI CEO Sam Altman – not long after Sora was unveiled to the world, Altman responded to prompts put forward on social media, returning videos based on text like “a wizard wearing a pointed hat and a blue robe with white stars casting a spell that shoots lightning from his hand and holding an old tome in his other hand”.
How does OpenAI Sora work?
On a simplified level, the technology behind Sora is the same technology that lets you search for pictures of a dog or a cat on the web. Show an AI enough photos of a dog or cat, and it'll be able to spot the same patterns in new images; in the same way, if you train an AI on a million videos of a sunset or a waterfall, it'll be able to generate its own.
Of course there's a lot of complexity underneath that, and OpenAI has provided a deep dive into how its AI model works. It's trained on “internet-scale data” to know what realistic videos look like, first analyzing the clips to know what it's looking at, then learning how to produce its own versions when asked.
So, ask Sora to produce a clip of a fish tank, and it'll come back with an approximation based on all the fish tank videos it's seen. It makes use of what are known as visual patches, smaller building blocks that help the AI to understand what should go where and how different elements of a video should interact and progress, frame by frame.
Sora is based on a diffusion model, where the AI starts with a 'noisy' response and then works towards a 'clean' output through a series of feedback loops and prediction calculations. You can see this in the frames above, where a video of a dog playing in the snow turns from nonsensical blobs into something that actually looks realistic.
And like other generative AI models, Sora uses transformer technology (the last T in ChatGPT stands for Transformer). Transformers use a variety of sophisticated data analysis techniques to process heaps of data – they can understand the most important and least important parts of what's being analyzed, and figure out the surrounding context and relationships between these data chunks.
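To make the diffusion idea above a little more concrete, here is a minimal toy sketch in Python. It is emphatically not Sora's actual code (in the real system the noise predictor is a large trained transformer working over visual patches); the tiny 8x8 'frame', the step count and the deliberately cheating noise predictor are invented purely to show the shape of the denoising loop, going from random noise to a clean target.

```python
# Toy illustration of diffusion-style denoising: start from pure noise and
# repeatedly subtract a predicted noise estimate until a clean sample emerges.
# NOT Sora's code; the target image, step count and "predictor" are made up.
import numpy as np

rng = np.random.default_rng(0)

# Pretend this 8x8 grid of brightness values is the clean frame we want.
target = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))

x = rng.normal(size=target.shape)  # step 0: pure noise
steps = 50

for t in range(steps):
    # A real model would be a trained network predicting the noise in x;
    # here we cheat and use the known target so the loop visibly converges.
    predicted_noise = x - target
    x = x - predicted_noise / (steps - t)  # remove a fraction of the noise

print("Mean absolute error after denoising:", np.abs(x - target).mean())
```

In Sora's case the 'frame' would be a stack of video patches and the predictor is the transformer described above, but the overall journey from noise to output follows the same idea.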
What we don't fully know is where OpenAI found its training data from – it hasn't said which video libraries have been used to power Sora, though we do know it has partnerships with content databases such as Shutterstock. In some cases, you can see the similarities between the training data and the output Sora is producing.
What can you do with OpenAI Sora?
At the moment, Sora is capable of producing HD videos of up to a minute, without any sound attached, from text prompts. If you want to see some examples of what's possible, we've put together a list of 11 mind-blowing Sora shorts for you to take a look at – including fluffy Pixar-style animated characters and astronauts with knitted helmets.
“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt,” says OpenAI, but that's not all. It can also generate videos from still images, fill in missing frames in existing videos, and seamlessly stitch multiple videos together. It can create static images too, or produce endless loops from clips provided to it.
It can even produce simulations of video games such as Minecraft, again based on vast amounts of training data that teach it what a game like Minecraft should look like. We've already seen a demo where Sora is able to control a player in a Minecraft-style environment, while also accurately rendering the surrounding details.
OpenAI does acknowledge some of the limitations of Sora at the moment. The physics don't always make sense, with people disappearing or transforming or blending into other objects. Sora isn't mapping out a scene with individual actors and props; it's making an incredible number of calculations about where pixels should go from frame to frame.
In Sora videos people might move in ways that defy the laws of physics, or details – such as a bite being taken out of a cookie – might not be remembered from one frame to the next. OpenAI is aware of these issues and is working to fix them, and you can check out some of the examples on the OpenAI Sora website to see what we mean.
Despite those bugs, further down the line OpenAI is hoping that Sora could evolve to become a realistic simulator of physical and digital worlds. In the years to come, the Sora tech could be used to generate imaginary virtual worlds for us to explore, or enable us to fully explore real places that are replicated in AI.
How can you use OpenAI Sora?
At the moment, you can't get into Sora without an invite: it seems as though OpenAI is picking out individual creators and testers to help get its video-generating AI model ready for a full public release. How long this preview period is going to last, whether it's months or years, remains to be seen – but OpenAI has previously shown a willingness to move as fast as possible when it comes to its AI projects.
Based on the existing technologies that OpenAI has made public – Dall-E and ChatGPT – it seems likely that Sora will initially be available as a web app. Since its launch ChatGPT has got smarter and added new features, including custom bots, and it's likely that Sora will follow the same path when it launches in full.
Before that happens, OpenAI says it wants to put some safety guardrails in place: you're not going to be able to generate videos showing extreme violence, sexual content, hateful imagery, or celebrity likenesses. There are also plans to combat misinformation by including metadata in Sora videos that indicates they were generated by AI.
Three years after the feature first launched, Meta has decided to disable cross-messaging between Facebook and Instagram.
The company introduced cross-app chats back in 2020, letting users from the two platforms talk to each other with ease. There were even plans to extend the interoperability with CEO Mark Zuckerberg saying at one point he wanted to have all of Meta’s messaging apps working together. But those dreams have been squashed as a recently updated Instagram Help Center page states communication is ending sometime in “mid-December 2023”. An exact date was not given.
The support website lays out what’ll happen after deactivation. In addition to users being unable to “start new conversations or calls”, all pre-existing chats made with a Facebook account will become read-only. Facebook users, in turn, will not be able to see an Instagram profile’s Activity Status or view any read receipts. Plus, Meta will not be moving any conversations to Messenger. If you want to begin a new chat, you’ll have to start from scratch on the respective platform.
Prepping for the future
Currently, we have no idea why this is happening. Meta has yet to make an official announcement explaining the move. However, 9To5Google theorizes it may have something to do with Europe's Digital Markets Act (DMA).
To give you a crash course, the European Union passed the DMA in 2022 as a way to prevent major tech corporations (or “gatekeepers,” as the bill calls them) from gaining a monopoly over the tech industry. One of the law's provisions is that these large companies, which fall under the EU's purview, must “offer interoperability between messaging platforms.” It’s important to point out that Meta has been scaling back its Messenger service for some time now, including ending support for the SMS standard and shutting down Messenger Lite.
The company might instead prop up WhatsApp as its main, DMA-compliant messaging service. WABetaInfo found evidence of this last September, with Meta working on allowing WhatsApp users to send texts to third-party apps. No word on when this support will officially be released, but it could be soon. Every corporation designated as a gatekeeper by the DMA must comply with the law by March 6, 2024.
We reached out to Meta asking if it could give an exact date for when the cross-chat feature will go offline and explain why it’s making the change. We’ll update this story if we hear back.
Microsoft is giving users more control over what’s installed with Windows 11, and how its own services are embedded in the OS – but the catch is this is just happening in Europe (specifically the European Economic Area or EEA).
Windows Central stumbled across a blog post from Microsoft describing the changes being made, and noting that many of these are to comply with the Digital Markets Act in the EEA.
Among the changes, it’s possible to banish Bing results from the Windows search box in the taskbar. These are the web search results that pop up, whatever you’re looking for, and then fire up Bing in Edge if clicked.
On top of that, with the Widgets panel, Microsoft is allowing for the news and adverts feed to be switched off, so the board will purely play host to widgets (imagine that – and you’ll have to imagine it, sadly, outside of the EEA).
European users will also be asked if they want to sync up their Microsoft account with Windows 11 (rather than it just happening by default).
And finally, in the EEA if you click a browser link, it will open in your default browser – meaning that Microsoft’s own software will no longer insist on firing up Edge regardless of your preference (which makes sense, as if you’ve uninstalled Edge, that could get tricky).
When will all this happen? The changes are rolling out in preview now for Windows 11, and will follow for Windows 10 too, with the aim being to achieve compliance with the European regulations and deploy to consumers by March 6, 2024.
Analysis: Choices for everyone? Not likely
Clearly these are customization options which many Windows 11 users would love to benefit from. Especially the ability to have the widgets board with no distractions, just pure widgets, as well as having links open in your default browser always, without fail, and unhooking Bing from the taskbar search box.
Let’s face it, in the latter case, when you’re quickly searching for a Windows setting, you don’t want to be spammed with meaningless web search suggestions that try to get you to open up Edge (and Bing.com).
Will these choices – and other perks like the ability to remove the Edge browser – come to other regions outside of the EEA? Well, that seems very unlikely, seeing as Microsoft is having its hand forced here, and it’s about complying with regulations (that aren’t in place elsewhere) rather than genuinely catering to the wants and needs of users. So, a wider expansion of these options seems a forlorn hope, sadly.
Remember that Copilot is not available in the EEA yet, and this is due to regulatory issues – so these moves could well be tied up in Microsoft’s overall scheme of things for deploying the AI to Windows 11 users in this region.
As Microsoft puts it: “We look forward to continuing to work with the European Commission to finalize our compliance obligations.” And we take that to mean Copilot shouldn’t be too far off for the EEA, with any luck for those who live there.
The one positive for the rest of the world is that at least the streamlining of the default app roster in Windows 11 is happening for everyone. This is something Microsoft has been working on for some time: users are getting the ability to remove the likes of the Photos and Camera apps, the Tips app and Steps Recorder are being axed, and the Maps and Movies & TV clients have been dropped. Those streamlining efforts should be an ongoing drive, too, we reckon.
Google is taking on Microsoft at its own game as the tech giant has begun testing its own image generation tool on the AI-powered Search Generative Experience (SGE).
It functions almost exactly like Bing Chat: you enter a prompt directly into Google Search, and after a few seconds, four images pop out. What’s unique about it is you can choose one of the pictures and develop it even further by editing its description to add more detail. Google gives the example of asking SGE to generate “a photorealistic image of a capybara” cooking breakfast in the forest. The demo then shows you how to alter specific aspects like changing the food the animal is cooking, from bacon to hash browns, or swapping out the backdrop from trees to the sky.
“Google Search now has the capability to produce images from prompts, indicating the integration of a #DALLE3 or @midjourney alternative! 🔥” – spotted by @SaadhJawwadh #GenAI #SGE pic.twitter.com/CMZQu8FXuE (October 12, 2023)
This feature won’t be locked to just Google Search as the company states you might “see an option to create AI-generated images directly in Google Images”. In that instance, one of the image search results will be replaced with a button offering access to the engine. The creation will slide in from the right in its own sub-window.
Limitations
There are some restrictions to this experiment. SGE includes safeguards that will block content that runs counter to the company’s policy for generative AI. This includes, but is not limited to, promoting illegal activities, creating misinformation, and generating anything sexually explicit that isn’t educational or “artistic”. Additionally, every picture that comes out will be marked with “metadata labeling” plus a watermark indicating it was made by an AI.
Further down the line, AI content will receive its own About This Image description giving people important context about what they’re looking at. Google clearly does not want to be the source of misinformation on the internet.
Google states in the announcement this test is currently only available in English to American users who have opted into the SGE program. You also must be 18 years or older to use it. What isn’t mentioned is that not everyone will be given access. This includes us, which is why we’re unable to share our creations with you.
Besides pictures, you can ask SGE to write up drafts for messages or emails if you’re not very good with words. Google gives the example of having the AI “write a note to a contractor asking for a quote” for renovating a part of your house. Once that’s done, you can take the draft into either Google Docs or Gmail where you can tweak it and give it your voice. The company states this particular content has the same level of protection as everything under the Google Workspace umbrella, so your data is safe.
Like the image generation, SGE drafts are rolling out to American users in English. No word if there are plans for an international release, although we did ask.
The Apple Vision Pro is probably the most ambitious product to be announced this year, combining an interesting design and staggering price tag with some exciting-sounding tech that could make it really stand out from other virtual reality headsets (not that Apple wants the Vision Pro to be considered a VR headset). It features two 4K micro-OLED panels that are currently sourced only from Sony, with the manufacturer capping annual display production for the headset at just a million units.
This puts a spanner in the works: not only will Apple be unable to produce as many units as it might need, but it also has no negotiating power on component prices, as only Sony makes them. However, it seems two Chinese suppliers are currently being evaluated to produce the micro-OLED technology, which could enable mass production and, hopefully, a cheaper model.
According to The Information, two people “with direct knowledge of the matter” claim that Apple is “testing advanced displays” by two companies for possible inclusion in future models.
A source cited in the article also hints at the possibility of a non-Pro, more financially accessible version of the Vision headset, stating that Apple is evaluating displays from BOE and SeeYa – the two companies in question – for future models of both the Vision Pro and a cheaper headset internally code-named N109, which The Information previously reported was in an early stage of development.
The cheaper the better
Apple already uses BOE for iPad and iPhone displays, so there is a good chance the two would collaborate again on Vision Pro panels. When the augmented reality headset was announced in June of this year, its steep $3,500 price tag caused concern about who could actually afford to buy one.
In a time when people are concerned with the cost of living, who is this device actually for? During WWDC 2023, many people felt there was no clear audience for the Vision Pro, and at $3,500 not many would be willing to shell out just to give the experimental technology a try.
Hopefully, as Apple searches for cheaper display manufacturers and considers a more ‘basic’ Vision headset, it will give more people a chance to try out the impressive tech. Obviously, a cheaper alternative will have watered-down features, but I would rather spend half the price on a headset I can afford, even if it’s missing a few features, than be completely priced out of such exciting tech.
Creating all kinds of documents with Google Docs could now prove a lot easier thanks to a new update.
The word processor tool from Google Workspace is now leveraging a boost in its smart chips technology to be able to create different types of specialized documents such as invoices or contracts.
Far from having to manually input and tweak your document to get it into exactly the right format, Google Docs users will now be able to set pre-defined items and placeholders, with the software automatically creating the type of file needed.
Google Docs smart chips
“Today, we’re introducing variable chips, a new feature that makes document creation for things like invoices, contracts, or broader communications much easier,” a Google Workspace update blog post announcing the news said.
Users will be able to pre-define and insert placeholders such as a client name, contract number, or an address, and then update them throughout the entire document simply by editing the value in one place.
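To picture what variable chips are doing, the behavior is loosely analogous to filling a template from a single set of values. The short Python sketch below is purely a conceptual illustration, not anything Google exposes; the field names and values are invented. Changing a value once changes every placeholder that references it:

```python
# Conceptual sketch of the "define a value once, update it everywhere" idea
# behind variable chips. This is plain Python, not a Google Docs API.
from string import Template

contract = Template(
    "This agreement is between $client_name and Example Co.\n"
    "Invoice $invoice_number will be sent to $client_name at $client_address."
)

values = {
    "client_name": "Acme Corp",        # edit this once...
    "invoice_number": "INV-0042",
    "client_address": "1 Example Street",
}

print(contract.substitute(values))     # ...and it updates everywhere it appears
```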
The update is available now, with no admin control necessary for business users. It will be available to Google Workspace Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, Education Plus customers and Nonprofits only, meaning users with personal Google accounts won't get access. Rollout has started now, with users set to see the new feature over the next few weeks.
Opinion – a possible Google Docs game-changer?
As someone who creates all kinds of different documents within Google Docs, getting the right format and layout is often one of the trickiest things to nail down – whether it's a news article, a formal letter, or a contract, everything needs to be formatted in the correct way.
This launch shows Google Docs paying heed to such concerns in a way that Microsoft Word and other competitors are still yet to fully do, and could be a game-changer for workers around the world. Spelling an end to fiddly manual editing processes, the use of smart chips for intelligent editing and formatting could be incredibly valuable, and I'm all for it.
Coming on the heels of other new features such as collapsible headings, which make longer documents much easier to consume, and tweaks to tables of contents, Google Docs is finally becoming a true tool for all players.
Google is looking to up its game in the video conferencing space with the launch of several new AI-powered tools and services.
The company has revealed that Google Meet is getting some new AI boosts, aimed at making it a core part of your everyday working life, but also one that reflects you.
Revealed at Google I/O 2023, the tools use the newly-announced Duet AI for Google Workspace platform to allow users to generate their own customized virtual backgrounds based on text descriptions, opening up a whole world of possibilities.
Google Meet Duet AI
Google had hinted at plans for generative AI backgrounds in Meet earlier this year as part of its big Workspace AI push, but this marks the first time we've seen the technology in action.
In a demo at Google I/O, the company was able to demonstrate how just a few words could generate detailed backgrounds that let users show off a bit more personality whilst on a call. The company also mentioned potential examples such as a salesperson tailoring their background depending on which prospective client they are meeting with, or a manager celebrating the employee of the month with a personalized background of their favorite things in a team call.
“It's a subtle, personal touch to show you care about the people you're connecting with and what's important to them,” a Google blog post announcing the changes noted.
Duet AI is already a central presence across Google Workspace, working in the background to assist with tasks such as writing Gmail messages or giving you prompts in Google Docs.
Along with Meet, the system is also being geared up for use across other key Google Workspace services such as Slides and Sheets, as the company looks to make all your working tools smarter and more intuitive.
Opinion – enough to triumph over Microsoft Teams?
AI is everywhere in the software world right now, as companies of all sizes scramble to include the technology in their processes and platforms.
Video conferencing should be an ideal place for AI to make a real impact, boosting signal strength and call quality. But personalization is also another way for this new era of technology to make a difference. Now we're all used to video calls, making them more bearable is the next step, and customized backgrounds could be a small step towards that.
Just days after Microsoft Teams announced a whole host of new virtual backgrounds aimed at enhancing collaboration and productivity, including collections aimed at boosting mental health, Google Meet will be hoping its generative AI offering will be enough to capture users' attention.
In the end, it remains to be seen – do you want your workplace calls to be unique, special and customized, or would you rather keep to some veneer of professionalism?
ChatGPT 4 is coming as early as next week and will likely arrive with a new and potentially dreadful feature: video.
Currently, ChatGPT and Microsoft’s updated Bing search engine are powered by the GPT-3.5 large language model, which allows them to respond to questions in a human-like way. But both AI implementations have had their fair share of problems so far, so what can we expect, or at least hope to see, with a new version on the horizon?
According to Microsoft Germany’s CTO, Andreas Braun (as reported by Neowin), the company “will introduce GPT 4 next week, where we will have multimodal models that will offer completely different possibilities – for example, videos.” Braun made the comments during an event titled ‘AI in Focus – Digital Kickoff’.
Essentially, AI is definitely not going away anytime soon. In its current state, we can interact with OpenAI's chatbot strictly through text, providing inputs and controls and getting conversational, mostly helpful, answers.
So the idea of having ChatGPT-powered chatbots, like the one in Bing, being able to reply in mediums other than plain text is certainly exciting – but it also fills me with a bit of dread.
As I mentioned earlier, ChatGPT’s early days were marked by some strange and controversial responses that the chatbots gave to users. The one in Bing, for example, not only gave out incorrect information, but it then argued with the user who pointed out its mistakes, causing Microsoft to hastily intervene and limit the number of responses it can provide in a single chat (a limit Microsoft is only now slowly raising again).
If we start seeing a similar streak of weirdness with videos, there could be even more concerning repercussions.
Ethics of AI
In a world where AI-generated ‘deepfake’ videos are an increasing concern for many people, especially those who unwittingly find themselves starring in those movies, the idea of ChatGPT dipping its toes into video creation is a bit worrying.
If people could ask ChatGPT to create a video starring a famous person, that celebrity would likely feel violated. While I’m sure many companies using ChatGPT 4, such as Microsoft, will try to limit or ban pornographic or violent requests, the fact that the ChatGPT code is easily available could mean more unscrupulous users could still abuse it.
There’s also the matter of copyright infringement. AI-generated art has come under close scrutiny over where it takes its samples from, and this will likely be the case with videos as well. Content creators, directors and streamers will likely take a dim view of their works being used in AI-generated videos, especially if those videos are controversial or harmful.
AI, especially ChatGPT, which only launched a few months ago, is still in its infancy, and while its potential has yet to be fully realised, the same is true of the moral implications of what it can achieve. So, while Microsoft’s boasts about video coming soon to ChatGPT are impressive and exciting, the company also needs to be careful and make sure both users and original content creators are looked after.