On Thursday, March 16, Microsoft is planning to reveal more of its grand scheme for weaving AI chatbot ChatGPT's features into yet more aspects of our lives – specifically, how the tech firm has big plans to “reinvent productivity with AI”.
Besides being utterly meaningless corporate marketing jargon, this notion of ‘reinventing productivity’ is concerning, not least because we don’t know what it actually entails yet. Speculation is rife that Microsoft plans to integrate ChatGPT into the Microsoft 365 (formerly Office) software suite, along with the Dynamics 365 suite for enterprise use.
This comes hot on the heels of Microsoft shoving the chatbot into almost everything it owns. Starting out with the integration of ChatGPT into Bing and following rapidly with AI-powered additions to Skype and the Windows 11 taskbar, Microsoft has been going hard when it comes to AI in its software.
We had already speculated about the ways in which ChatGPT could transform Microsoft’s consumer software suite, so it’s not like this is a huge surprise. However, I’m worried about the whole prospect; Microsoft is rushing into its AI implementation plan, and it’s going to cause more problems than it solves.
The AI arms race
Microsoft’s apparent desire to shoehorn AI features into yet more of its products is likely a response to competitor Salesforce’s own moves in partnering up with ChatGPT creator OpenAI to bring the chatbot to Slack (as well as Snapchat introducing its own AI chatbot). This sort of reactionary decision-making is rarely a wise move, especially when it involves AI.
We’re witnessing a real-time arms race to cram AI tech into every aspect of our lives, and I wouldn’t trust Microsoft (or any huge tech company, like Google or Meta) to be the harbingers of this chatbot renaissance. Right now, Microsoft is demonstrating a lack of caution when it comes to ChatGPT and AI in general, especially since it’s a space yet to see serious regulation from major governments.
I will admit that AI coming to the 365 suite is actually a much less horrible idea than, say, letting ChatGPT make video content. The ability to ask ChatGPT something simple like ‘add some animations to my PowerPoint presentation’ or ‘reformat this text document as a letter’ is both useful and relatively non-threatening – though the potential for Microsoft Word to simply write content for you is a bit concerning, especially for the academic space.
I’m not saying that ChatGPT being added to these tools is going to ruin our lives, but it has issues – and I’m definitely not convinced that Microsoft is taking all the right precautions here. This is a situation where caution will be rewarded; Google isn’t letting people get up close and personal with its new AI just yet, and Microsoft itself had to limit the Bing chatbot’s replies after a whole load of weirdness from the AI. Charging ahead with more AI tools right now? Not a good look.
Twitter-based leaker @PhantomOfEarth pointed out @XenoPanther’s discovery of a bunch of strings tucked away under the hood in build 25314 referring to taskbar grouping.
“👀 More signs of a feature for choosing taskbar app grouping behavior/showing labels (in simple terms, never combine is returning) in 25314, could we finally see it soon? New strings: ‘Options to group similar windows on taskbar’ / ‘Show labels on taskbar pins’” https://t.co/JN4XKSkIdA (March 9, 2023)
What does this mean? Well, it’s a tantalizing hint that as previously rumored, Microsoft is going to bring back the ‘never combine’ option when it comes to grouping apps on the taskbar.
In other words, rather than having multiple instances of the same app automatically grouped together on the taskbar (stacked vertically), you can have each of them separated into individual entries (horizontally) on the bar.
Elsewhere in build 25314, there are some minor tweaks, the most significant of those being a change to File Explorer, namely the addition of Access Keys. These are single keystroke shortcuts labeled by a single letter in the context menu of File Explorer – simply hit the relevant key to swiftly execute the command in question.
Furthermore, those using Azure Active Directory will now see recommendations for files they might find useful or relevant at the top of File Explorer Home.
Analysis: It seems like Microsoft is finally listening on the taskbar
When cooking up Windows 11, Microsoft made some mystifying decisions with the interface, leaving out some core bits of functionality seen in Windows 10, most notably with the taskbar. The ability to never combine (stack up) running instances of the same app was one of those features that got dropped.
To see a glimmer of hope that it might be inbound in the future, then, is certainly welcome – although we still question exactly why it has taken Microsoft so long to look at implementing this. And we must remember that this is only tinkering in the background in early testing for now; it’s possible nothing will come of it, though frankly we’re trying not to entertain that possibility. For us, personally, the lack of this feature is a deal-breaker in Windows 11.
As for the other notable taskbar omissions Microsoft made with Windows 11, drag-and-drop support was returned to the bar not so long ago. And in the future, we may also see the resurrection of the ability to move the taskbar from the bottom of the screen to the sides or top. (Currently, it’s locked down at the foot of the screen for Windows 11 users, whereas those on Windows 10 can move it around, of course).
So, it seems that Microsoft is slowly rethinking and reversing course on its taskbar philosophy with Windows 11, and frankly, it’s about time – especially given all the feedback and voices shouting about these bits of functionality being stripped away for no good reason (none that we can think of, anyway). And don’t give us any excuses about streamlining or simplifying the UI; these can be options in Settings that no one who isn’t bothered about this sort of thing ever has to look at.
In short, Microsoft, please keep going along this path of reversal, because, you know, we’d like to get back to a Windows 10 level of functionality with the taskbar, if that’s okay?
WhatsApp could soon be adding expiration dates to group chats so you no longer have to deal with so much clutter in your inbox.
If you open up WhatsApp right now, we expect you’ll find a backlog of defunct chats: group projects that were handed in long ago, events that happened years ago, and school friends you haven’t seen in a decade. You’ve probably long forgotten about most of these, yet their messages and images are still clogging up your smartphone’s storage.
According to a leak (via WaBetaInfo), WhatsApp is set to get a feature called 'Expiring Groups' that will help unclog your inbox. If the feature is added you should find it on the group’s info page; there you’ll be able to set when you’re prompted to 'clean up' the group, choosing either one day, one week, or a custom date. You’ll also be able to remove a group’s expiration date.
Based on the leaked screenshot, each user would have to set their own expiration date for the group, and it looks like WhatsApp won’t automatically delete the group. Instead, it will seemingly remind you that it might be time to leave the group or delete it, but the decision will be yours.
We’d suggest taking this news with a pinch of salt, though. WaBetaInfo has noted that the Expiring Groups feature is still in development – so not only is it apparently not ready for a full release, it’ll likely be some time before the feature makes its way to the WhatsApp beta. As such there’s a chance we’ll never see the feature launch – the developers could decide to scrap it – or by the time Expiring Groups launches, it could function differently.
You don’t have to wait for the Expiring Groups feature to launch if you want to remove old WhatsApp groups. In your inbox, long-press on the chat you want to remove, tap the menu button (the three dots) that appears in the corner, then tap Exit Group. Alternatively, you can Archive the chat so that it’s no longer in your inbox, but you can return to it later if you choose to.
Some of the most popular Google Workspace apps are set to look rather different following the rollout of a new design scheme from the company.
Users of the likes of Google Docs, Sheets, and Slides will soon start seeing a refresh as the company's “Material You” design expands to its office software.
As well as cosmetic changes, the upgrade will also bring some new features for users, including a redesigned Google Docs toolbar and a streamlined design interface across the apps.
Google Workspace changes
“In the coming weeks, you’ll notice a new look and feel for Google Drive, Docs, Sheets, and Slides on the web,” a Google Workspace blog post announcing the news noted. “Following the release of Google Material Design 3, the refreshed user interface is purposefully designed to streamline core collaboration journeys across our products.”
“These key visual and interactive design changes will help you get your best work done faster by emphasizing the tools within our products used most frequently.”
The company says the refresh, inspired by Google’s Material Design 3, gives some of its most popular tools a more modern look, with a simpler, more streamlined UI that helps users work more efficiently.
Some of the more obvious changes users will spot include the new Google Docs toolbar, which is now a long “pill” shape that is thicker and stretches across your browser window. There are more precise options to change your text size a single point at a time (rather than in jumps of two or more), and a number of new dropdown menus that group similar functions, such as paragraph formatting, into a single location.
The ubiquitous “share” button also has a softer, more rounded design, in a lighter blue/green/yellow shade depending on which program you are using, and the button to start a Google Meet call directly from a document has been simplified from a multi-color option to a basic camera icon.
Elsewhere, the status information (including last edit and version history) is now gathered into a single clock-face icon in the top right-hand corner, and there is an improved interface for setting rulers and gridlines.
The changes are rolling out now, and will be available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers, and users with personal Google accounts.
Microsoft has deployed a new version of its Bing chatbot (v96) featuring improvements to make the AI smarter in a couple of key areas – and a big change has been flagged up as imminent, too.
Mikhail Parakhin, who heads up the Advertising and Web Services division at Microsoft, shared this info on Twitter (via MS Power User).
“OK, it took longer than we initially expected, but finally Bing Chat v96 is fully in production. Give it a try! Now, onto fully shipping the tri-toggle…” (February 28, 2023)
So, what’s new with v96? Parakhin explains that users of the ChatGPT-powered Bing will now experience a ‘significant’ reduction in the number of times that the AI simply refuses to reply to a query.
There will also be “reduced instances of hallucination in answers” apparently, which is industry lingo meaning that the chatbot will produce fewer mistakes and inaccuracies when responding to users. In short, we should see less misinformation being imparted by the chatbot – and there have been some worrying instances of that occurring recently.
The other major news Parakhin delivers is that the so-called tri-toggle, known more formally as the Bing Chat Mode selector – featuring three settings to switch between different personalities of the AI Bing – is set to go live in the “next couple of days” we’re told.
Analysis: Long and winding road ahead
The ability to switch between a trio of personalities is the big change for the Bing chatbot, and to hear that it’s imminent is exciting stuff for those who have been engaging with the AI thus far.
As detailed previously, the trio of personalities available are labeled as Precise, Balanced, and Creative. The latter is set to provide a chattier experience, and Precise will offer a shorter, more typical ‘search result’ delivery, with Balanced being a middle road between the two. So, if you don’t like how the AI is responding to you, at least there will be choices to alter its behavior.
Various versions of the Chat Mode selector have been tested, as you would imagine, and the final model has just been picked. It’s now being honed ahead of a release that, as noted, should happen later this week – though we’re guessing there’ll be plenty of further fine-tuning to be done post-release.
Certainly, if the overall Bing AI experience is anything to go by, that seems likely: the whole project is, of course, still in its early stages, and Microsoft is chopping and changing things – sometimes in huge ways – seemingly without much caution.
The current tuning for v96 to ensure Bing doesn’t get confused and simply not reply will help make the AI a more pleasant virtual entity to interact with, and the same will hopefully be true for the ability to switch personalities.
At the very least, the Creative personality should inject some much-needed character back into the chatbot, which is what many folks want – because if the AI behaves pretty much like a search engine, then the project seems a bit dry and frankly in danger of being judged as pointless. After all, the entire drive of this initiative is to make Bing something different rather than just a traditional search experience.
It’s going to be a long road of tweaking for the Bing AI no doubt, and the next step after the personalities go live will likely be to lift that chat limit (which was imposed shortly after launch) to something a bit higher to allow for more prolonged conversations. If not the full-on rambles initially witnessed, the ones that got the chatbot into hot water for the oddities it produced…
ChatGPT has quickly become one of the most significant tech launches since the original Apple iPhone in 2007. The chatbot is now the fastest-growing consumer app in history, hitting 100 million users in only two months – but it's also a rapidly-changing AI shapeshifter, which can make it confusing and overwhelming.
That's why we've put together this regularly-updated explainer to answer all your burning ChatGPT questions. What exactly can you use it for? What does ChatGPT stand for? And when will it move to the next-gen GPT-4 model? We've answered all of these questions and more below. And no, ChatGPT wasn't willing to comment on all of them either.
In this guide, we'll mainly be covering OpenAI's own ChatGPT model, launched in November 2022. Since then, ChatGPT has sparked an AI arms race, with Microsoft using a form of the chatbot in its new Bing search engine and Microsoft Edge browser. Google has also quickly responded by announcing a chatbot, tentatively described as an “experimental conversational AI service”, called Google Bard.
These will be just the start of the ChatGPT rivals and offshoots, as OpenAI is offering an API (or application programming interface) for developers to build its skills into other programs. In fact, Snapchat has recently announced a chatbot called 'My AI' that runs on the latest version of OpenAI's tech.
For now, though, here are all of the ChatGPT basics explained – along with our thoughts on where the AI chatbot is heading in the near future.
What is ChatGPT?
ChatGPT is an AI chatbot that's built on a family of large language models (LLMs) that are collectively called GPT-3. These models can understand and generate human-like answers to text prompts, because they've been trained on huge amounts of data.
For example, ChatGPT's most recent GPT-3.5 model was trained on 570GB of text data from the internet, which OpenAI says included books, articles, websites, and even social media. Because it's been trained on hundreds of billions of words, ChatGPT can create responses that make it seem like, in its own words, “a friendly and intelligent robot”.
This ability to produce human-like, and frequently accurate, responses to a vast range of questions is why ChatGPT became the fastest-growing app of all time, reaching 100 million users in only two months. The fact that it can also generate essays, articles, and poetry has only added to its appeal (and controversy, in areas like education).
But early users have also revealed some of ChatGPT's limitations. OpenAI says that its responses “may be inaccurate, untruthful, and otherwise misleading at times”. OpenAI CEO Sam Altman also admitted in December 2022 that the AI chatbot is “incredibly limited” and that “it's a mistake to be relying on it for anything important right now”. But the world is currently having a ball exploring ChatGPT and, despite the arrival of a paid ChatGPT Plus version, you can still use it for free.
What does ChatGPT stand for?
ChatGPT stands for “Chat Generative Pre-trained Transformer”. Let's take a look at each of those words in turn.
The 'chat' naturally refers to the chatbot front-end that OpenAI has built for its GPT language model. The second and third words show that this model was created using 'generative pre-training', which means it's been trained on huge amounts of text data to predict the next word in a given sequence.
Lastly, there's the 'transformer' architecture, the type of neural network ChatGPT is based on. Interestingly, this transformer architecture was actually developed by Google researchers in 2017 and is particularly well-suited to natural language processing tasks, like answering questions or generating text.
ChatGPT was released as a “research preview” on November 30, 2022. A blog post casually introduced the AI chatbot to the world, with OpenAI stating that “we’ve trained a model called ChatGPT which interacts in a conversational way”.
The interface was, as it is now, a simple text box that let users ask follow-up questions. OpenAI said that the dialogue format, which you can now see in the new Bing search engine, allows ChatGPT to “admit its mistakes, challenge incorrect premises, and reject inappropriate requests”.
ChatGPT is based on a language model from the GPT-3.5 series, which OpenAI says finished its training in early 2022. But OpenAI did also previously release earlier GPT models in limited form – its GPT-2 language model, for example, was announced in February 2019, but the company said it wouldn't release the fully-trained model “due to our concerns about malicious applications of the technology”.
OpenAI also released a larger and more capable model, called GPT-3, in June 2020. But it was the full arrival of ChatGPT in November 2022 that saw the technology burst into the mainstream.
How much does ChatGPT cost?
ChatGPT is still available to use for free, but now also has a paid tier. After growing rumors of a ChatGPT Professional tier, OpenAI said in February that it was introducing a “pilot subscription plan” called ChatGPT Plus in the US. A week later, it made the subscription tier available to the rest of the world.
ChatGPT Plus costs $20 per month (around £17 / AU$30) and brings a few benefits over the free tier. It promises to give you full access to ChatGPT even during peak times, when free users will otherwise frequently see “ChatGPT is at capacity right now” messages.
OpenAI says that ChatGPT Plus subscribers also get “faster response times”, which means you should get answers around three times quicker than on the free version (which is itself no slouch). And the final benefit is “priority access to new features and improvements”, like the experimental 'Turbo' mode that boosts response times even further.
It isn't clear how long OpenAI will keep its free ChatGPT tier, but the current signs are promising. The company says “we love our free users and will continue to offer free access to ChatGPT”. Right now, the subscription is apparently helping to support free access to ChatGPT. Whether that's something that continues long-term is another matter.
How does ChatGPT work?
ChatGPT has been created with one main objective – to predict the next word in a sentence, based on what's typically happened in the gigabytes of text data that it's been trained on.
Once you give ChatGPT a question or prompt, it passes through the AI model and the chatbot produces a response based on the information you've given and how that fits into its vast amount of training data. It's during this training that ChatGPT has learned what word, or sequence of words, typically follows the last one in a given context.
For a long deep dive into this process, we recommend setting aside a few hours to read this blog post from Stephen Wolfram (creator of the Wolfram Alpha search engine), which goes under the bonnet of 'large language models' like ChatGPT to take a peek at their inner workings.
But the short answer? ChatGPT works thanks to a combination of deep learning algorithms, a dash of natural language processing, and a generous dollop of generative pre-training, which all combine to help it produce disarmingly human-like responses to text questions. Even if all it's ultimately been trained to do is fill in the next word, based on its experience of being the world's most voracious reader.
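If you want a feel for that next-word objective, here's a deliberately tiny sketch (our own toy illustration, nothing like OpenAI's actual training code) that counts which word follows which in a miniature 'corpus' and always predicts the most frequent continuation:

```python
from collections import Counter, defaultdict

# A miniature training corpus. A real LLM trains on hundreds of
# billions of words, and conditions on the whole preceding context,
# not just the single previous word.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased a mouse ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' – seen twice, vs once each for 'mat', 'dog', 'rug'
print(predict_next("sat"))  # 'on'
```

ChatGPT replaces these raw counts with probabilities learned by a neural network over its entire context window, but the training objective is recognisably the same: make the word that actually came next the most likely prediction.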
What can you use ChatGPT for?
ChatGPT has been trained on a vast amount of text covering a huge range of subjects, so its possibilities are nearly endless. But in its early days, users have discovered several particularly useful ways to use the AI helper.
Language-based tasks are a particular strength: ChatGPT can handle translations, help you learn new languages (watch out, Duolingo), generate job descriptions, and create meal plans. Just tell it the ingredients you have and the number of people you need to serve, and it'll rustle up some impressive ideas.
But ChatGPT is also equally talented at coding and productivity tasks. For the former, its ability to create code from natural speech makes it a powerful ally for both new and experienced coders who either aren't familiar with a particular language or want to troubleshoot existing code. Unfortunately, there is also the potential for it to be misused to create malicious emails and malware.
ChatGPT doesn't currently have an official app, but that doesn't mean that you can't use the AI tech on your smartphone. Microsoft released new Bing and Edge apps for Android and iOS that give you access to their new ChatGPT-powered modes – and they even support voice search.
The AI helper has landed on social media, too. Snapchat announced a new ChatGPT sidekick called 'My AI', which is designed to help you with everything from designing dinner recipes to writing haikus. It's based on OpenAI's latest GPT-3.5 model and is an “experimental feature” that's currently restricted to Snapchat Plus subscribers (a subscription that costs $3.99 / £3.99 / AU$5.99 a month).
When will ChatGPT move to GPT-4?
OpenAI CEO Sam Altman has confirmed that the company is working on a successor to the GPT-3.5 language model used to create ChatGPT and, according to the New York Times, this is GPT-4.
Despite the huge number of rumors swirling around GPT-4, there is very little confirmed information describing its potential powers or release date. Some early rumors suggested GPT-4 might even arrive in the first few months of 2023, but more recent quotes from Sam Altman suggest that could be optimistic.
For example, in an interview with StrictlyVC in February the OpenAI CEO said in response to a question about GPT-4 that “in general we are going to release technology much more slowly than people would like”.
He also added that “people are begging to be disappointed and they will be. The hype is just like… We don’t have an actual AGI and that’s sort of what’s expected of us.” That said, rumors from the likes of the New York Times have suggested that Microsoft's new Bing search engine is actually based on a version of GPT-4.
While GPT-4 is unlikely to bring anything as drastic as graphics or visuals to the text-only chatbot, it is expected to improve on the current ChatGPT's already impressive skills in areas like coding. We'll update this article as soon as we hear any more official news on the next-gen ChatGPT technology.
Google has announced that it's bringing end-to-end encryption to group chats in the Google Messages app. The security upgrade is heading to beta users first before being rolled out more widely.
End-to-end encryption means no one, not even Google, can read the content of messages. It's already supported in the Google Messages app for one-to-one chats, but now (via The Verge) it's going to be added to group conversations as well.
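The core idea can be sketched with a toy key-agreement exchange (purely our illustration; Google's actual implementation is based on the far more sophisticated Signal protocol):

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement. Illustrative only: real E2EE
# messaging uses authenticated, vetted elliptic-curve groups and key
# ratcheting, not a hand-picked prime like this one.
P = 2**127 - 1   # a Mersenne prime, fine for a demo
G = 5            # generator for the demo

# Each device keeps its private key to itself...
alice_private = secrets.randbelow(P - 2) + 1
bob_private = secrets.randbelow(P - 2) + 1

# ...and only the public halves ever cross the network (or Google's servers).
alice_public = pow(G, alice_private, P)
bob_public = pow(G, bob_private, P)

# Both ends arrive at the same shared secret; a server that only saw
# the public values cannot feasibly compute it.
alice_key = hashlib.sha256(str(pow(bob_public, alice_private, P)).encode()).digest()
bob_key = hashlib.sha256(str(pow(alice_public, bob_private, P)).encode()).digest()

assert alice_key == bob_key  # messages sealed with this key are opaque in transit
```

Once both ends hold the same key, a message can be encrypted on one device and decrypted only on the other; the relay in the middle just shuttles ciphertext it cannot read.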
“End-to-end encryption is starting to roll out for group chats and will be available to some users in the open beta program over the coming weeks,” Google says. “This shouldn’t even be a thought – just an expectation and something anyone texting should not have to worry about.”
From SMS to RCS
In the same announcement blog post, Google revealed that the ability to quickly react to a message with any emoji is coming to Google Messages soon as well. At the moment, only a selection of emojis can be used as reactions.
Alongside a mention of these new features, Google also continued to push hard for RCS (Rich Communication Services) to become the new standard for everyone – the technology, an upgrade on SMS, is now widely available but has yet to be adopted by Apple on its iPhones.
Google's post also acknowledged the 30th anniversary of the SMS, a milestone which emphasizes how old the technology is as well as how overdue we now are for a standard that can fully replace it.
Analysis: SMS should really be history
The arrival of SMS three decades ago helped to transform the way that we communicate with each other – even if the messages were limited in terms of characters, and many phones could only store a limited number of texts at any one time.
Now, apps like WhatsApp and Slack have taken us far, far beyond those limitations. Messages can be much longer and include photos, videos or audio, and we can even tell when recipients have opened up the messages we send them.
It's benefits like these that make RCS a worthwhile upgrade, improving the security of messages and making features such as group chats much better. Google didn't create the standard, but it is heavily promoting it.
However, whenever an iPhone user texts an Android user, SMS is still the protocol used. Google wants that to change, but it's unlikely that Apple ever will – Apple knows that iMessage is one of the key reasons that people stick with iPhones.
It seems that Apple forgot about the A13 Bionic processor that powers its own Studio Display, as a recent firmware update caused the monitor to malfunction.
On April 8, Apple stopped signing iOS update 15.4, having pushed out update 15.4.1 on March 30. Normally, when Apple stops signing an update, it’s no longer available and can no longer be installed. But since the Studio Display runs 15.4 and cannot install 15.4.1, this meant that over the weekend users were out of luck.
According to MacWorld, anyone with issues using the monitor was met with a message stating: “Apple Studio Display firmware update could not be completed. Try again in an hour. If the problem persists, contact an authorized Apple service provider.”
As of April 10, Apple has fixed the issue, and users have reported that the firmware update installed without a hitch. However, the tech giant will most likely need to rethink how it signs and un-signs iOS updates, since multiple products require different versions to operate.
This isn’t the first time the Apple Studio Display needed a fix. Soon after its launch, the monitor received an update in order to fix the low quality of its webcam, as reported by multiple outlets such as TechCrunch and The Wall Street Journal.
Analysis: the perils of an ecosystem
When you have a whole lot of products that are supposed to work with each other seamlessly, but they aren't running on the same system, problems are bound to pop up.
While Apple is known for keeping its product catalog to a fairly lean lineup, the company has been expanding its offerings in recent years.
Whether it's the Apple HomePod, the recent Apple AirPods 3, or any number of its MacBook and Mac products, Apple is having to juggle a lot more discrete systems that are supposed to work without the user even really thinking about it. That seamlessness is kind of Apple's thing, so while it's funny that Apple accidentally nerfed its own high-end workstation monitor, it's also symptomatic of a growing web of interlocking products where it becomes harder to predict what effect any single change to the system will have.
While Apple typically runs a tight ship, we wouldn't be surprised if we saw more of this kind of thing in the future.