Microsoft tells us that its Bing search engine has hit new highs, now boasting 100 million daily active users.
The news came via a post on the Microsoft Bing blog site observing the difference that the ChatGPT-powered AI has made to Bing traffic in its first month of existence, pushing numbers up over 100 million for the first time ever.
The company notes that there are over a million users of the new Bing AI (in preview), and that this influx helped push the search engine to the fresh milestone.
Microsoft then goes on to say that a third of the “millions of active users” of the Bing AI are new to Bing, which shows that the chatbot is driving folks to the search engine, although there seems to be an element of confusion here.
Is it a ‘million plus’ users of the AI, or ‘millions’? While technically those aren’t contradictory statements, we don’t see why Microsoft didn’t just call it ‘millions’ in both cases as that clearly sounds like more. Like a couple of million, or a few, even? Insert shoulder shrug here.
At any rate, Microsoft also acknowledged the obvious truth that Bing is still a long old way off the pace compared to Google: “This [100 million] is a surprisingly notable figure, and yet we are fully aware we remain a small, low, single digit share player. That said, it feels good to be at the dance!”
Analysis: Can this growth be sustained?
Let’s face it: Bing AI isn’t just a chatbot. It’s a vehicle to help Bing challenge Google, and one that Microsoft hopes will be swiftly moving up through the gears to gain momentum.
This isn’t just about pushing for market share with the Bing search engine either, it’s also a line of attack for the Edge browser, as we’ve already seen with the taskbar implementation of the Bing AI in Windows 11: an icon that linked through to the Bing page, opening it up in Edge.
(Note that after the disappointment around this – the way Microsoft made it seem like the AI was integrated into the taskbar, rather than just being a link – the Bing icon has vanished from the search box for now, though we’re told it will be returning periodically.)
Anyway, we can see that Microsoft’s plan is working thus far, with the Bing AI preview successfully recruiting regular users to add to the ranks of Bing searchers – and a good dollop of them – but will this state of affairs last?
We’re doubtful. You see, the Bing chatbot is all shiny, new, and still very much an object of curiosity right now. It had a serious pull to begin with, as you’d expect from new tech, and that interest has been carried through with measures like the recent introduction of a trio of personalities to experiment with, as well as various limits Microsoft had previously imposed on chats being lifted.
And doubtless there’s still entertainment to be had prodding the AI, trying to engage with it using different angles – humor inevitably being one of them – and generally messing with the chatbot. That won’t last, though.
Don’t get us wrong, there will be serious users of the Bing bot out there, of course, but we’d imagine a sizeable chunk of the early attraction comes from the curious or mischievous.
And in that respect, initial figures are not really a yardstick of how much impact the ‘new Bing’, as Microsoft calls it, will make. If growth is sustained, and the AI is meaningfully honed and improved over the next few months, we can come back and talk about a new wave of adoption for Bing.
Until then, we remain skeptical, and our overall feeling is that Microsoft has opened the doors too early on this one. We’re not sure the AI is going to be well tuned enough to seriously impress in the way it should for some time yet, but it’s easy to see why Microsoft was keen to launch. It needs all the weaponry it can muster in the battle against Google (and Chrome for that matter), and the latter company is forging ahead with its own AI tech (Bard).
There’s no shortage of challenges you’re going to face once you decide to create a website. Whether you want to boost your business with a beautiful site or kick off that blog you’ve always wanted to create – one thing is certain: you’re going to need a solid web hosting service. It’ll be your site’s forever home, hopefully.
The good news is that there’s a perfect web hosting solution somewhere out there, regardless of your level of technical know-how and available budget. However, finding the right one for you and your business can take some time and careful consideration.
It’s not all about the cost, you know? A solution fit for a fabulous WordPress blog probably won’t fulfil the needs of an online store with an ever-growing number of visitors.
Before picking out the perfect solution for your online project, you should make sure that it provides the right amount of resources, as well as the ability to scale up/down. A seemingly small thing like this can set your site up for success right from the start.
So, to make things simpler for you, we’ll go through the three most common types of web hosting and tell you everything you should know before opting for any of them.
The three common types of web hosting
Whether you’re starting out or wish to switch your web hosting solution for a superior one, you’ll want to get to know the most common web hosting types and what each one means. After that, you can be sure you’ve made the best decision for the future of your website.
Most people start their journey into cyberspace with shared hosting. It’s not only simple to start with but also considerably cheaper than other solutions out there. Consequently, it’s also the least powerful one. This is because with this type of solution a single server is shared among several users. As a result, the resources – storage space, bandwidth, CPUs, and RAM included – are also shared.
Once your site begins to outgrow your shared hosting solution, you’ll probably want to upgrade to a virtual private server (VPS) hosting solution. Without breaking the bank, it will let you get rid of the primary drawback of shared hosting – shared resources. So, with a VPS solution, you’ll still share a physical server with other sites, but you will get a set of resources that are dedicated to your site, and your site only.
If you want to step up your game from a VPS solution and don't mind paying a pretty penny for it – you should consider dedicated server hosting. As the name suggests, you’ll get your dedicated physical server with your dedicated resources and you won’t have to share them with anyone else.
It’s no secret that shared hosting is popular for its cost-effectiveness and ease of use. It’s the cheapest of these three options, so if you can spare a mere $5 per month – you can afford it. Also, shared hosting is exceedingly easy to use, so even if you’ve never made or managed a site before, you’ll pick everything up in no time. There’s no need for technical know-how, either – the provider’s support team will walk you through every step of the journey.
Acting as a bridge between shared hosting and dedicated server hosting, VPS offers powerful performance, high uptime, superb long-term scalability, enhanced security, customizability, and control over your server space. However, the ease of use will vary depending on whether you’re using a managed or unmanaged service.
While dedicated server hosting will cost you big bucks, it can get you everything a VPS provides plus complete control over your solution. With full root access, you can perform direct server customizations without any restrictions – alter your hardware specifications, add advanced security tools, install applications across your server, and much more. Also, a dedicated solution comes with dedicated, round-the-clock support staff.
Who should use each type of web hosting?
Being the simplest solution out of the three, shared hosting is the best choice for small sites and blogs that don’t get too many visitors. It’s also a solid solution for young entrepreneurs that lack a big budget and technical know-how but don’t mind starting small.
A VPS solution offers a fine balance between resources and budget, and it’s aimed at those who have outgrown shared hosting. It’s perfect for those running high-traffic sites for small to mid-sized businesses including ecommerce platforms, SaaS providers, software engineers, and so forth.
Dedicated servers are “state-of-the-art” web hosting solutions geared towards mid-sized to big businesses – those employing over 500 people and processing massive amounts of data every day. For instance, if you’re running a booming ecommerce store with hundreds of transactions per hour, you’ll want to consider this type of solution.
The benefits of each type of web hosting
The three primary advantages of choosing a shared hosting solution are its budget-friendliness (pricing starts at $5 per month), a beginner-friendly approach (simple setup, built-in control panel, and site-building tools), and solid customer support (expect live chat support and access to a well-stocked knowledge base).
In comparison with shared hosting, VPS will get you more powerful performance, higher reliability, and the ability to scale up/down your server with ease. It also gives you more control over your server and a superb level of customizability.
The main benefit of utilizing a dedicated server solution is having dedicated resources that can keep up with and promote the growth of your business. Also, your business site (or sites) should benefit from increased speed, improved SEO, and superior security.
While fully managed dedicated servers are pretty popular with large enterprises, you can also opt for a partially managed or unmanaged server and save up some money.
Things to avoid when choosing a web hosting service
Since we’ve already shared our tips and tricks on how to choose a web hosting service, now we’re going to uncover the mistakes you should avoid making when searching for your solution.
If you want to go with shared hosting, don’t settle for a free hosting service just to save up some money at the start. It will cripple your site with seriously slow speed, unreliable uptime, and non-existent customer support. Before you know it, most of your potential customers will lose trust in your brand and your business will go bust.
Likewise, don’t purchase a shared or VPS hosting solution before trying it out with a free trial – if one is provided. If not, check whether there is a money-back guarantee. Plus, don’t forget to see what the small print says – this is true for all three types of hosting.
If you’re thinking about purchasing a self-managed VPS solution even though you aren’t particularly tech-savvy – don’t do it. It’s harder than you think, and it’ll take plenty of time before you get the hang of it.
Also, don’t fail to check out the company behind the solution, including its track record and history of security incidents – if there are any. Take some time to read reviews of your potential web hosting provider, professional reviews and customer testimonials alike. And if a provider has a bad track record, avoid it like the plague.
This is particularly important when picking out a dedicated server solution as with this type of hosting one should never make compromises in terms of security.
On a final note, don’t forget to consider the needs of your online project before picking out a web hosting package for it. What type of site do you wish to create? Will you be creating a single site or several? How do you plan to build your site? How much traffic do you expect to receive each month? What additional features do you want? And how much money are you willing to spend each month?
Once you’ve answered all these questions, you’ll be a couple of steps closer to choosing the best web hosting solution for your business.
In an effort to further secure open source software, GitHub has announced that the GitHub Advisory Database is now open to community contributions.
While the company has its own teams of security researchers that carefully review all changes and help keep security advisories up to date, community members often have additional insights and intelligence on CVEs but lack a place to share this knowledge.
This is why GitHub is publishing the full contents of its Advisory Database to a new public repository to make it easier for the community to leverage this data. At the same time, the company has built a new user interface for security researchers, academics and enthusiasts to make contributions.
All of the data in the GitHub Advisory Database is licensed under a Creative Commons license – and has been since the database was first created – to ensure that it remains free and usable by the community.
Contributing to a security advisory
In order to provide a community contribution to a security advisory, GitHub users first need to navigate to the advisory they wish to contribute to and submit their research through the “suggest improvements for this vulnerability” workflow. Here they can suggest changes or provide more context on packages, affected versions, impacted ecosystems and more.
The form will then walk users through opening a pull request that details their suggested changes. Once this is done, security researchers from the GitHub Security Lab as well as the maintainer of the project who filed the CVE will be able to review the request. Contributors will also get public credit on their GitHub profile once their contribution has been merged.
In an attempt to further interoperability, advisories in the GitHub Advisory Database repository use the Open Source Vulnerabilities (OSV) format. Oliver Chang, a software engineer on Google's Open Source Security Team, provided further details on the OSV format in a blog post, saying:
“In order for vulnerability management in open source to scale, security advisories need to be broadly accessible and easily contributed to by all. OSV provides that capability.”
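To give a flavor of what that looks like in practice, here's a minimal sketch of an OSV-style advisory represented as a Python dictionary, based on the publicly documented OSV schema. Every specific value below – the GHSA ID, CVE number, package name, and version numbers – is a hypothetical placeholder rather than a real entry from GitHub's database.

```python
import json

# A minimal, hypothetical OSV-format advisory. All identifiers and versions
# below are placeholders for illustration, not a real vulnerability record.
advisory = {
    "schema_version": "1.2.0",
    "id": "GHSA-xxxx-xxxx-xxxx",    # placeholder GitHub advisory ID
    "aliases": ["CVE-2022-00000"],  # placeholder CVE alias
    "summary": "Example vulnerability in a hypothetical npm package",
    "affected": [
        {
            "package": {"ecosystem": "npm", "name": "example-package"},
            "ranges": [
                {
                    "type": "ECOSYSTEM",
                    # vulnerable from the first release until version 1.2.3
                    "events": [{"introduced": "0"}, {"fixed": "1.2.3"}],
                }
            ],
        }
    ],
    "references": [
        {"type": "ADVISORY",
         "url": "https://github.com/advisories/GHSA-xxxx-xxxx-xxxx"}
    ],
}

# Because the format is plain JSON, tooling can consume advisories directly,
# e.g. listing the ecosystems and packages an advisory affects...
for entry in advisory["affected"]:
    pkg = entry["package"]
    print(f"{pkg['ecosystem']}: {pkg['name']}")

# ...or serializing the record for storage alongside other advisories.
print(json.dumps(advisory, indent=2))
```

This machine readability is the whole point of a shared schema: the same record can be consumed by scanners, package managers, and vulnerability databases beyond GitHub's own.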
We'll likely hear more about this change to the GitHub Advisory Database once security researchers, academics and enthusiasts begin making their own contributions to the company's database.
Rumors of the Sony WH-1000XM4 – the successors to the best headphones we've ever tested – have been ramping up in recent months, coming to a head when a Walmart listing that appears to describe all the specs of the new headphones was leaked.
While the existence of the XM4s is yet to be confirmed by Sony, the Walmart listing revealed the kind of changes (or lack thereof) we can expect over the Sony WH-1000XM3.
Now, normally we would expect a brand to make some significant changes when bringing out a successor to its last pair of headphones. In this instance, however, we’re relieved that Sony hasn’t tinkered too much with its class-leading noise-cancelling headphones.
Instead, it looks like the company is making some very considered tweaks to the XM3, which could genuinely improve the user experience without detracting from a winning design. As they say, ‘if it ain’t broke, don’t fix it’.
What’s new with the Sony WH-1000XM4?
According to the leaked listing, one of these tweaks is the ability to connect more than one audio source to the headphones at once; multipoint pairing is something users of the XM3s have been calling out for, and it will allow you to pair the WH-1000XM4s with your laptop and your smartphone at the same time.
There should also be a slight improvement to the sound quality. A feature known as “Edge-AI” can restore the detail lost in highly compressed audio files by upscaling them in real time, which Sony says delivers sound “as close as possible to that of a wired connection.”
Wireless headphones have long been considered inferior to their wired counterparts in terms of audio fidelity, so this tweak will likely appease audiophiles who haven’t yet committed to cutting the cord.
Connectivity in general should be improved too, as Sony makes the leap from Bluetooth 4.2 to Bluetooth 5, which brings faster pairing times, connectivity over longer distances, and more stable connections in high-traffic areas.
The Sony WH-1000XM3.
Anyone who has used the Sony WH-1000XM3s to make phone calls should notice an improvement in the sound quality as well, with a feature called Precise Voice Pickup that uses the headphones’ five microphones and advanced audio signal processing to make your voice sound clearer.
The noise cancellation that made the Sony WH-1000XM3s so popular is also due an upgrade. According to the leaked listing, a feature called Adaptive Sound Control will “learn to recognize locations you frequently visit, such as your workplace or your favorite cafe.”
“In addition it automatically detects what you’re up to – for example, walking, waiting, or traveling – and then adjusts ambient sound settings to best suit the situation,” says Sony. This is a feature that’s already been brought to the XM3s via a firmware update, so we've had a bit of a preview already.
These are all smart tweaks to already-great features. So what’s staying the same with the Sony WH-1000XM4?
What’s staying the same?
Aside from these little tweaks and upgrades, the new XM4s seem to be very similar to their predecessors.
It looks like there won’t be any material changes to the design of the Sony WH-1000XM4s, which we think is a great thing. We loved how comfortable the XM3s felt, with big padded earcups and a soft headband.
They also looked great, with a sleek, minimalist build that appeals to a wide range of people, and we liked the touchpad controls – another feature that will be making a return.
The sound quality shouldn’t change substantially either, aside from that AI upscaling feature that will help to curb the data loss from highly compressed files. Judging from the leaked listing, the XM4s will use the same 40mm drivers as their predecessors and retain support for Sony’s LDAC transmission technology – and as the XM3s are among the best-sounding headphones on the planet, we’re happy to see that the audio profile hasn’t been tweaked too much.
Some may be disappointed to find that there’s no improvement to battery life – but with 30 hours of juice, the Sony WH-1000XM3 weren’t exactly short-lived. Plus, with a return of USB-C fast charging, the XM4s shouldn’t take too long to top up.
A considered approach
Sony has a history of making careful tweaks to its products with each upgrade, and it’s something we’ve seen with the brand’s noise-cancelling 1000X range before.
It’s a great way of instilling a sense of trust in the products, and it makes us feel confident that each new upgrade will bring genuinely useful updates, rather than skin-deep design changes that don’t really improve the experience of using the headphones.
Sony wouldn’t be able to be subtle with its upgrades to the 1000X series if the original product wasn’t so good – and in a market where every company is trying to outdo one another with headline-grabbing features like gesture controls and built-in AI (like the TicPods Pro 2), it’s a risky move to let the sound, feel, and look of the headphones speak for itself. That's especially true with the first-ever Apple over-ear headphones looking like they're going to launch in a matter of weeks and shake up the headphones market.
Trends (or gimmicks, if you prefer) like virtual 3D audio, bone conduction, and crazy form factors (see: the Bose Frames) may come and go – but we don’t think there will ever be a time when people won’t want a great-sounding pair of noise-cancelling headphones that do their job with minimal fuss.
Hopefully, that’s exactly what the Sony WH-1000XM4 will do when they’re finally released – and with this recent leak, it’s only a matter of time before we can get our hands on them and find out for ourselves.
Can't wait until then? Check out the best Sony WH-1000XM3 deals we've found today:
In a blog post that came seemingly out of nowhere, Sony finally revealed the new DualSense PS5 controller. It marks a radical departure from the DualShock 4, but the redesigned pad will share one thing in common with its predecessor – developers will continue to ignore almost all of its unique features.
And that’s a shame, as the DualSense is stuffed full of exciting and potentially game-changing technology. Sony wants to tingle your fingertips and massage your palms in a variety of interesting ways using haptic feedback and adaptive triggers – and I’m all for it.
We’ve seen the tech used effectively in VR controllers, but if you’re new to haptic feedback it basically means you’ll feel more of what you see on screen – the sludginess as you drive a car through mud or the tension of pulling back a bow string as you shoot an arrow, for example.
The problem is – and I hate to admit this – that these features will largely be ignored by everyone but Sony’s first-party studios. History has shown us time and time again that even if you design a console entirely around a distinctive input device (hello, Nintendo Wii), third-party developers will still find a way to ignore 95% of a controller’s special qualities.
Let’s take a look at the DualShock 4 as our primary suspect. It’s got a lovely light bar which can change color to reflect what’s happening in a game, such as flashing white if you’re using a torch, or turning red if your health is low. How many games use it in this way, though? The answer is: barely any.
Next up, the DualShock 4 touchpad. If you ever needed a more concrete example of developer apathy in full effect, it’s that battery-draining touchpad. We saw Killzone Shadow Fall, a PS4 launch title, use the touchpad in some interesting ways – as did Infamous: Second Son. But how many other games can you name that transform the experience in any meaningful way using this feature? Probably no more than a handful, because basically every game just uses it as an oversized map button. Brilliant.
What about the PS4 accelerometer? A feature that’s been around since the SixAxis controller, which launched with the PlayStation 3. When did you last play a video game that used the accelerometer for something other than a silly gimmick? Yeah, didn’t think so.
Features schmeatures
But hold on. Maybe it’s because those features were rather superfluous. I mean, come on, a flashing light that you can’t even see most of the time? Who cares! Members of the court, may I present to you exhibit B: HD Rumble on Nintendo Switch.
The masters of cramming quirky technology down gamers’ throats, Nintendo always tries to introduce some bizarre new input system into its consoles. With the Nintendo Switch it was no different. We were promised the sensation of feeling ice cubes in a controller – because of course we were. Despite the technology genuinely wowing in games like 1-2-Switch, it’s basically been ignored by even Nintendo itself, and hasn’t come close to reaching the potential we were promised.
Still in denial? Okay, let’s wrap this up with one more sorry example. You might not know this, but the Xbox One controller has impulse triggers. And they’re freaking awesome and never, ever get used.
Do yourself a favor and play any of the Forza Motorsport games on Xbox One and you’ll experience a fingertip-defining moment that will make every other racing game seem a little sad in comparison. The triggers rumble and respond according to where your tyres are on the track, so you can physically feel the sensation of a wheel locking up, moving over gravel and responding to torque. It’s so damn good, but clearly not a priority for any developers.
One feature fits all
So why does this worrying trend keep happening? Truth be told, it all comes down to time and money. Video games are extremely expensive to make and require a lot of resources. There’s no monetary benefit to developers spending the extra time to code for features that are specific to one console. Occasionally it can happen, but it’s an anomaly.
The odds are stacked against the DualSense controller, then. There’s no doubt that we’ll see some truly awe-inspiring moments from Sony’s first-party studios (firing Aloy’s bow in Horizon: Zero Dawn 2 is a given for the adaptive triggers), but try not to feel too disappointed if half the time these features come as a pleasant surprise, rather than a new standard moving forward.
While it hasn't been officially announced by Nintendo, we've been hearing plenty of rumors that suggest the company will release a third variant of the Switch this year. However, unlike the Switch Lite – which was very much focused on expanding the market at the lower end of the spectrum thanks to its more affordable price – the mooted "Switch Pro" will improve on the base console in new and meaningful ways, offering a more premium experience.
We're sure Switch owners have plenty of hopes and dreams for an upgraded Switch, but what about the people who will create software for this enhanced system? What new features would they like to see which would make their jobs easier, or allow them to take their titles to the next level?
We spoke to a bunch of Nintendo Switch developers to ask them exactly what they'd like to see in the rumored Switch Pro.
More powerful hardware
When it comes to the most requested feature from a development standpoint, "more power" is perhaps the most obvious option.
"I’d love to see a model that has a 1080p screen and the necessary processing power to run Switch docked performance in portable mode," says Thomas Kern of FDG Entertainment, the company responsible for bringing the likes of Oceanhorn and Monster Boy to Nintendo's console.
"It would also be good to see improved hardware to boost framerate just enough to keep existing Switch titles, such as Witcher 3, running at 30fps – or even 60fps – without frame drops. I think technically that’s feasible."
Joel Kinnunen, vice president of Trine studio Frozenbyte, has similar hopes. "Devs always want 'bigger, faster, better', so a beefier CPU and GPU would be nice."
Andres Bordeu, founder and game designer at Rock of Ages studio ACE Team, would also see increased power as the biggest benefit of a new Switch console.
“We probably differ from many independent developers since our projects, while still indie in nature, also aim to deliver incredible visuals powered by the latest tech and we invest a lot of time in research and development. In the indie community, we consider ourselves power users of Unreal Engine 4, which is used to build many Switch games, so a more capable GPU is something that definitely enables studios like ours to bring their creations to Nintendo’s platform.”
Philip Barclay of The Messenger developer Sabotage concurs. “As developers and huge fans of the Nintendo Switch console, one of the things that would be great for a 'Pro' version would be to support additional hardware rendering techniques for larger resolutions. If the Pro version ups the GPU, we could start to see even more amazing content in Switch games.”
Omar Cornut, Technical Director of Wonderboy: The Dragon's Trap developer Lizardcube, is more cautious and warns against hoping for more powerful hardware. "I have to say I love my Switch and I wouldn't want to change it too much; it's a perfect fit for the games we are making. More powerful hardware is convenient, but it also creates a tendency to drive the average game budget higher in order to be competitive, and this has knock-on effects on developers' ability to experiment.
"That said, technical progress is unstoppable; as a player, I wish for the extra power to allow for more Switch games to hit steadier and higher frame-rates across the entire lifetime of the console. A few more gigabytes of RAM and CPU cores would also facilitate porting of cross-platform projects."
Better screen
The 720p display on the Switch is hardly what you'd call cutting edge, so it should come as no surprise to learn that developers are keen to see that improve as well – although reports suggesting it could come with a 4K panel are frowned upon. Kern doesn't expect to see 4K on the new system, saying: "I don’t expect anything 4K, and I personally wouldn’t want 4K on Switch."
Cornut feels that boosting the Switch's resolution could result in an awkward balancing act. "When higher resolutions are available, the tendency is to sacrifice frame-rate. I would much rather have a console where most games are 1080p in stable 60 FPS rather than added support for 4K when docked, which would lead us down the line to more games aiming at 20-30 FPS."
Improved controls
More power under the hood and an improved screen seem to be obvious picks, but some developers want to see other elements of the Switch hardware get the upgrade treatment.
"As the developers of a racing game, we'd be really happy to see support for analogue triggers on the Switch's Joy-Con," says Edwin Smith of Feral Interactive, which ported GRID to the Switch with impressive results.
Cyrille Lagarigue, of Streets of Rage 4 developer Guard Crush Games, would also like to see the control setup expand with the Switch Pro.
"Personally, I'd like Nintendo to take advantage of the ingenious way the Joy-Con slide on the side of the Switch to propose more Joy-Con variants, for bigger hands, or maybe a left Joy-Con with a D-Pad and no joystick for 2D games! Having a Switch Pro would be a great opportunity to add this kind of devices; Pro means more choice!"
Faster internal storage
As we know from the hype surrounding the PS5 and Xbox Series X, the topic of memory speed is going to be a key one in the next-gen war – and Lizardcube's Omar Cornut would love to see some kind of improvement in this area for Switch, too.
"I hope for the internal storage to become a little faster as well as maybe raising the minimum specs of supported SD cards. We have to be considerate of loading data both from internal storage or from a variety of SD – some fast, some slow – and aiming for lowest common denominator can create lots of constraints on game design; for games with large streamed worlds, for example."
Faster storage would potentially allow for more immersive titles on the Switch Pro, helping it maintain some degree of parity with Sony and Microsoft's upcoming systems.
Wireless audio
The topic of wireless audio also cropped up when we spoke to Switch developers, with many citing the lack of Bluetooth audio support as being a real negative to the current console. The console lacks a microphone, too, which means that Switch players are missing out when it comes to online multiplayer.
"I’d like to see an aptX low latency Bluetooth chip implemented that supports Bluetooth headphones," says Kern.
Dotemu's Fabien Borel – who is currently hard at work on Windjammers 2 – couldn't agree more, and adds another wish for the Switch Pro. "I think everybody will appreciate the possibility of support of Bluetooth devices such as headphones – and having some kind of achievement system, without it being mandatory for game companies, which is awkward!"
We'll leave the final word for Jérôme Fait of Young Souls developer 1P2P:
"We would be happy if the new one brings better specs, a sharper and brighter screen and maybe better Joy-Con with an official cross D-pad; a 5G connexion or better WiFi and Netflix, and if it could print money [laughs] – but I think that the Switch is perfect as it is."