Windows 11 has never been so popular – but is a fresh surge of installations coming from a place of love or mere tolerance?

Windows 11 is creeping up on its three-year anniversary since launch, and the OS has apparently hit an all-time high for users – almost 30% of all Windows PCs now run Windows 11, at least according to one analytics firm.

That may not seem like a lot – frankly, it isn’t – but it’s a marked improvement on recent months, in which Windows 11’s adoption had actually dipped slightly, and it’s certainly a positive sign compared to the cold reception the operating system initially received.

Neowin flagged that Statcounter’s most recent monthly report shows Windows 11 at 29.7% of market share, with Windows 10 still currently enjoying a large majority of 66.1%. 

Normally, when a new operating system drops, it’s widely adopted – so if we’re celebrating a high of 30% nearly three years on from release, that’s obviously not a great indication that Windows 11 is being welcomed with open arms, despite all its extra perks and continuously added AI features.

That raises the question: why are so many people reluctant to move to Windows 11? For starters, the more demanding system requirements – which rule out older CPUs and machines without TPM – are a hard barrier to adoption for some PCs.

Windows 11 laptop showing Copilot

(Image credit: Microsoft)

Furthermore, since its launch, Windows 11 has suffered more than its fair share of poor updates and buggy behavior. Plus, the OS is slowly turning into a conduit for ads that you can’t escape in some cases. Also, there’s just not a lot of difference between Windows 10 and Windows 11 for people who aren’t really that fussed about AI or Copilot (and Copilot is in Windows 10 anyway, even if all of Microsoft’s various AI features aren’t). 

Could this small victory for Windows 11 – which represents a monthly uptick of just over 2% in Statcounter’s figures – simply be the result of people buying new machines? You’d be hard-pressed to find a new Windows desktop PC or laptop that isn’t running Windows 11, and downgrading your system is just not worth the effort for many (or may not even be possible). Especially given that Windows 10 isn’t far off its End of Life anyway (that rolls around in October 2025).

It might be the case that we’ll have to wait until Windows 12 eventually debuts and hope that it’s a big enough improvement to get Windows 10 users to jump ship and skip Windows 11 – although, again, system requirements are likely to prove an insurmountable hurdle for some older PCs.

You might also like…

TechRadar – All the latest technology news

Read More

Midjourney just changed the generative image game and showed me how comics, film, and TV might never be the same

Midjourney, the generative AI platform that you can currently use on Discord, just introduced the concept of reusable characters, and I am blown away.

It's a simple idea: Instead of using prompts to create countless generative image variations, you create and reuse a central character to illustrate all your themes, live out your wildest fantasies, and maybe tell a story.

Until recently, Midjourney – which is trained as a diffusion model (noise is added to an original image, and the model learns to de-noise it) – could create some beautiful and astonishingly realistic images based on prompts you put in the Discord channel (“/imagine: [prompt]”). But unless you were asking it to alter one of its generated images, every image set and character would look different.
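For the curious, the noising half of that training loop can be sketched in a few lines of Python. This is a deliberately simplified toy – the function name, the blending formula's scalar form, and the single-pixel "image" are my own illustration, not Midjourney's actual code:

```python
import math
import random

def add_noise(x, alpha_bar, eps):
    # Forward diffusion step: blend the original signal with Gaussian noise.
    # alpha_bar near 1.0 keeps the image mostly intact; near 0.0 it's mostly
    # noise. The model is then trained to predict eps and undo the damage.
    return math.sqrt(alpha_bar) * x + math.sqrt(1.0 - alpha_bar) * eps

random.seed(0)
pixel = 0.8                    # one normalized pixel value from an "image"
eps = random.gauss(0.0, 1.0)   # the noise the model must learn to predict

lightly_noised = add_noise(pixel, 0.99, eps)   # barely perturbed
heavily_noised = add_noise(pixel, 0.01, eps)   # almost pure noise
```

Running the same blend at many noise levels, and teaching a network to reverse each one, is what lets the model later "de-noise" pure static into a brand-new image.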

Now, Midjourney has cooked up a simple way to reuse your Midjourney AI characters. I tried it out and, for the most part, it works.

Image 1 of 3

Midjourney AI character creation

I guess I don’t know how to describe myself. (Image credit: Future)
Image 2 of 3

Midjourney AI character creation

(Image credit: Future)
Image 3 of 3

Midjourney AI character creation

Things are getting weird (Image credit: Future)

In one prompt, I described someone who looked a little like me, chose my favorite of Midjourney's four generated image options, and upscaled it for more definition. Then, using the new “--cref” parameter and the URL for the generated image containing the character I liked, I forced Midjourney to generate new images featuring the same AI character.

Later, I described a character with Charles Schulz's Peanuts character qualities and, once I had one I liked, reused him in a different prompt scenario where he had his kite stuck in a tree (Midjourney couldn't or wouldn't put the kite in the tree branches).

Image 1 of 2

Midjourney AI character creation

An homage to Charles Schulz (Image credit: Future)
Image 2 of 2

Midjourney AI character creation

(Image credit: Future)

It's far from perfect. Midjourney still tends to over-adjust the art, but I contend the characters in the new images are the same ones I created in my initial images. The more descriptive your initial character-creation prompts, the better the results in subsequent images.

Perhaps the most startling thing about Midjourney's update is the utter simplicity of the creative process. Writing natural language prompts has always been easy, but training the system to make your character do something might typically take some programming or even AI model expertise. Here it's just a simple prompt, one parameter, and an image reference.

Image 1 of 2

Midjourney AI character creation

Got a lot closer with my photo as a reference (Image credit: Future)
Image 2 of 2

Midjourney AI character creation

(Image credit: Future)

While it's easier to take one of Midjourney's own creations and use that as your foundational character, I decided to see what Midjourney would do if I turned myself into a character using the same “--cref” parameter. I found an online photo of myself and entered this prompt: “/imagine: making a pizza --cref [link to a photo of me]”.

Midjourney quickly spit out an interpretation of me making a pizza. At best, it's the essence of me. I selected the least objectionable one and then crafted a new prompt using the URL from my favorite me.

Midjourney AI character creation

Oh, hey, Not Tim Cook (Image credit: Future)

Unfortunately, when I entered this prompt: “interviewing Tim Cook at Apple headquarters”, I got a grizzled-looking Apple CEO eating pizza and another image where he's holding an iPad that looks like it has pizza for a screen.

When I removed “Tim Cook” from the prompt, Midjourney was able to drop my character into four images. In each, Midjourney Me looks slightly different. There was one, though, where it looked like my favorite me enjoying a pizza with a “CEO” who also looked like me.

Midjourney AI character creation

Midjourney me enjoying pizza with my doppelgänger CEO (Image credit: Future)

Midjourney's AI will improve and soon it will be easy to create countless images featuring your favorite character. It could be for comic strips, books, graphic novels, photo series, animations, and, eventually, generative videos.

Such a tool could speed storyboarding but also make character animators very nervous.

If it's any consolation, I'm not sure Midjourney understands the difference between me and a pizza and pizza and an iPad – at least not yet.


I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again

Pretty much anything we can do with AI today might have seemed like magic just a year ago, but MindStudio's platform for creating custom AI apps in a matter of minutes feels like a new level of alchemy.

The six-month-old free platform, which you can find right now at youai.ai, is a visual studio for building AI workflows, assistants, and AI chatbots. In its short lifespan it's already been used, according to CEO Dimitry Shapiro, to build more than 18,000 apps.

Yes, he called them “apps”, and if you're struggling to understand how or why anyone might want to build AI applications, just look at OpenAI's relatively new GPT apps (aka GPTs). These let you lock the powerful GPT-3.5 into topic-based thinking that you can package up, share, and sell. Shapiro, however, noted the limits of OpenAI's approach. 

He likened GPTs to “bookmarking a prompt” within the GPT sphere. MindStudio, on the other hand, is generative-model-agnostic: the system lets you use multiple models within one app.

If adding more model options sounds complicated, I can assure you it's not. MindStudio is the AI development platform for non-developers. 

Watch and learn

MindStudio

Choose your template. (Image credit: MindStudio)

To get you started, the company provides an easy-to-follow 18-minute video tutorial. The system also helps by offering a healthy collection of templates (many of them business-focused), or you can choose a blank template. I followed the guide to recreate the demo AI app (a blog post generator), and my only criticism is that the video is slightly out of date, with some interface elements having been moved or renamed. There are some prompts to note the changes, but the video could still do with a refresh.

Still, I had no trouble creating that first AI blog generator. The key is that you can get a lot of the work done through a visual interface that lets you add blocks along a workflow, then click on them to customize them, add details, and choose which AI model to use (the list includes GPT-3.5 Turbo, PaLM 2, Llama 2, and Gemini Pro). You don't have to use a particular model for each task in your app, but it might be that, for example, GPT-3.5 suits fast chatbots while PaLM is better for math; MindStudio cannot, at least yet, recommend which model to use and when.

Image 1 of 2

MindStudio

Connect the boxes (Image credit: MindStudio)
Image 2 of 2

MindStudio

And then edit their contents (Image credit: MindStudio)

The act of adding training data is also simple. I was able to find web pages of information, download the HTML, and upload it to MindStudio (you can upload up to 150 files per app). MindStudio uses the information to inform the AI, but won't cut and paste information from any of those pages into your app's responses.

Most of MindStudio's clients are businesses, and it does hide some more powerful features (embedding on third-party websites) and models (like GPT-4 Turbo) behind a paywall, but anyone can try their hand at building and sharing AI apps (you get a URL for sharing).

Confident in my newly acquired, if limited, knowledge, I set about building an AI app revolving around mobile photography advice. Granted, I used the framework I'd just learned in the AI blog post generator tutorial, but it still went far better than I expected.

One of the nice things about MindStudio is that it allows for as much or as little coding as you're prepared to do. In my case, I had to reference exactly one variable that the model would use to pull the right response.

MindStudio

Options include setting your model’s ‘temperature’ (Image credit: MindStudio)

There are a lot of smart and dead-simple controls that can even teach you something about how models work. MindStudio lets you set, for instance, the 'Temperature' of your model to control the randomness of its responses. The higher the 'temp', the more unique and creative each response. If you like your model verbose, you can drag another slider to set a response size of up to 3,000 characters.
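To make the temperature idea concrete, here's a sketch of how temperature typically works under the hood of a language model. The function name, toy logits, and scalar implementation are my own illustration of the general technique, not MindStudio's internals:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    # Divide each raw score (logit) by the temperature, then softmax into
    # probabilities. Low temperature sharpens the distribution (predictable
    # output); high temperature flattens it (more varied, "creative" output).
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to those probabilities.
    r, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i, probs
    return len(probs) - 1, probs

logits = [2.0, 1.0, 0.1]                          # toy scores for three words
_, cold = sample_with_temperature(logits, 0.2)    # near-deterministic
_, hot = sample_with_temperature(logits, 2.0)     # much flatter distribution
```

At temperature 0.2 the top-scoring word soaks up nearly all the probability; at 2.0 the three candidates are much closer, which is exactly what the slider's "randomness" trade-off describes.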

The free service includes unlimited consumer usage and messages, some basic metrics, and the ability to share your AI via a link (as I've done here). Pro users can pay $23 a month for the more powerful models like GPT-4, less MindStudio branding, and, among other things, site embedding. The $99-a-month tier includes everything in Pro, but adds the ability to charge for access to your AI app, better analytics, API access, full chat transcripts, and enterprise support.

Image 1 of 2

MindStudio

Look, ma, I made an AI app. (Image credit: MindStudio)
Image 2 of 2

MindStudio

It’s smarter than I am. (Image credit: MindStudio)

I can imagine small and medium-sized businesses using MindStudio to build customer engagement and content capture on their sites, and even as a tool for guiding users through their services.

Even at the free level, though, I was surprised at the level of customization MindStudio offers. I could add my own custom icons and art, and even build a landing page.

I wouldn't call my little AI app anything special, but the fact that I could take the germ of an idea and turn it into a bespoke chatbot in 10 minutes is surprising even to me. That I get to choose the right model for each job within an AI app is even better; and that this level of fun and utility is free is the icing on the cake.


I review VR headsets for a living, and I’ve never seen a better Oculus Quest 2 deal

Amazon is offering a fantastic Oculus Quest 2 deal that not only scores you the impressive VR headset for $51 off, but also gets you a $50 gift card. It’s one of the best Black Friday deals I’ve seen.

Right now Meta’s Quest 2 (128GB) model is down to $249 at Amazon – instead of its MSRP of $299. If you act fast, the holiday bundle will score you the discount and a free $50 Amazon voucher; effectively, this saves you $100 on the popular VR headset, which we gave four-and-a-half stars in our Oculus Quest 2 review.

I say you should act fast, because an identical deal was available in the UK for a few days – but it has now sold out. If history repeats itself in the US you don’t have long left to nab yourself one of the best Oculus Quest 2 Black Friday deals this year. 

I've been writing about VR for years and I haven't seen a better deal, so there's no point waiting for something better to come around this Black Friday if you're after a VR headset.

Get the best ever Oculus Quest 2 deal here:

Meta Quest 2 + Amazon Gift Card: was $349.99 now $249.00 at Amazon
Right now you can save $51 on the Meta Quest 2 (128GB) and get a free $50 Amazon gift card as part of this holiday bundle. I’ve never seen a better Meta Quest 2 deal, and I expect it may sell out before Black Friday, so act fast.

The only VR headset deal I think you should consider instead of this Oculus Quest 2 offer is the Meta Quest 3 deal that's available everywhere: you get the Meta Quest 3 for $499 and a free copy of Asgard's Wrath 2. Alongside Amazon, you can find the same deal at Walmart, Best Buy, and Target, among others.

While this isn't the best deal (the headset is full price) I think the Meta Quest 3 is a massive step up over the Quest 2; that's why I awarded it five stars in our Meta Quest 3 review. Yes, it's pricier, but it's worth the extra cost if you can afford it.

If you are on a tight budget then Meta's Oculus Quest 2 is still fine, and the above deal is a fantastic offer to take advantage of. But if you can afford to splash out on a Meta Quest 3 then I'd strongly suggest doing so.

For more on this topic, check out my guide to whether you should buy an Oculus Quest 2 or Meta Quest 3 this Black Friday.


ChatGPT and other AI chatbots will never stop making stuff up, experts warn

OpenAI's ChatGPT, Google Bard, and Microsoft Bing AI are incredibly popular for their ability to quickly generate large volumes of convincingly human-sounding text, but AI “hallucination” – also known as making stuff up – is a major problem with these chatbots. Unfortunately, experts warn, it probably always will be.

A new report from the Associated Press highlights that the problem of Large Language Model (LLM) confabulation might not be as easily fixed as many tech founders and AI proponents claim – at least according to Emily Bender, a linguistics professor at the University of Washington's Computational Linguistics Laboratory.

“This isn’t fixable,” Bender said. “It’s inherent in the mismatch between the technology and the proposed use cases.”

In some instances, the making-stuff-up problem is actually a benefit, according to Jasper AI president, Shane Orlick.

“Hallucinations are actually an added bonus,” Orlick said. “We have customers all the time that tell us how it came up with ideas—how Jasper created takes on stories or angles that they would have never thought of themselves.”

Similarly, AI hallucinations are a huge draw for AI image generation, where models like Dall-E and Midjourney can produce striking images as a result. 

For text generation though, the problem of hallucinations remains a real issue, especially when it comes to news reporting where accuracy is vital.

“[LLMs] are designed to make things up. That’s all they do,” Bender said. “But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance. Even if they can be tuned to be right more of the time, they will still have failure modes – and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”

Unfortunately, when all you have is a hammer, the whole world can look like a nail

LLMs are powerful tools that can do remarkable things, but companies and the tech industry must understand that just because something is powerful doesn't mean it's a good tool to use.

A jackhammer is the right tool for the job of breaking up a sidewalk and asphalt, but you wouldn't bring one onto an archaeological dig site. Similarly, bringing an AI chatbot into reputable news organizations and pitching these tools as a time-saving innovation for journalists is a fundamental misunderstanding of how we use language to communicate important information. Just ask the recently sanctioned lawyers who got caught out using fabricated case law produced by an AI chatbot.

As Bender noted, an LLM is built from the ground up to predict the next word in a sequence based on the prompt you give it. Every word in its training data has been assigned a weight – a probability that it will follow any given word in a given context. What those words don't carry is actual meaning, or the context needed to ensure that the output is accurate. These large language models are magnificent mimics that have no idea what they are actually saying, and treating them as anything else is bound to get you into trouble.
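Bender's point can be illustrated with a toy next-word predictor. The example below is a deliberately simplified stand-in – a bigram counter over a ten-word "corpus" of my own invention, where real LLMs use neural networks over vast datasets – but it shows the same principle: the model only knows how often words follow other words, not what any of them mean:

```python
from collections import Counter, defaultdict

# Toy "training data": the model will learn nothing but word-following counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    # Turn raw counts into the probability of each possible next word.
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "the" was followed by cat (2x), mat (1x), fish (1x) in the corpus,
# so "cat" gets probability 0.5 – regardless of whether it makes sense.
probs = next_word_probs("the")
```

Whether "the cat" is true, false, or nonsense never enters into it; the model emits whatever the statistics favor, which is exactly why the output being correct is, in Bender's words, "by chance."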

This weakness is baked into the LLM itself, and while “hallucinations” (clever technobabble designed to cover for the fact that these AI models simply produce false information purported to be factual) might be diminished in future iterations, they can't be permanently fixed, so there is always the risk of failure. 


Meta was late to the AI party – and now it’ll never beat ChatGPT

Meta – the tech titan formerly known as Facebook – desperately wants to take pole position at the forefront of AI research, but things aren’t exactly going to plan.

As reported by Gizmochina, Meta lost a third of its AI research staff in 2022, many of whom cited burnout or lack of faith in the company’s leadership as their reasons for departing. An internal survey from earlier this year showed that just 26% of employees expressed confidence in Meta’s direction as a business.

CEO Mark Zuckerberg hired French computer scientist and roboticist Yann LeCun to lead Meta’s AI efforts back in 2013, but in more recent times Meta has visibly struggled to keep up with the rapid pace of AI expansion demonstrated by competing platforms like ChatGPT and Google Bard. LeCun was notably not among the invitees to the White House’s recent Companies at the Frontier of Artificial Intelligence Innovation summit.

That’s not to say that Meta is failing completely in the AI sphere; recent reveals like a powerful AI music creator and a speech-generation tool too dangerous to release to the public show that the Facebook owner isn’t exactly sitting on its hands when it comes to AI development. So why is it still lagging behind?

Abstract artwork promoting Meta's new Voicebox AI.

Meta’s AI ‘Voicebox’ tool is almost terrifyingly powerful – so terrifying, in fact, that Meta isn’t releasing it to the public (Image credit: Meta)

Change of direction

The clue’s in the name: remember back in 2021, when the then-ubiquitous Facebook underwent a total rebrand to become ‘Meta’? At the time, it was supposed to herald a new era of technology, led by our reptilian overlord Zuckerberg. Enter the metaverse, he said, where your wildest dreams can come true.

Two years down the line, it’s become pretty clear that his Ready Player Zuck fantasies aren’t going to materialize; at least, not for quite a while. AI, on the other hand, really is the new technology frontier – but Meta’s previous obsession with the metaverse has left it on the back foot in the AI gold rush.

Even though Meta has now shifted to AI as its prime area of investment and has maintained an AI research department for years, it’s fair to say that the Facebook owner failed to capitalize on the AI boom late last year. According to Gizmochina, employees have been urging management to shift focus back towards generative AI, which fell by the wayside in favor of the company’s metaverse push.

Meta commentary

A female-presenting person works at her desk in Meta's Horizons VR

Meta’s virtual Horizon workspace was never going to take off, let’s be honest (Image credit: Meta)

Perhaps Meta is simply spread too thin. Back in February, Zuckerberg described 2023 as the company’s “year of efficiency” – a thin cover for Meta’s mass layoffs and project closures back in November 2022, which have seen internal morale fall to an all-time low. Meta is still trying to push ahead in the VR market with products like the Meta Quest Pro, and recently announced it would be releasing a Twitter rival, supposedly called ‘Threads’.

In any case, it seems that Meta might have simply missed the boat. ChatGPT and Microsoft’s Bing AI are already making huge waves in the wider public sphere, along with the best AI art generators such as Midjourney.

It’s hard to see where Meta’s AI projects will fit in the current lineup; perhaps Zuckerberg should just stick to social media instead. Or maybe we'll see Meta pull another hasty name-change to become 'MetAI' or something equally ridiculous. The possibilities are endless!


Google Drive has suddenly decided to introduce a file cap – but you might never hit it

It’s official – cloud storage provider Google Drive has decided to put a cap on the number of files that can be stored on a single account.

Per Ars Technica, the limit – set at five million files – started cropping up for some Google Drive users in February 2023, despite Google giving no warning that the cap was being introduced. The accompanying notification wasn't all that clear at explaining the problem, either: “The limit for the number of items, whether trashed or not, created by this account has been exceeded.”

Said notification has evolved since then, and now reportedly reads: “Error 403: This account has exceeded the creation limit of 5 million items. To create more items, move items to the trash and delete them forever.”

Google Drive file cap

As of last week, the notification for one Reddit user read “Please delete 2 million files to continue using your Google Drive account.”

The new policy (which remains undocumented across all pricing pages) means Google Drive customers are being prevented from accessing the full extent of the storage they’ve paid for. However, it’s worth noting that 5 million files, in real terms, is a pretty big allowance.

For Google Drive’s 2TB offering – the highest personal plan available – the average file size across a full account would have to be just 400 kilobytes (KB) to hit the cap. There are certainly instances where that may be the case – storing large amounts of record data, for example – but in the vast majority of cases, users shouldn’t run up against the limit.
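That 400KB figure is simple division, assuming the decimal units storage vendors use (1TB = 10¹² bytes, 1KB = 10³ bytes):

```python
# Back-of-the-envelope check of the 400KB figure, assuming decimal
# storage units (1TB = 10**12 bytes, 1KB = 10**3 bytes).
plan_bytes = 2 * 10**12   # Google One's 2TB plan
file_cap = 5_000_000      # the reported five-million-item limit

# Average file size at which you hit the cap and fill the plan simultaneously.
avg_file_size_kb = plan_bytes / file_cap / 10**3
print(avg_file_size_kb)   # prints 400.0
```

Any account averaging above 400KB per file runs out of storage before it ever runs into the five-million-item cap.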

Business users are even less likely to face issues with the limit. A spokesperson for Google told Ars Technica that the limit applied to “how many items one user can create in any Drive,” rather than a blanket cap.

Details were thin on the ground, but they also noted that the new limit is “a safeguard to prevent misuse of our system in a way that might impact the stability and safety of the system.” 


Hopefully you’ll never have to use this Microsoft Teams update

Highlighting emergency calls through Microsoft Teams should soon be a lot easier thanks to a new update coming to the service.

The video conferencing platform will soon allow admins to create customizable banners within Microsoft Teams that will alert users when an emergency call is coming through.

This should help such calls stand out immediately to users, particularly if their attention is divided between a number of other tasks.

Microsoft Teams emergency

In its official entry on the Microsoft 365 roadmap, the company notes that users will be able to acknowledge their admin's message by clicking on the banner within a Microsoft Teams call.

This will allow admins to phrase the alerts however they need to, which could be extremely handy for schools or industrial customers, who might have entirely different emergency categorizations.

The feature is still in development for now, but Microsoft has set an expected release date of April 2022, meaning it could arrive soon.

Upon launch, the feature will be available for Microsoft Teams users across the world on desktop and Mac platforms.

The news is one of a long series of improvements and upgrades made to Microsoft Teams in recent months as the company looks to ensure hybrid and remote workers are still able to get the most out of its collaboration tool.

Perhaps most usefully, Microsoft recently revealed that Teams users will soon be able to mute notifications whilst they are in a video conferencing meeting or don't want to be disturbed.

On a similar note, another upgrade concerns the addition of chat bubbles, so that users won't miss private messages sent during a video call, whether 1:1 or as part of a group call.

Recent figures from the company suggest that Microsoft Teams now boasts over 270 million monthly active users (MAUs), as the hybrid working age continues to drive the platform from strength to strength.


Holopresence means never having to say, ‘Sorry, I can’t be there’

In Holopresence land, you can be in two places at once. One is sitting in a director’s chair in front of a green screen, sweating under half a dozen stage lights. The other is half a world away on a semi-translucent screen, addressing an audience that almost believes you’re sitting right there with them.

I walked across midtown Manhattan in the soaking rain to see ARHT Media’s Holopresence experience in person earlier this week. (And with water dripping off my hat and coat, I found myself wishing I’d done this meeting as a hologram.)

To be clear, what ARHT provides is not, technically, a hologram. It’s a canny projection system that employs mostly off-the-shelf technology, a proprietary screen, and special software to make people believe someone is sitting in front of you, as opposed to – in my case – Toronto.

He was never really there

ARHT Media is a Toronto, Canada, telepresence company that just opened its first Holopresence studio in a WeWork building in midtown Manhattan. They invited me for a look.

As I walked into the WeWork space – basically a vast, mostly unfurnished office floor – I was greeted by ARHT Media SVP Terry Davis, while company CEO Larry O’Reilly stood off to the side looking at his phone. O’Reilly looked a little odd, as though he was standing before a bright light that I couldn’t see. Then he abruptly dematerialized and was gone – my first experience of this Holopresence technology.

I wanted to try this for myself, but before anyone could transform me into a Holopresence, Davis walked me through the technology's fundamentals.

“We’re a projection system,” Davis told me. Gesturing toward the cube-like setup in a semi-darkened space on the far side of the cavernous WeWork room, where O’Reilly had “stood” just moments ago, Davis explained that the entire system is portable and “breaks down into a couple of duffle bags. We go anywhere in the world.”

The cube that the “virtual you” beams in and out of consists of poles, black curtains on the back and sides, and a special screen stretched across the front. Unlike a standard movie screen, this one is a nylon-like mesh with a high-gain reflective coating. “It’s transparent and reflective at the same time,” explained Davis.

Aside from ARHT’s matrixed software (handling multi-channel communication for various holopresences in real-time), the screen is the company’s only other piece of proprietary technology. Still, it is effective.

Behind the screen, I noted a few props, including a pair of plants and some floor lighting. These, plus the distance to the back curtain, create the illusion of a depth of field behind a Holopresence. “You have to have a certain degree of depth of field in order for your brain and eyes to perceive that parallax,” said Davis.

A world of Holograms

ARHT is by no means the only company creating virtual people for events, concerts, panels, exhibits, and families. There’s Epic HoloPortl, for example, which has white, booth-like boxes, called PORTLs, in which people appear to materialize. The effect is arresting. Davis, while not wanting to criticize Epic HoloPortl, called them “white coffins with no depth of field.”

He also noted that his product can accommodate multiple people from multiple locations on one screen, while PORTL fits one in a box.

Plus there’s the portability factor. A Holopresence system, which would include the screen, curtain, poles, an off-the-shelf projector (they were using a Panasonic DLP for my demonstration), and microphones and speakers, can fit in a large bag. It’s not clear how portable the PORTL boxes are.

Still, on the other side of a Holopresence presentation is someone sitting in front of a green, black, or white screen. They’re mic’d up, facing a camera, and, in my case, hunkered down under substantial lighting – meaning that for a live Holopresence event, there are always two sides to the technology equation.

Davis told me that the technology they use to create these hologram-like presences is not much different from what we’ve seen with the virtual Michael Jackson in concert or Tupac Shakur at Coachella. In those instances, the image was projected from the ground up onto a reflective surface that bounced it off a giant screen; Holopresence’s projector sits outside the curtained area, facing the screen.

Lance Ulanoff and ARHT CEO Larry O'Reilly

Lance Ulanoff and ARHT CEO Larry O’Reilly (beaming in) (Image credit: Future)

Most of ARHT Media’s clients are businesses, enterprises, and billionaires (there was an Antarctic yacht cruise where people like Malcolm Gladwell beamed in to talk to a select audience). Davis described multiple panels where they beamed people in from around the world. Back at each of their studios, panelists are surrounded by screens that stand in for the other panelists: if someone is seated to your left, that’s where their screen will be. They even try to accommodate height differences – if the speaker to your left is much shorter than you or, say, on a different level of the stage, they adjust the screen height accordingly. A feed of the audience is usually placed in front of each speaker, and what the audience sees is holo-panelists looking back and forth at other holo-panelists.

To accommodate large panels or events with large audiences, ARHT offers a range of screen sizes that can be as small as 5 feet and as large as, well, a stage.

ARHT does have some consumer impact. During COVID travel restrictions, the company helped a bridesmaid in England virtually attend a wedding in America. In New Jersey’s Hall of Fame, the company has built a kiosk where visitors can “speak” to life-sized video versions of Bon Jovi and Little Steven.

Still, ARHT is not priced for your average consumer. A single-person Holopresence can run you $ 15,000. For more people on the screen, it could cost as much as $ 30,000.

Beaming in

Lance Ulanoff in ARHT studio becoming a Holopresence

Lance Ulanoff in ARHT studio becoming a Holopresence (Image credit: Future)

After a power outage at the Toronto headquarters (no amount of tech magic can overcome a lack of electricity), we finally got ARHT’s CEO back for a quick virtual chat. The roughly 6ft tall O’Reilly looked solid. As we talked and he reiterated many of the points Davis and I covered, I found myself focusing on the image quality. Dead-on, it was perfect. From O’Reilly’s white hair down to his shoes, he appeared to be standing before me (on a slightly raised stage). I shifted to the left and right and found the effect holding up pretty well. Davis claims the projection doesn’t flatten out until you hit between 120 -to-140-degree off-axis. I’d argue the viewport is a bit narrower.

As we conversed, though, I experience another key part of ARHT’s Holopresence secret sauce: latency. The conversation between the two of us was free-flowing. Even when we did a counting test (we counted to ten with each of us alternating numbers), there was, perhaps, a sub-second delay.

To achieve this effect, ARHT uses low packet bursting transmission to create a smooth, conversational experience between people in Hong Kong and Australia or a reporter in New York City and a CEO in Toronto.

Lance Ulanoff materializes

(Image credit: Future)

One thing I noted throughout the demo were the references to Star Trek transporter technology. There was even a screen in the space showing a loop from the original Star Trek series where the team beams down to an alien planet. When you start a Holopresence experience, people “beam in” with a very Star Trek-like graphic flourish and sound effect. I asked O’Reilly if he's a Star Trek fan and what he thought about the connection. He didn’t answer directly and instead pointed out how the sound and graphics are completely customizable.

Finally, it was my turn. I sat in the green screen space and tried to look like I wasn’t about to experience a lifetime dream of mine. My beam-in moment was, initially, a little underwhelming. I couldn’t see myself; the Holopresence space was across the room. 

When it was over, I walked over, and Davis replayed my big moment. Seeing myself teleport into the room like a bald Captain Kirk was everything I hoped it would be.

Beam me up, Scotty.

TechRadar – All the latest technology news

Read More

Holopresence means never having to say, ‘Sorry, I can’t be there’

In Holopresence land you can be two places at once. One is sitting on a director’s chair in front of a green screen, sweating under a half dozen stage lights. The other is half a world away on a semi-translucent screen, addressing an audience who almost believes you’re sitting right there with them.

I walked across midtown Manhattan in the soaking rain to see ARHT Media’s Holopresence experience in person earlier this week. (And with water dripping off my hat and coat, I found myself wishing I’d done this meeting as a hologram.)

To be clear, what ARHT provides is not, technically, a hologram. It’s a canny projection system that employs mostly off-the-shelf technology, a proprietary screen, and special software to make people believe someone is sitting in front of you, as opposed to – in my case – Toronto.

He was never really there

ARHT Media is a Toronto-based telepresence company that just opened its first Holopresence studio in a WeWork building in midtown Manhattan. They invited me for a look.

As I walked into the WeWork space, basically a vast, mostly unfurnished office floor, I was greeted by ARHT Media SVP Terry Davis and company CEO Larry O’Reilly, who was standing off to the side looking at his phone. O’Reilly looked a little odd, as though he was standing before a bright light that I couldn’t see. Then, abruptly, he dematerialized and was gone – my first experience with this Holopresence technology.

I wanted to try this for myself, but before anyone could transform me into a Holopresence, Davis walked me through the technology's fundamentals.

“We’re a projection system,” Davis told me. Gesturing toward the cube-like setup in a semi-darkened space on the far side of a cavernous WeWork room, where O’Reilly had “stood” just moments ago, Davis explained that the entire system is portable and “breaks down into a couple of duffle bags. We go anywhere in the world.”

The cube that the “virtual you” beams in and out of consists of poles, black curtains on the back and sides, and a special screen stretched across the front. Unlike a standard movie screen, this one is a nylon-like mesh with a high-gain reflective coating. “It’s transparent and reflective at the same time,” explained Davis.

Aside from ARHT’s matrixed software (handling multi-channel communication for various holopresences in real-time), the screen is the company’s only other piece of proprietary technology. Still, it is effective.

Behind the screen, I noted a few props, including a pair of plants and some floor lighting. These and the distance to the back curtain create the illusion of a depth of field behind a Holopresence. “You have to have a certain degree of depth of field in order for your brain and eyes to perceive that parallax,” said Davis.
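The parallax Davis describes is simple geometry: as a viewer steps sideways, objects closer to them shift more in apparent angle than objects farther back, and that relative motion is what the brain reads as depth. A toy sketch, with entirely hypothetical distances (ARHT doesn't publish its stage dimensions), illustrates why the props and back curtain need to sit at different depths:

```python
import math

def apparent_shift_deg(depth_behind_screen_m, viewer_step_m, screen_dist_m):
    # Angular shift of an object when the viewer steps sideways.
    # The farther behind the screen an object sits, the smaller its shift,
    # so props and the back curtain appear to move relative to each other.
    total_dist = screen_dist_m + depth_behind_screen_m
    return math.degrees(math.atan2(viewer_step_m, total_dist))

# Hypothetical layout: viewer 3 m from the screen, stepping 0.5 m sideways.
screen_shift = apparent_shift_deg(0.0, 0.5, 3.0)   # image on the screen plane
curtain_shift = apparent_shift_deg(2.0, 0.5, 3.0)  # curtain 2 m behind it
print(f"screen: {screen_shift:.1f} deg, curtain: {curtain_shift:.1f} deg")
```

The gap between the two shift angles is the relative motion the eye perceives; with no depth behind the screen, both values would be identical and the image would read as flat.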

A world of Holograms

ARHT is by no means the only company creating virtual people for events, concerts, panels, exhibits, and families. There’s Epic HoloPortl, for example. It has white, booth-like boxes, called PORTLs, in which people appear to materialize. The effect is arresting. Davis, while not wanting to criticize Epic HoloPortl, called its boxes “white coffins with no depth of field.”

He also noted that his product can accommodate multiple people from multiple locations on one screen, while PORTL fits one in a box.

Plus there’s the portability factor. A Holopresence system, which would include the screen, curtain, poles, an off-the-shelf projector (they were using a Panasonic DLP for my demonstration), and microphones and speakers, can fit in a large bag. It’s not clear how portable the PORTL boxes are.

Still, on the other side of a Holopresence presentation is someone sitting in front of a green, black, or white screen. They’re mic’d up, facing a camera, and, in my case, hunkered down under substantial lighting. Meaning that for a live Holopresence event, there are always two sides to the technology equation.

Davis told me that the technology they use to create these hologram-like presences is not much different than what we’ve seen with virtual Michael Jackson in Concert or Tupac Shakur at Coachella. In those instances, the projection was from the ground up to a reflective surface that bounces it off a giant screen. Holopresence’s projector is outside the curtained area, facing the screen.

Lance Ulanoff and ARHT CEO Larry O'Reilly

Lance Ulanoff and ARHT CEO Larry O’Reilly (beaming in) (Image credit: Future)

Most of ARHT Media’s clients are businesses, enterprises, and billionaires (there was an Antarctic yacht cruise where people like Malcolm Gladwell beamed in to talk to a select audience). Davis described multiple panels where they beamed people in from around the world. Back at each of their studios, panelists are surrounded by screens that stand in place of other panelists. If someone is seated to the left of you, that’s where the screen will be. They even try to accommodate height differences. If the speaker on the left is much shorter than you or, say, on a different level on the stage, they adjust the screen height accordingly. A feed of the audience is usually placed in front of the speaker. What they see is holo-panelists looking back and forth at other holo-panelists.

To accommodate large panels or events with large audiences, ARHT offers a range of screen sizes that can be as small as 5 feet and as large as, well, a stage.

ARHT does have some consumer impact. During COVID travel restrictions, the company helped a bridesmaid in England virtually attend a wedding in America. In New Jersey’s Hall of Fame, the company has built a kiosk where visitors can “speak” to life-sized video versions of Bon Jovi and Little Steven.

Still, ARHT is not priced for your average consumer. A single-person Holopresence can run you $15,000. For more people on the screen, it could cost as much as $30,000.

Beaming in

Lance Ulanoff in ARHT studio becoming a Holopresence

Lance Ulanoff in ARHT studio becoming a Holopresence (Image credit: Future)

After a power outage at the Toronto headquarters (no amount of tech magic can overcome a lack of electricity), we finally got ARHT’s CEO back for a quick virtual chat. The roughly 6ft tall O’Reilly looked solid. As we talked and he reiterated many of the points Davis and I covered, I found myself focusing on the image quality. Dead-on, it was perfect. From O’Reilly’s white hair down to his shoes, he appeared to be standing before me (on a slightly raised stage). I shifted to the left and right and found the effect holding up pretty well. Davis claims the projection doesn’t flatten out until you hit between 120 and 140 degrees off-axis. I’d argue the viewport is a bit narrower.

As we conversed, though, I experienced another key part of ARHT’s Holopresence secret sauce: latency. The conversation between the two of us was free-flowing. Even when we did a counting test (we counted to ten, alternating numbers), there was, perhaps, a sub-second delay.

To achieve this effect, ARHT uses a low-latency, packet-bursting transmission system to create a smooth, conversational experience between people in Hong Kong and Australia, or a reporter in New York City and a CEO in Toronto.
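The counting test described above is essentially an informal round-trip latency measurement. A minimal sketch of the same idea, using a local UDP echo endpoint as a stand-in for the remote studio (ARHT's actual transport protocol is proprietary and not what's shown here), looks like this:

```python
import socket
import threading
import time

def run_echo_server(host="127.0.0.1", port=9999):
    # Minimal UDP echo server standing in for the remote endpoint.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(1024)
        sock.sendto(data, addr)  # echo each probe straight back

def measure_avg_rtt(host="127.0.0.1", port=9999, samples=5):
    # Send timestamped probes and average the round-trip times.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    rtts = []
    for i in range(samples):
        start = time.perf_counter()
        sock.sendto(str(i).encode(), (host, port))
        sock.recvfrom(1024)
        rtts.append(time.perf_counter() - start)
    return sum(rtts) / len(rtts)

threading.Thread(target=run_echo_server, daemon=True).start()
time.sleep(0.1)  # give the server a moment to bind
avg_rtt = measure_avg_rtt()
print(f"average RTT: {avg_rtt * 1000:.2f} ms")
```

Against a real cross-continent link, keeping that averaged figure well under a second is what makes an alternating-count conversation feel natural.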

Lance Ulanoff materializes

(Image credit: Future)

One thing I noted throughout the demo was the references to Star Trek transporter technology. There was even a screen in the space showing a loop from the original Star Trek series where the team beams down to an alien planet. When you start a Holopresence experience, people “beam in” with a very Star Trek-like graphic flourish and sound effect. I asked O’Reilly if he was a Star Trek fan and what he thought about the connection. He didn’t answer directly and instead pointed out how the sound and graphics are completely customizable.

Finally, it was my turn. I sat in the green screen space and tried to look like I wasn’t about to experience a lifetime dream of mine. My beam-in moment was, initially, a little underwhelming. I couldn’t see myself; the Holopresence space was across the room. 

When it was over, I walked over, and Davis replayed my big moment. Seeing myself teleport into the room like a bald Captain Kirk was everything I hoped it would be.

Beam me up, Scotty.

TechRadar – All the latest technology news
