The Looking Glass Portrait: A Volumetric Display That's Captivating Everyone
I recently had the opportunity to test a prototype of the Looking Glass Portrait, a volumetric display that shows 3D photos, videos, and models without glasses. Rather than capturing anything itself, the device is a light field display: a built-in Raspberry Pi 4 drives between 45 and 100 perspectives of a scene at once, so the image looks genuinely three-dimensional from anywhere inside its viewing cone. Buttons on the side cycle through the stored library and start, stop, or loop animations, so you can move through content without manually controlling each step.
One of the things that really impressed me about the Looking Glass Portrait was its ease of use. The interface is simple and intuitive, and navigating the 3D content quickly becomes second nature. One feature I hope gets added is the ability to generate playlists and browse the images held in the built-in storage: with a large library, you'd want to jump straight to the image you want rather than cycling through the whole collection two buttons at a time.
Another feature that caught my attention was the ability to upload portrait-mode photos from a smartphone for processing into light field images. These photos already embed a depth map (normally used just to blur the background), which the company's free HoloPlay Studio software converts into the many perspectives the display needs. The company has promised that this workflow will be streamlined and the software bundled for Kickstarter backers, which I'm excited about, because there are so many portrait-mode photos on my phone that I'd love to see on this display.
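At its core, that portrait-mode pipeline amounts to reprojecting one RGB image into many slightly shifted viewpoints using its embedded depth map. Here's a toy numpy sketch of the idea; the function and parameter names are my own, and a real pipeline would also inpaint the disoccluded pixels that this crude forward-warp leaves behind:

```python
import numpy as np

def synthesize_views(rgb, depth, n_views=45, max_shift=8):
    """Crude depth-based view synthesis: shift each pixel horizontally
    in proportion to its (0..1) depth value, once per virtual camera.
    The center view of an odd-sized fan reproduces the input."""
    h, w, _ = rgb.shape
    views = []
    for i in range(n_views):
        offset = 2 * i / (n_views - 1) - 1   # camera position in [-1, 1]
        shift = (offset * max_shift * depth).astype(int)
        xs = np.clip(np.arange(w)[None, :] + shift, 0, w - 1)
        out = np.zeros_like(rgb)
        out[np.arange(h)[:, None], xs] = rgb  # forward-warp, no hole filling
        views.append(out)
    return views
```

Feeding these views to the display is what gives the parallax effect: each eye position inside the cone sees a different member of the fan.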
I was also interested in the software side of things, specifically how the Looking Glass Portrait would work with third-party applications to capture volumetric video. The company has also promised support for displaying OBJ and STL files, which would let users import 3D models from sources like Thingiverse and view them on the device.
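OBJ is a plain-text format, which is part of why it's such an easy target for a display like this: vertex positions are `v x y z` lines and faces are `f i j k` lines with 1-based indices. A minimal reader, just to show the shape of the data; real OBJ files also carry normals, texture coordinates, and material references that this sketch ignores:

```python
def load_obj(text):
    """Parse vertex positions and faces from Wavefront OBJ source.
    Returns (vertices, faces) with faces as 0-based index tuples."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # each face entry may be "i", "i/j", or "i/j/k"; keep the index
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces
```

With the geometry loaded, a renderer would sweep a virtual camera across the viewing cone to produce the fan of perspectives the display needs.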
The prototype unit was beautifully built and felt very intuitive to use. One thing that stood out was the 58-degree viewing cone the device provides. While that is technically a limitation, it's easily overcome by simply moving your head or shifting your position in front of the display. The image is also very clear and bright, making it easy to view even in daylight.
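That 58-degree figure is a cone centered on the display's normal, so "being in the sweet spot" reduces to a simple angle test: you're inside whenever your bearing from the screen is within about 29 degrees of straight-on. A sketch, with an assumed coordinate convention (display at the origin, facing +z):

```python
import math

HALF_CONE_DEG = 58 / 2  # half of the Portrait's 58-degree view cone

def in_view_cone(viewer_x, viewer_z):
    """True if a viewer at horizontal offset x and distance z from the
    screen sits inside the viewing cone (display faces +z)."""
    if viewer_z <= 0:
        return False  # behind or level with the panel
    angle = math.degrees(math.atan2(abs(viewer_x), viewer_z))
    return angle <= HALF_CONE_DEG
```

A viewer half a meter off-axis at one meter out is still comfortably inside the cone, which matches how forgiving the sweet spot feels in practice.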
What I found most exciting about the Looking Glass Portrait, however, was its potential beyond 3D photos and videos. The company has promised a desktop mode in which the Portrait connects to a laptop or desktop over HDMI and USB-C, letting that computer render 3D content on the display in real time. This could be used for everything from games to interactive displays of 3D models.
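When a computer drives a Looking Glass, the tooling generally packs the rendered views into a single "quilt" texture, a grid of tiles that the display software then interlaces across the panel's subpixels. Here's a sketch of the packing step; the tile order and grid size here are assumptions for illustration, not the official format:

```python
import numpy as np

def make_quilt(views, cols, rows):
    """Tile equally sized views into one quilt image, filling
    left-to-right, bottom row first (a common quilt convention)."""
    h, w, c = views[0].shape
    quilt = np.zeros((rows * h, cols * w, c), dtype=views[0].dtype)
    for i, view in enumerate(views):
        col = i % cols
        row = rows - 1 - i // cols   # view 0 lands at the bottom-left
        quilt[row * h:(row + 1) * h, col * w:(col + 1) * w] = view
    return quilt
```

Packing the whole fan into one texture is what lets a game engine update all 45-plus perspectives in a single frame.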
I also heard that the device will support mounting a camera with a depth sensor, such as an Intel RealSense or Azure Kinect, to capture real-time volumetric video. Imagine the equivalent of a Skype or Zoom call where, instead of a flat image, you see a 3D representation of the person on the other end. That is truly going to feel like the future.
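The interview below also mentions that a plain stereo pair can stand in for a depth sensor: the software estimates depth from the horizontal offset (disparity) between the two photos. A toy per-pixel matcher showing the principle; real stereo matchers use windowed costs and sub-pixel refinement, and all names here are illustrative:

```python
import numpy as np

def disparity_map(left, right, max_disp=8):
    """For each pixel, find the horizontal shift that best aligns the
    left image with the right one; larger disparity means closer."""
    left = left.astype(float)
    right = right.astype(float)
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        span = w - d
        # score candidate shift d by per-pixel absolute difference
        costs[d, :, :span] = np.abs(left[:, d:] - right[:, :span])
    return costs.argmin(axis=0)
```

The resulting disparity map plays the same role as a portrait-mode depth map, which is why stereo cameras slot so naturally into this pipeline.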
In conclusion, I'm thoroughly impressed with the Looking Glass Portrait and its potential to revolutionize the way we interact with 3D content. The device is nicely built, intuitive to use, and has a lot of exciting features that are still in development. If you're interested in checking out more information on this product, I've included a link to the Kickstarter campaign below. Happy New Year!
"WEBVTTKind: captionsLanguage: enhey everyone it's norm from tested and first of all happy new year hope you all had a wonderful holiday and for those of you who were able to take a a break had a wonderful break as well and it's 20 21 and i wanted to kick off this year with a show-and-tell and in fact going hands-on with an upcoming piece of hardware that i'm able to have a prototype with here today so if you've watched tested videos in the past back at maker faire and even in our studio we've had conversations and checked out the products of a company called looking glass factory this is a team of folks who over the past decade have been experimenting with products and prototypes uh in holographic imagery uh things that can display volumetric images uh that you can look around it's really cool stuff but a wide range of products so whether it's like a matrix of led lights that they started with to back in 2018 they put out a product called the looking glass which was this kind of acrylic lucite rectangle this display that you put in your desk and inside it looked like a 3d image that you could look around like a light field display so not just a stereo image like when you wear with 3d glasses to watch a 3d movie here it looked like a 3d model or photos or video that live inside this piece of thick acrylic glass and for 2021 they're putting out yet another iteration of this it's called the looking glass portrait they actually have been running a kickstarter for it for the past month or so and it's ending in about a week and a half and they were able to send me an early prototype of the looking glass portrait so as in years past as in products past that we've shown off it's really tough to show a holographic image in a flat video a 16x9 youtube video if we're not filming this in stereoscopic 3d but i'll do my best to try to explain how this device works and what the experience is and i think even in this flat video you might be able to see how it's supposed to look 
So this is one of the early hand-built prototypes of the Looking Glass Portrait. Basically, it's a computer that runs a display; there's a Raspberry Pi 4 on the inside, running what's almost like a digital photo frame. But as opposed to the static photos you take with your DSLR or your smartphone, these are volumetric images. Now, the last Looking Glass product was, like I said, this piece of acrylic, this piece of Lucite, and the images all looked like they lived inside that volume. The same kind of effect is happening here, but the big difference is there's no acrylic: it's a hollow space. There is a display in here, and I'm actually going to shift to a second camera so you can get a better sense of it, but if you look at this display, they have an animation playing, and as I shift and rotate the screen, you can see that you're getting different perspectives.

Here's the way a light field display like this is supposed to work. A stereoscopic video gives you two views, essentially a left-eye view and a right-eye view, which, when matched with what your eyes see, give you the sensation of depth, the sensation of a right perspective and a left perspective of an image as you would see an object in the real world. This display, by contrast, can show up to a hundred different views. Think of one of those old lenticular children's books: you have a screen filter over a printed image, and when you look at the printed image through that filter, without 3D glasses, you can see either a moving image or some type of 3D image. But that's one static image with only two views, usually repeated and interlaced. Here you have between 45 and 100 different views that the Raspberry Pi can process, so you get a much more convincing perspective and view of whatever photo, object, or 3D model they display in here, with much more range of movement.

The viewing area is about 58 degrees. So, once again, if we look at this image: this is actually a light field image, essentially a combination of a bunch of different images combined together and displayed all at the same time. It doesn't matter what angle you look at it from, as long as you're within that 58-degree viewing cone. This would be outside of it, and the image fades away; this would be outside of it, and the image fades away. You actually get to look around the image, and you can see it pretty dramatically in the shadow, and where the hand blocks the logo at the top right. It's only left and right, so the effect still persists a little bit but warps slightly if you tilt it up and down. But basically, if I'm turning my head and moving around the Looking Glass Portrait, it looks like there's a person inside this frame.

There are a couple of things about the Looking Glass Portrait that make it exciting for me as a fan of this technology and a follower of the prototypes they've been working on. One is the price point: for the Kickstarter campaign this is priced at 250 dollars, which is far cheaper and more affordable than the previous products they've put out. It's still early-adopter pricing, but it's something that feels attainable, and actually useful, because of the ability now for you and me to take photos with our phones with depth information. If you have an iPhone X, 11, or 12, or an iPhone 12 Pro with the LiDAR sensor in the back, you can take flat photos that also embed a depth map in those portrait images, which you can then run through software and display as a volumetric image on the Looking Glass Portrait. I was able to send some of my portrait-mode photos over to the Looking Glass team, and they processed them and uploaded them to the prototype, and it looks pretty good. It's not as fully 3D as
if you take those 45 to 100 different views and make a real, true light field image, but it's pretty compelling. It feels very much like that Minority Report display, where you're grafting 2D images onto a 3D model. You can't necessarily look behind it; there are going to be occlusion artifacts you can't get around with a depth map. But the depth is there: images and faces look like they're living inside this display. I was also able to chat with the CEO of Looking Glass Factory, Shawn Frayne, over Zoom, to learn about the development of this product and their hopes for what backers are going to be able to do with it. So let's take a listen.

"The first Looking Glass product line that we launched a couple of years ago was really a developer kit. Thousands of folks got those systems and really showed the way to what folks wanted to do with holographic displays, what was possible right around the corner with a few technological advances for this new type of interface. So about 18 months ago we started working, sort of quietly, on a system that could go on everyone's desk in 3D land, expanding beyond strictly Unity and Unreal developers, although this system is also great for them, to folks who are experimenting with 3D photography, folks who might have bought an iPhone 12 Pro because it's got a LiDAR camera and were curious what it can do. Going from tens of thousands of folks to tens of millions of folks. And we put the pedal to the metal on development around March or so, when the pandemic started to hit, because we thought this was something folks would want to have, not at their offices, but to play around with and explore on their desks at home. And it's called the Portrait. It's still a kind of rectangular display, and the fundamentals of how the optics work, how the whole system works, are the same premise as the Looking Glass, except now shrunken down in terms of dimensions. The fundamentals are the same in that, how do I put it, a car has four wheels, and this is another car, in that sense. It still works by projecting out millions of rays of light, so it is a light field display, as opposed to some other types of three-dimensional technologies that scatter light against a physical medium, and other things we've developed in the distant past. In that sense it's an extension, or a second generation, of the first Looking Glass that folks might know from your show. But there are a lot of big differences that we've rolled in, that we dreamed about from the very beginning. This is the first Looking Glass that actually lets you touch the holograms: they extend beyond the physical bounds of the device itself. We didn't find a good way to communicate this over video, but there's no block in this generation of Looking Glass. It's sort of a hollow interior, and the holograms that live within it can extend a few inches in front of and behind the physical structure of the device. And it's got a built-in computer, a Raspberry Pi 4, that can run a bunch of holographic media standalone, plus a few other pretty significant advances on the software-stack side of things."

"I'm trying to wrap my head around the display side of it, because the first generation of Looking Glass was really easy to explain: it was just a volume, you described the block of the display, and the images were literally within that volume. The really compelling demos had, for example, animated goldfish swimming around in it. Everything was contained, which made sense with the way you're doing the light field imaging. How does this go beyond that?"

"So, in a lot of ways it's a similar fundamental technology to
what the first-generation Looking Glass was. In that system we sent out a few dozen different views or perspectives of a three-dimensional scene. We do that by controlling the directionality of millions of small subpixels, and I can go into more detail on exactly what we do there in case folks are curious. In this system we've cranked that up, so between 45 and up to 100 different perspectives are shot out of the display in about a 58-degree view cone, and that full light field data set is updated every 60th of a second, so you can still have fully dynamic, living holographic information. The big difference with the Looking Glass Portrait versus the prior generation of systems is that we've made enough advances, through some fancy stuff we're doing on the software side and also on the optics, that we can actually remove the high-index-of-refraction block of material that contained the holographic information before. It's almost as if we took that device and just took away the block, and now the holograms can exist in front of and behind the physical volume of the display. It's hard to explain, and hopefully we got enough across in some of the videos so folks can get a sense of it. We didn't want to do any post-processing or effects, so everything folks see online is just shot with my camera, or the camera of some other folks on the team."

"Is it fair to say that the block was a necessity of what the output could do in that first generation, to contain it, to limit the views or lock them into that volume, and now, because the processing allows for a greater number of views, that restriction is gone and you can be more bold with what you want to project out there?"

"Yeah, absolutely. Stereoscopy plays a role, so this is a super-stereoscopic system in that we're pumping out a lot of different perspectives of a scene, and those are three-dimensional in nature, but there are a lot of other depth cues that go into believing that something is real in front of you. A lot of three-dimensional display technology of the past has only focused on stereoscopy or things of that nature, whereas we try to focus on the whole kit and caboodle of what makes the real world feel real. That's a long-winded way of saying yes: it's all of these advances combined that finally let the hologram break out of the physical volume."

"The first Looking Glass, like you said, was for developers. It allowed for real-time playback and animations. This is a self-contained unit, so a lot of the expected experiences, as you talked about in your campaign, involve taking a photo with a LiDAR-based depth-sensing camera like people have on their iPhones and then importing it with software. Can you talk about that process? Because those are different things, right? A depth map is very different from the kind of light field slices that you generate."

"Sure. The Looking Glass Portrait supports two modes of operation. One is desktop mode: for folks familiar with our first-generation systems, the Looking Glass Portrait can work with PCs and Macs in the same way. Plug it in over HDMI and a USB-C cable and you're ready to go, using the computational power of that computer, which is great for Unity developers and what have you. But for standalone operation, all you need to do is plug in power. You don't need to plug into a separate computer, because it has this Raspberry Pi inside that can run the computation fast enough and display living, moving, and also static holographic information. The easiest way for folks to get started making their own holograms for the device is what you're bringing up, Norm: a single portrait-mode photo. There's this sort of secret behind all portrait-mode
photos, and that's that they're recording a depth map. Usually that depth map is only used to blur out the background behind the main subject, sort of like what's happening behind you with that awesome camera you have. I don't have that awesome camera. But that depth information can also be used to generate the multiple views or perspectives necessary to make a holographic image or holographic video. So millions of folks' phones have this capability and have already recorded many billions of depth maps behind these portrait-mode photos, and they're extractable with new software we're releasing for free to folks who get a Looking Glass Portrait, called HoloPlay Studio. That's an easy way for folks to generate their first holograms. For folks who want to take it to another level, we do support rail capture, or panning shots, which get a number of actually different perspectives from the real world, and that can be piped into our software and into a Looking Glass Portrait as well. The advantage there is that you can capture and display things like refraction, reflections, or specular details that aren't captured by a single portrait-mode photo and the depth map it contains."

"So you're talking about stereo images, or actually way more than stereo, as many as you want to input: the old Matrix bullet-time style."

"Exactly: a camera on a rail."

"And then the software you're working on can interpolate and interpret that into the light field image."

"Exactly. So there are these two different ways to capture the real world and make holographic photos or videos from those captures. One is depth photos, which are best carried by a single portrait-mode photo. The other is what you're getting at, basically the equivalent of bullet-time photos, which you can capture with a single camera moving on a rail, as a lot of folks in the community know, or with a camera array. In the case of the famous Matrix bullet-time capture, that's a spatial capture played back in time, whereas we take that and play it back in an instant in space. So we're really doing this transform between time-and-space recording, and then outputting that through a hologram. I know that's the nerdiest way to possibly put this, but a hologram is a movie that's played back in space and time."

"Right, and as users manipulate their own viewpoint by moving their heads, they're essentially scrubbing through the viewpoints."

"Yeah, exactly. And it's a really powerful concept that applies to all sorts of things. Once you have the idea that you could take a panning shot that captures different points of view in space, and then smoosh all of that into a single moment played back through a holographic display, you can apply it not only to real-world capture; you can also imagine, oh, what would it be like to move a virtual camera over Google Maps, for instance? Now there are holograms hiding everywhere that can be played back through the system."

"It's not a feature currently supported by iPhones, combining depth mapping with video, but is that a possibility, where the software can take advantage of a depth map, either generated by LiDAR or stereo, at multiple frames a second, and process that to play back at some frame rate?"

"Yeah, absolutely, and it actually is supported by a small number of apps. We just unlocked this as a free thing for folks in the community who are getting these new systems, because we hit the stretch goal on our campaign. We've partnered with an app maker that made an app called
Record3D, and this works with the iPhone X, 11, and 12, both with the TrueDepth front-facing camera and also the back-facing cameras, some of which are LiDAR-enabled. So you can record Minority Report-style videos. I was actually recording my two kiddos the other day decorating the tree, and then I put it on the Looking Glass Portrait, and it's crazy how similar that feeling is to the original Minority Report videos, where Tom Cruise is looking at his son sort of projected out from the wall as a hologram. So folks will now be able to do that without any programming whatsoever, just through this app and then through our software."

"And those third-party apps: what they're outputting is essentially a depth map, but at multiple frames, one for every frame. Now, there are also on the marketplace right now a bunch of stereo cameras, like VR180 cameras, that can do stereo imaging at up to 60 fps. You can get a sense of depth with that; it's not the same as a depth map, but are those compatible?"

"So we have direct support in HoloPlay Studio, which again is sort of this central hub for converting different types of 3D capture and 3D formats into a holographic format suitable for loading onto folks' Looking Glasses. We have support rolling out in Q3 as a software update for stereo photos, so folks will be able to drop in a stereo pair, and immediately, what happens in the background is that a depth map is generated, and then we extract the number of different perspectives necessary to display it, glasses-free, on a Looking Glass Portrait. It also works with the other Looking Glasses folks might already have. What we don't directly support is 360 3D video, although we have run that on our first-generation Looking Glasses with experimental apps, and folks in the community have made a bunch of these, so I expect that in 2021 folks in the community will probably release some players for the Looking Glass Portrait for 360 3D video, but we're not officially supporting that at this time. But if folks are taking stereo shots, this is hopefully going to be the easiest way to view those that has ever been around."

"Yeah, no need to put on any type of headset or wear some type of stereo glasses. Is it a compute-heavy process for HoloPlay Studio to output the light field image based on a photo and depth map, or video and depth map? What I'm getting at is: how soon before this can be done in relative real time, so people can have video conversations with depth data and output them on the Portrait?"

"Folks are already experimenting with that, like holographic Zoom or what have you, using depth cameras like the Azure Kinect or RealSense cameras, or even some of these newer iPhones, with their current-generation Looking Glasses. So it's pretty clear there's going to be a lot of that with the Looking Glass Portrait. It'll be experimental at first, so it won't be something we're releasing as a product, but folks who really want to get to the edge of the frontier will probably be able to experience that in 2021, thanks to the community. It does require being connected to a PC or Mac for that heavier-duty streaming processing. It's not too far away before it could be completely standalone, but it's kind of tricky to speculate exactly when."

"And to get it to this price point, it sounds like there's a confluence of available technologies: things like the Raspberry Pi, some manufacturing efficiencies, things that get it there. Is that what you're always looking at now, with the fundamentals down, looking down the line to see what's going to be available, what's going to perfectly match, so you can get a product that
can deliver this type of experience going forward?"

"Oh yeah, we always have a bunch of stuff cooking in the background. The stuff we can release publicly is probably 18 months or so behind what's cooking back there. With this launch we wanted to prove that it was possible to have a much more affordable system, under a few hundred dollars, with this standalone functionality, so you didn't always have to be tethered to a PC or Mac to have holograms living on your desk, and to really lean into people, characters, and real-world capture being a significant part of the holographic experience. A lot of what we had been doing in the prior 18 months was around synthetic content, which is great, but capture and display of the real world is a big part of the dream of the hologram. That's what we see in all the movies, so this is a carrier of that dream in a lot of ways. There's a lot more to come on top of this, but we've got our hands full right now delivering this to folks."

"Totally. In terms of the manufacturing pipeline, you're working with prototypes right now, but how will the final versions differ, and where are you in the production pipeline to ensure delivery next year?"

"Like I mentioned, this had been in development for about 18 months, and then we dramatically accelerated the time frame earlier in 2020. Counting the very early prototypes, we've made coming up on probably 100 prototype units, and we're testing about 50 early beta units right now in the lab, just to run them through their paces. We're getting those to a small handful of folks we've known in the community for a long time who are testing some stuff for us. Then we do a shipment in January for 50 folks from the Kickstarter who purchased advanced beta units, and they'll get early access to the software as well, which they can use to load their own holographic info onto their Looking Glass Portraits. Then the main production shipments go out in spring 2021, and we're on track for that. It's obviously a particularly challenging moment for making hardware, but we've done this enough times that we're confident in the game plan we're rolling forward with. And folks can ask about this; we try to be as transparent as possible. If folks have questions, we'll even show them around our lab and what have you. Folks can ping me directly and I'll give a frank answer about wherever we're at."

"Awesome. I really appreciate that, and thank you so much for the time. It's really exciting to see the campaign take off, and the features you're unlocking with those stretch goals make the product ever more exciting. Shawn, it's great to see you, it's great to check in. Thank you so much for continuing to develop this product and making it accessible for folks like us who are really interested in it."

"Thanks, Norm. Really appreciate it."

So I want to thank Shawn for getting online with me and chatting about their journey developing this product. But now let's take a look at the hardware. One of the first things that struck me is just how nice the fit and finish of this pre-production unit is. If their shipping units are anything like the build quality of this prototype, I'm going to be really impressed. Looking at the hardware: like I said, the back is where the computing unit, the Raspberry Pi 4, lives, and you can see some fins here for passive cooling. You'll also notice some slots for hanging it on a wall if you want to. But I've been keeping it on the stand, which attaches magnetically to the back. You pop that off and
it then very easily slots onto two screws on the bottom, like so, and displays at the ideal angle for viewing while it sits on a table, within that perfect sweet spot, the 58-degree cone of view. There's also a quarter-inch mount on the bottom if you want to put it on some type of stand or tripod. On this side there's a full-size HDMI port; there's USB-C for power as well as data, so you can transfer your photos over or directly tap into that Raspberry Pi 4; and there's an auxiliary audio out, a speaker jack essentially, for people who want to create volumetric videos with audio playback. On the other side there are two buttons: a power button, and underneath it a button that adjusts the light around the frame. If I press that, you can see there are five brightness settings, and this light around the frame does a lot to help sell the illusion of depth. You can also see around the edges here, as Shawn mentioned, it's mirrored. So while there is a screen in there, the mirrored sides make the image look larger and deeper than the unit's actual physical dimensions. The display itself is very bright, and off-axis, as you can see, very reflective. What this rectangular light also does, as you can see as I tilt it to the side, is create the illusion of a border around the entire volume the image inhabits, kind of replacing the need for the Lucite acrylic they had in the previous unit. It's actually an empty cavity here: I can put my hand in and touch the display if I want. Over time it does seem to pick up a little dust, which is easy to blow off, and the dust doesn't affect the quality of the image. From three or four feet away it looks fully 3D and pretty sharp, but if you get up close, say a foot away, you can see that the way they get the multiple views means they have to split the resolution of the display across those views, so what you're seeing at any one time is relatively lower resolution: you can basically see the pixels of the image.

On the side here there are three buttons. The top two cycle through the library of images, which is going to include a bunch of 3D models and animations for backers, and the bottom one stops and starts animations, or cycles and freezes them. For example, for this one I can pause it, play it, or, if I hold the button, have it stop cycling or cycle again. One of the things I really hope for is a way to generate playlists and browse: if you have a thousand different images, which you can store in the built-in storage, you're going to want to get to the one you want rather than cycling through all thousand of them with these two buttons. The part of the experience I'm not able to test right now is the software side of things: the workflow for getting portrait-mode photos off my phone, run through the software, turned into a light field photo, and uploaded to the Looking Glass Portrait. They've promised it will be streamlined, and that the software will be bundled for Kickstarter campaign backers. If that turns out to be relatively easy, because there are so many portrait-mode photos on my phone I want to get onto this, and if it's really straightforward to capture even volumetric video with a third-party application, then this becomes incredibly compelling. Because even with just the bundled images and animations, and the few photos they shipped with this prototype unit for me to check out, it's captured the attention of everyone I've put it in front of. In my home, I've had it in my living room, and my kid loves looking at it. And that 58-degree field of view, the sweet spot you have to be in to see
the image: yes, that's technically a limitation, but in use it's very easy to overcome. It doesn't feel like the device is broken; it's so intuitive to just move and shift your head to see the image. It's very clear, very bright, works great in daylight, and it's something that has really captured the imagination of everyone I've put it in front of.

But the more exciting potential is beyond that, beyond just putting 3D photos and videos on this: doing something more interactive, having it plugged in over data or HDMI to a laptop or a computer so you can display and render things in real time. With the previous Looking Glass, which is essentially the same technology, you could run games on it and display 3D objects, and they're promising support for displaying OBJs and STLs. So imagine downloading something from Thingiverse and, instead of using a mouse to do a turnaround of an image, actually having it displayed so you can look around it with your head on a volumetric display. And then even beyond that, something I really want to see happen, and that Shawn talked a little bit about, is mounting a camera with a depth sensor, like an Intel RealSense or an Azure Kinect, and doing real-time volumetric video: the equivalent of a Skype call, a Zoom call, a FaceTime call, but you're looking at a 3D representation of the person you're talking to, provided they have another Looking Glass and a volumetric camera on their side. That's really going to feel like the future.

So that's the Looking Glass Portrait. It's a real thing; this prototype is really nicely built, and I can't wait to get my hands on a final production unit. In the meantime, there's still a little bit of time left on their campaign; I'll include a link below in the description and the comments if you want to check it out. Thank you so much for watching, and happy new year. We've got so much cool stuff planned that we can't wait to share with you, so we'll see you next time. Bye!