PROJECTIONS, Episode 13 - Avegant's Lightfield Augmented Reality Prototype!

Avegant is initially targeting enterprise customers, who would use the headset to design products and demonstrate them to third parties. Because those buyers are less price-sensitive, Avegant can charge a higher price. The technology itself is still early, however, and Avegant is relying on third-party developers to supply tracking systems.

Avegant wants to be a display manufacturer that supplies its volumetric display technology to other vendors, not unlike how Valve developed its VR technology and passed it to HTC, which shipped it as the Vive. Companies will likely want to understand how the volumetric display works, but the exact mechanics remain unclear.

According to the company, the HDMI signal from the computer contains a single two-dimensional image at any point in time, with per-pixel depth information embedded as metadata. The headset interprets that depth data and extrapolates it into multiple layers of focus. The company claims the technology does not depend on expensive optical hardware; the components are custom-designed but inexpensive.
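Avegant has not disclosed how the depth data rides inside the HDMI signal, only that the signal stays compatible with ordinary video cards. As a rough illustration of the idea, here is a minimal Python sketch under an invented packing scheme, where each frame carries a color image plus a per-pixel depth map; the layout, clip-plane values, and function name are all assumptions, not Avegant's format:

```python
import numpy as np

# Hypothetical packing: the top half of each frame is the 2D color
# image, the bottom half carries per-pixel depth as 8-bit grayscale.
# Avegant has not disclosed its actual encoding.
def split_rgbd_frame(frame: np.ndarray, near: float = 0.1, far: float = 100.0):
    """Split a (2H, W, 3) video frame into a color image and metric depth."""
    h = frame.shape[0] // 2
    color = frame[:h]                              # (H, W, 3) color image
    depth01 = frame[h:, :, 0].astype(np.float32) / 255.0
    depth_m = near + depth01 * (far - near)        # normalized -> meters
    return color, depth_m
```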

The volumetric display works by sending that depth metadata alongside the HDMI video signal, which the headset then uses to build a three-dimensional image. The conversion happens in the headset itself, where the optics redirect light to create the multiple layers of focus. The eye receives multiple depths of information at all times, just as it does in the real world.
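The company will not say how many focal layers the headset generates, but the step it describes, turning a color image plus depth into a stack of focal planes, can be sketched. The plane spacing below (chosen in diopters, since the eye's focus is roughly linear in diopters rather than meters) and the nearest-plane assignment are illustrative assumptions, not Avegant's actual pipeline:

```python
import numpy as np

def slice_into_focal_planes(color, depth_m,
                            plane_diopters=(0.0, 0.5, 1.0, 2.0, 4.0, 10.0)):
    """Assign each pixel to the focal plane nearest its focus demand.

    Plane count and spacing are illustrative; Avegant does not say
    how many planes its prototype generates.
    """
    diopters = 1.0 / np.maximum(depth_m, 1e-3)       # focus demand per pixel
    planes = np.asarray(plane_diopters)
    nearest = np.abs(diopters[..., None] - planes).argmin(axis=-1)
    # One image per plane: each pixel keeps its color on its own plane only.
    return [np.where((nearest == i)[..., None], color, 0)
            for i in range(len(planes))]
```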

The company's technology engages the eye's two coupled depth mechanisms: convergence, the inward rotation of the two eyes toward a nearby fixation point, and accommodation, the flexing of each eye's lens to bring that point into focus. Because the headset presents multiple focal planes simultaneously, the viewer can focus on different parts of the image at varying distances while keeping both cues consistent.
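Both cues can be put in numbers. Accommodation is measured in diopters, the reciprocal of distance in meters, and convergence as the angle between the two eyes' lines of sight; the interpupillary distance below is a typical value we have assumed, not one the company gives:

```latex
A = \frac{1}{d} \quad \text{(accommodation in diopters, $d$ in meters)}
\qquad
\theta \approx 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right) \quad \text{(convergence angle, } \mathrm{IPD} \approx 0.063\,\mathrm{m})
```

A display is comfortable when the distance implied by the convergence angle matches the distance implied by accommodation. Conventional stereo 3D fixes accommodation at a single focal plane while varying convergence, which is exactly the vergence-accommodation conflict discussed in the interview below.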

This is not yet a product; the company's goal for now is to develop the technology further. Still, getting a first taste of it was exciting, and many are eager to see what comes next in advancements and potential applications. The arrival of 90Hz tracking and displays in VR was similarly groundbreaking a few years ago, and future developments here may prove just as impressive.

The demo has already impressed those who tried it. Viewers interested in volumetric displays can post comments below with their thoughts on how the technology might work and what it would take to bring it to market.

"WEBVTTKind: captionsLanguage: enhey everybody its norm from tested Jeremy from tested welcome to projections our show about virtual reality and this week more specifically about augmented reality so we've seen quite a few new augmented reality technologies now we saw the meta - I've used hololens and this week we got to use something from the company of a gand these are the guys that made the glyph yeah which was not an AR device or our device it was just it was more like a portable television which would strapped on your head kind of like Geordi LaForge and it would beam images directly onto your retinas and they have pivoted from the glyphs so there's knowledge E was what they call retinal imaging which was basically - DLP projectors made by Texas Instruments 720p projectors that were balanced around mirrors and went right into your eyes no screen you're looking at it's a really high fill factor a really bright display but they've adapted those for that projection system into an augmented reality headset now it's really tough to explain and show what that means so maybe the best thing for us to do is to show you our demo and our conversation with Avedon CTO about their technology a Tom again we've really focused on bringing new display technologies to the market so in the past you might have seen a product we shipped from cliff those primarily around bringing new display technology what we call a retinal imaging display to the market basically instead of using a physical screen we use millions of micro mirrors to actually project light directly to your retina there's some really incredible advantages to that type of technology that you might have seen in a product like super bright high-resolution images very vivid and with incredible performance like really high high frame rates and low latency and that's what we did in the past is taking these brand new display technologies to the market so now what we're doing is really focused on new mixed reality technologies we've taken the core of retinal imaging technology we have the glyph and now we're transforming that into these transparent mixed reality prototypes and the new thing that we bring into the table here is a new light field display the ability to bring light field technologies to mix reality it's really an enabling experience for the industry and really excited to like show you what that experience is like so with the with the glyph with your last product the image was projected straight into my eyes by I seen it was in front I was obscuring the world so there was no way to do mixed reality with that so now you're taking the same kind of projectors and but you're projecting them through a lens that that allows me to see the real world that's exactly right so we're actually projecting the image off of what's called a combiner and what that allows us to do is to combine the light from the real world from the outside with the light that we're projecting into your eye so now we're actually mixing these two the virtual light virtual images with the real world now that's what's going to allow us to actually mix it in virtual images into the real world experience and by adding light field to that it's going to add a whole new level of realism and enable a lot of new experiences that you couldn't have before it's something the two things going on and what you guys are just talking about is you could have turned the retinal display technology to DLP projectors essentially and made it transparent by using like a partially silvered mirror 
Tang: Right, that's what a combiner is. But light field is a whole separate thing.

Jeremy: So explain what light field display technology is, what the experience is supposed to be.

Tang: Sure, a little background on how we got to this point. We always knew that transparent mixed reality displays are going to play an important part in the future of where these types of headsets are going, potentially replacing your smartphones and computers down the road. What we found is that you can make a transparent display look pretty good, and with our retinal imaging technology actually really high quality, but without light field it's very difficult to make a really compelling experience. Part of that comes down to how we naturally see the world, how we perceive depth, how our eyes sense how far away objects are. There are a lot of depth cues, but when we start looking at things up close, within a couple of meters, what becomes important is the concept of convergence and accommodation: the angle of my eyes, which is convergence, and the focus of my eyes, which is accommodation. When we grow up as kids, when we're developing our binocular vision, these two things become really tightly coupled. If I look at my finger, you can see that my eyes converge at this point, and each of my eyes also focuses at this distance. These two properties need to move in lockstep. The issue we see in 3D displays today, these non-light-field displays, both in VR and AR, is that they break the two apart. This is a pretty well-known problem; they call it the vergence-accommodation conflict. One issue with this conflict is physiological: a lot of people get eyestrain and headaches. It's like going to a 3D movie and putting on those 3D glasses; a lot of people don't feel good doing that. But the other problem, which is at least as big a problem in mixed reality, is that with transparent displays, if I want to put a virtual image into your real world, it needs to look like it's really in the real world. If something is sitting on the table in front of me, it actually needs to look like it's on this table, or in my hand.

Norm: So that your eyes can focus not just on the object but on everything else surrounding it at the same time.

Tang: That's right. If you want a virtual object in the real world that looks realistic, it needs to match all the properties of the light around it. If it doesn't match, things don't look realistic, and it can really start to make you feel sick. With this vergence-accommodation mismatch you get headaches and eyestrain, potentially even nausea. In the mixed reality space it becomes a really difficult problem to solve, because the real world is always there and it's completely unforgiving. It is what it is; it's always this ground-truth reference point. If I'm going to bring in a virtual object, it needs to look correct at all times. Displays today aren't really doing that; there aren't any light field solutions in the market yet.
We've seen some pretty interesting AR products out there, and a lot of them are pretty cool, but the fact that they're fixed focus and don't solve this accommodation problem really limits the types of experiences they can deliver. You'll notice the experiences are limited to things that are a little farther away, generally one meter or a meter and a half and beyond.

Norm: And that's because their display technology, the way they're bouncing the display off a mirror or projecting onto the transparent visor, is locked to that distance. Do most of the accommodation changes come within a meter? Is there a lot of change at that depth?

Tang: Exactly. The way you measure focus, the way optometrists do, is in diopters. Diopters are the units of focus, and it's one over distance. If you look at something one meter away, that's one diopter; if it's at infinity, that's zero diopters. So from one meter to infinity there's only one diopter of change, but if I go from one meter to here, it's something like ten diopters of change. You can see how sensitive your eyes are to focus in that near range. That's part of why this stuff is so important: you need to get things very accurate, because there's a lot of focus change, a lot of dioptric change, within a meter.

Norm: That's funny, because in VR and AR that distance is exactly where it becomes really interesting: virtual objects on your desk, hand interactions. So without light field, a fixed-focus display would work fine for AR if everything was projected at the horizon, effectively at infinity, but these images are dynamic.

Tang: That's exactly right. People have amazing imaginations about the experiences they want to have in mixed reality, and the truth is that without a light field display you're actually pretty limited in what you can show. That's why we think light field is critical to the success of this industry going forward. If you want to build successful products with really great, compelling experiences, no matter what they are, light field is a critical component.

Norm: So, implementation. We tried the prototype, I did the demo, and it was compelling. We could see, for example, an object close and an object far away beyond arm's reach, and naturally shift between them, even with one eye closed.

Jeremy: Even with one eye closed, and that's not convergence.

Norm: Right. So how is that happening? I assume the rendering engine, it's just the Unity engine, knows what that depth difference is and that information is being passed through. How is that image then run through your optical system?

Tang: People have been trying to solve this light field display problem for a long time now, many years, and you see some pretty interesting research coming out of academia these days. But when we looked at those approaches, we didn't find one that checked all the boxes. We were looking for something that's not super expensive, pretty low cost; something very high quality; and something you can actually bring to market, where you talk about things like supply chain, quality control, and manufacturability. Those are serious issues in getting technologies from the research phase into actual products, like we've done in the past.
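Tang's diopter arithmetic above is worth writing out, since it explains why fixed-focus AR keeps content beyond a meter. Using $D = 1/d$:

```latex
D(1\,\mathrm{m}) - D(\infty) = \frac{1}{1} - 0 = 1\ \mathrm{D}
\qquad
D(0.1\,\mathrm{m}) - D(1\,\mathrm{m}) = \frac{1}{0.1} - \frac{1}{1} = 9\ \mathrm{D}
```

The entire range from one meter out to infinity spans a single diopter, while the last ninety centimeters up to a hand's distance spans roughly nine, the "ten diopters of change" he rounds to. That is why focal accuracy matters most inside arm's reach.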
Tang: Because of that, we had to create a new approach. We invented a new way of creating a light field that doesn't require very high computational power and doesn't require any mechanical moving parts. We've come up with this pretty neat solution that creates the light field display you guys experienced. What we're doing, effectively, is creating a fixed-viewpoint volumetric display: a volumetric display fixed to the eye boxes in front of your eyes. Because of that, it's generating essentially all the different focal planes that your eyes can naturally focus on, all at the same time.

Norm: An infinite number of focal planes, all at the same time?

Tang: Effectively an infinite number of focal points, but it's a digitized light field. Real life is an analog signal, but if I can digitize it enough, to the point where your eyes feel like it's continuous, then it's a good approximation of what it should be.

Norm: So are you rendering a discrete number of layers?

Tang: Yes, we render a discrete number of layers, but we do them all simultaneously. You'll see in the demo that there are a lot of points where you have continuous objects that get up close to you, continuous from here to here, and you can focus at different points along them. There are even points where you might have dozens of objects in your field, from very close, inches away, all the way out to effectively infinity, and you'll notice your eyes can just very naturally pick and choose which to focus on.

Norm: Is the number of layers limited by the projection system and its resolution?

Tang: There are always trade-offs in the system, and I think this stuff will keep developing. What's important is not necessarily the exact implementation; what's important is whether you can get to a good enough experience, a compelling experience for the user, in a way that's practical to actually ship in products. One thing that's pretty interesting, which you brought up with the one-eye observation, is that because we're generating this volumetric display, we don't need eye tracking. It's pretty interesting to have that experience where you can close one eye and still naturally adjust your focus between a point that's near and a point that's far.

Norm: And eye tracking introduces latency.

Tang: I think eye tracking is an important part of these types of systems. It's not necessary for generating a light field display, but there are other benefits. People are already exploring it in the VR space; look at things like foveated rendering. It's always good to reduce computational load. But if you tie eye tracking to accommodation in a really tight latency loop, that could potentially be a problem.

Norm: It also seems like the volumetric display is tied to the eye box. How much does that limit the field of view you get, how much you can move your eye around?

Tang: It's kind of independent of the light field part. Any near-eye display will have a certain defined eye box. Basically, it's the area your pupil can move around in while still capturing all the rays coming out of the display.
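Tang doesn't explain how a discrete set of layers can read as continuous to the eye. One well-known trick from the multifocal-display literature is linear depth blending: a pixel whose depth falls between two planes is split between them in proportion to its dioptric distance, which drives accommodation to an intermediate point. Whether Avegant's prototype does this is our assumption; a sketch:

```python
import numpy as np

def blend_weights(pixel_diopter: float, planes: np.ndarray) -> np.ndarray:
    """Split a pixel between the two focal planes bracketing its depth.

    `planes` is an ascending array of plane positions in diopters.
    Linear depth blending is a published multifocal technique; its use
    in Avegant's prototype is our assumption, not a disclosed fact.
    """
    w = np.zeros(len(planes))
    if pixel_diopter <= planes[0]:
        w[0] = 1.0
    elif pixel_diopter >= planes[-1]:
        w[-1] = 1.0
    else:
        i = int(np.searchsorted(planes, pixel_diopter)) - 1
        t = (pixel_diopter - planes[i]) / (planes[i + 1] - planes[i])
        w[i], w[i + 1] = 1.0 - t, t   # weights sum to 1
    return w
```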
Tang: If the eye box is really small, you start losing parts of the image when you look too far off-center, and what you don't want is to lose parts of the image while you're looking at another part of it.

Jeremy: Back to the multiple layers. I don't understand how there isn't interference between the different layers. How am I not seeing two at the same time, ghosting over one another?

Tang: You are seeing them all at the same time. Think about the real world: you're actually getting images at multiple focal distances all the time. When you're looking at me, for example, you're focused on me and I'm totally clear, but you're also getting light from behind me, a bunch of different light at all different focal distances.

Jeremy: But that light is coming from different distances, from different positions. With your projector, how is it coming from different positions?

Tang: It is coming from different positions, just on a smaller scale. We are recreating this volume of light coming in from all these different angles, but optimized for one eye position, for where your eye is. It's just like looking at me right now: if you don't move your eyes, you can still see the stuff behind me and focus back there.

Norm: Is the volumetric display system tied to the retinal display system?

Tang: Actually, it's not, which is pretty interesting. This light field optic is independent from what we're doing on the retinal imaging side. But right now we look at the micro mirror approach as probably the best approach today in terms of resolution, fill factor, latency, and brightness. There are really significant advantages to what we're doing, and we're pairing the two together because we think it's ultimately the best solution.

Norm: You're running the rendered image off the Unity engine through a filter, so some computation is happening there. How computationally intensive is it to display a light field?

Tang: It's actually pretty low. The computational loads are low enough that we have this stuff running on mobile chipsets too. The demo you're seeing today is running on what I'd call a VR PC, a Windows PC with a nice GPU in it, but we also have it running on things like NVIDIA Tegra or Snapdragon. It's pretty interesting that the approach we're taking to light field can also run on mobile chipsets; it opens up a lot of applications.

Norm: So each new layer you introduce doesn't double your computation?

Tang: No, it's not quite that linear.

Norm: And the area of coverage, the field of view, is limited right now in the prototype stage. For the version of light field AR you're targeting, is the goal full coverage, as wide a field of view as possible?

Tang: I think ultimately the industry knows where it wants to go: something very thin and light, like a pair of glasses, with a huge field of view. That's going to take a little bit of time for the industry to get to, and step by step I think more people are going to solve these problems.
Right now we're very focused on solving the light field problem, which I think is a really big step toward making a great experience. What I'd say about things like field of view is that it's something we understand pretty well, and there's a lot of adjustability in the system. You asked before about different kinds of chips and different resolutions, and these are all possibilities we look at, but there are trade-offs in the size, weight, and cost of devices. You can easily go to larger fields of view if you're willing to trade off some other things. They're all manageable costs that will get better over time, and the approach we're taking isn't fundamentally limited there.

Norm: Looking at the prototype up close, at how you've arranged the mirrors, it looks like the light from your retinal displays comes down from above. And opacity, how much you see of the real world versus the projected world, seems like a trade-off as well. The demo we saw was indoors, in a fairly controlled environment. What are the limitations right now, and where do you see improvement?

Tang: Well, first of all, these are still prototypes, so we're giving you guys an early look at where things are, and prototypes always have their issues. You probably noticed a few artifacts in there. Some of the things we're definitely planning on improving are things like transparency; it's definitely not as transparent as we want it to be.

Norm: When you say transparency, you mean how well I can see the real world, not transparency of the augmented objects.

Tang: Exactly. The objects actually seem pretty solid, you can't see through them, which is pretty nice, but we want you to be able to see the real world a little better than you can now. There are a lot of improvements in the works, and it's not something we're really worried about. One of the nice things about the retinal imaging technology we're using is that, as a reflective display, it's really easy to couple in a lot of light. The system right now is actually really low power, and by using a reflective display, if you wanted to go after applications outside, where it's really nice and sunny, we can absolutely put in a lot more light. A lot of the core advantages of retinal imaging show up not only in resolution, but also in brightness and speed.

Norm: One of the appeals of your first product, the Glyph, was that even though the display technology was different, the inputs were universal: you could plug it into a phone, or plug it into a quadcopter. For this, because you have to run it through some computation, are you focused on AR tethered to a computer, or is this a technology you see going mobile?

Tang: We think AR really is going to be the future of these types of devices, and the AR world is not necessarily going to be plug-and-play, because with something like light field, putting a light field display in basically requires light field input into it.
One of the nice things we've been able to leverage, though, is the platforms that are out there that people already use, such as Unity. Unity becomes kind of a unifying platform. A couple of big platforms like Unity and Unreal, and a few others, are what all these different developers are tying into, so it becomes a really nice place to start. If you can develop on Unity, which is cross-platform on mobile and PC, if you can create content on Unity, then with our rendering plugins all the light field work just happens in the background.

Norm: What does a light field signal look like at a technical level? If I were to plug it into a TV and see your HDMI signal, is it a video signal plus metadata, or is it extra visual data?

Tang: There are a lot of different ways to encode that information. I don't know what it would actually look like plugged into a TV.

Norm: Is it like a 3D signal, where you've got extra video data, or is it just extra metadata?

Tang: The extra data being sent is not visual data. The extra data being sent is depth information about the images.

Norm: So you decode it on the headset itself, before it goes into the projectors?

Tang: That's exactly right. Think about an engine like Unity: it generally has the entire 3D scene, so it knows the depth of every single pixel in that scene. But when it outputs to a standard display, it doesn't output any depth; it takes a camera viewpoint and sends out the 2D image of that viewpoint. What we're doing is taking that viewpoint, but also asking for the depth information of all the images in it, which is sent over too. The headset can decode that and then turn it into the volumetric image.

Norm: That's the dream for light field cameras, right? They capture that depth data along with the image.

Tang: Yes, we're basically doing what a light field camera does, but in the opposite direction, using 3D rendering. And I think that's interesting, because light field capture is going to become more and more prevalent. You see companies like Lytro and OTOY doing light field capture and rendering work, so as captured light field content becomes more available, it's going to be interesting to see what that content looks like on light field displays. It's a very harmonious kind of application, but it's definitely not a requirement for getting the benefits. What you saw today is almost all existing Unity content, from a lot of existing VR experiences and from the standard Unity asset store. Because Unity already knows all the depth information about this content, we just ask it to output that to us, and we can render it correctly on our side.

Norm: You could work on this technology forever, until it's perfect, until costs come down, and iterate and iterate, but you're clearly at a point where you're ready to show people in public. How close is this to being something that can be in a product?

Tang: It's close. I would say sit tight; we'll make some announcements later this year, but we're really excited.
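The rendering plugin Tang describes isn't public, but the step he outlines, pulling per-pixel depth out of the engine along with the camera image, hinges on one standard operation: linearizing the depth buffer back to metric distance so it can be mapped to focal planes. A generic sketch (the OpenGL-style projection formula is standard; the parameter names and diopter conversion are ours):

```python
def linearize_depth(z_ndc: float, near: float, far: float) -> float:
    """Invert perspective-projection depth (z_ndc in [-1, 1]) back to
    metric eye-space distance (standard OpenGL-style formula)."""
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

def pixel_diopters(z_ndc: float, near: float = 0.1, far: float = 1000.0) -> float:
    """Focus demand of a pixel in diopters: D = 1 / distance in meters."""
    return 1.0 / linearize_depth(z_ndc, near, far)
```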
People are starting to hear about why light field displays are so important; there's been a lot of interesting hype around that. But very few people have ever seen the experience. They can understand it conceptually, but it's very different when you actually put the headset on, look at it, and experience it. That's what we set out to do: light field technology is great, we think it's required for a good AR experience, a good mixed reality experience, and we want to start showing people, saying, hey, this is what it is, and start spreading the word.

Norm: What markets do you think AR is right for in the near term, say 2017, 2018?

Tang: In the near term, the enterprise market is already desperate for this stuff. You're already seeing enterprises adopting HoloLens all over the place, in industrial applications, engineering, and medical; that's just going to take off. What's interesting is to think about what's going to happen in the consumer space. There, I think the challenge is finding the key use cases where people are really going to see a lot of value. What are we going to do with these glasses that is going to be so much better that I'm willing to put something on my face, rather than use my phone or my tablet? And how do you get the cost to a level where consumers can justify it? It becomes a real balancing act between cost, quality, the size and form factor of the device, and application versatility, so that it's not locked to whatever platform one person is developing for, and can be adapted to existing content. Look at even the glasses you're wearing right now: I would bet that if you didn't have to wear them, you wouldn't be wearing them. They're still kind of a pain.

Norm: But you would agree that the consumer product is further off than the next two years?

Tang: I think where the industry is at today is that enterprise is using it right now, and the consumer market doesn't exist yet. It'll take a little bit of time for those markets to mature.

Norm: Thank you very much for the demo.

Tang: Thanks.

Norm: Okay, Jeremy, what did you think?

Jeremy: This is something new. I've got to give Avegant plenty of credit: being able to focus on something that is clearly augmented reality, but with one eye closed, is something I've never experienced before, and I've been waiting for it since virtual reality arrived, and certainly with the other augmented reality displays. I think solving this, what did he call it, the vergence...

Norm: The vergence-accommodation problem.

Jeremy: Yes, and it's very, very important for creating a natural experience and for reducing headaches. I think people are susceptible to this kind of dissonance without even realizing it.

Norm: And it actually applies to both VR and AR.

Jeremy: Absolutely. In VR, not having a light field display is something you can get away with, because there's nothing to compare it to. You're always focusing the image at infinity, based on the lens, and things still look 3D. You still get a sense of depth because your brain, your lizard brain, works with the convergence cue.
Norm: But even though it's a stereographic image, your brain is experiencing 3D the same way we've been experiencing it since the 1950s with anaglyph glasses. It's just convergence: making your eyes angle inward when things get closer and relax outward when things get further away, which for some people creates eyestrain. It's not as comfortable. The way your brain actually wants to sense depth is through a combination of many cues, convergence being one of them and accommodation being the other, which is the way your eye's lens actually flexes to change focus.

Jeremy: It's a simple thing: hold your hand up to your face and change your focus from the background to the foreground and back again. That's accommodation, and it's what camera lenses do; it's how you get depth of field, and it's how cinematographers have forever been controlling what you focus on. Up until now, you've always had to control what people focus on in VR and AR, but with this technology it's much more natural. It allows the user, the person wearing the device, to focus on whatever they want.

Norm: It's not just more natural, it's almost essential. Of the two AR displays we tried recently, Meta being the most recent and now Avegant, one is a light field display and one isn't. With the Meta, the tracking I think was better, and they give you more; they've created a bunch of software for it. But it didn't solve the light field problem, because you would still be focusing on the objects with convergence. You would have a very clear floating brain in front of you, and when you got closer to it, it would still be sharp, but everything else wouldn't necessarily match the focus on it. And I noticed that in their demos they didn't augment the actual space near you. Not that it wasn't capable of it, but I think that would have been a telltale sign of lacking that volumetric display in the near field.

Jeremy: Exactly, because with this demo, if something is in your hand, it's at the same focus as everything else, as your hand. And that's the magic, because I reached into this aquarium that was augmented onto a table, and it's the most holographic thing I think I've ever experienced when it comes to AR, because the turtle or fish or whatever the projection was sat at the same focal depth as my hand. And that was just: yes, that's what I've been looking for. It was dynamic, too: as it moved and your hand moved along with it, it would stay on the same focal plane, and as your head moved toward it, your eyes would naturally adjust to your hand and the projection together, and they would stay aligned. That felt like something magical and essential.

Norm: Just a quick caveat: this is all sounding great and magical, but it is a very narrow field of view. Coming from VR, that's difficult, because we've been spoiled by 110-degree fields of view, where what's not in your field of view is simply cut off. In VR you only get the virtual field of view; with AR, they encourage you to take in the world around you, and what's projected is a much smaller portion of your field of view. In this case, that box you see in front of you is larger than a postage stamp, and it's actually a shade darker than the world around you. You're looking through this visor, but you see almost a one-step-down, tinted view of the world, which is something they said they would be working on improving, though it does help make the objects brighter.
Jeremy: Yeah, and it also depends on how bright your environment is, things like that. But frankly, I think they made the right choice with this, because I would prefer a more opaque augmented image, which the darker real world actually creates.

Norm: The thing that worries me is that I don't know if this is a product, unlike their first product, the Glyph, which was a display you could plug a phone into, plug a game console into, and get a stereoscopic portable TV image. Here, they're not making tracking devices, they're not making software, they're not even making a headset; they're just working on the optics. I'm a little skeptical when they say this will be something that's in people's houses in a year or two.

Jeremy: Well, my impression was they're not even thinking about people's houses yet. Their priority is going to be enterprise. They want companies to buy this so those companies can design in it and demonstrate what their products look like to other people, people for whom cost isn't as important, and they can charge a higher price. And they're not making any of that other tech; they're relying on third parties to develop the tracking systems. My impression is they actually want to be a display manufacturer, that the volumetric technology is where their focus is, and they would love to supply those displays to other vendors. Maybe not unlike how Valve developed its VR technology and passed it over to HTC.

Norm: Right. Although I'm still trying to wrap my head around it; we've been talking about it since we got the demo: how does this volumetric display work? I still can't figure it out. They told us that the HDMI signal, the video signal, is one two-dimensional image, and that in the headset there are discrete layers: you're going to be focusing on the asteroid in front of you, or, as it moves further away, on some other of the discrete layers of focus in their system. They won't tell us how many.

Jeremy: But they also said it wasn't sending that many layers of signal from your computer.

Norm: Exactly, it's one layer of signal from the computer, but with depth data. So their HDMI signal wouldn't even display normally on a TV, because they're actually sneaking information into the HDMI signal in a way that still makes it compatible with your video card and other HDMI receptacles. They snuck it in there, and they use that metadata to extrapolate depth. But how does that then get projected by something that fits in this headset, especially when they're using the same type of Texas Instruments retinal display and projection hardware as the Glyph? He also said it wasn't dependent on those, that this could be done with display technology other than what they've set it up with. The other thing that kind of blew my mind is that this is supposedly not very expensive optical hardware. It is custom, but it's not something completely designed and manufactured from the ground up; we were asking him about that price point. So for me it's: how does this light field display work, what's the snake-oil part of it? Because it definitely did work, at least in the demo that we saw.

Jeremy: The conversion happens in the headset, if that's my understanding, because it's getting this HDMI signal with this metadata that gets transformed into something volumetric in the headset.
So they have some sort of smart device in there that redirects the light to create this multi-plane light field image. And I asked, is my eye receiving just what's in focus? At first I thought they must be using eye tracking or something, but no, there's no eye tracking at all; it's instantaneous. If you look at something else in the image, your eyes will change focus, and that part will come into focus. They said my eye is actually receiving multiple depths of information all the time, in the same way that the real world projects multiple depths into my eye.

Norm: And scaled down, scaled down to something that still fits in their headset, and then through a lens that gets extrapolated out to a diopter or half a diopter away from you, most importantly in that near field. That was interesting: he said everything past about a meter is relatively the same focal depth, but for everything close to you, that's where your eye, both the converging part of focus that we're talking about, convergence, and, sorry, accommodation, both have to do a lot more work to focus on the image.

Jeremy: And we tried to stress-test that demo. When we were sitting with the virtual solar system in front of us, I would just stand there with an asteroid in front of me, focusing on the asteroid, then on the Sun, then back on the asteroid, then on the Sun, moving my head, not moving my eyes, just refocusing, and I was able to do it. I'm sure once these displays come out, somebody will take them apart and figure out how they're doing it, but we have thought and thought and thought about how they're doing this and how many layers they must be interpreting, and it's just beyond me. It's a really, really fascinating technology.

Norm: It's one of those things where, after getting this first taste of light field imaging, we want to go back. It's not ready for a product yet, but this is really exciting, the same way using 90Hz tracking and displays for VR was a couple of years ago. We can't wait to see what comes next, and we'd love to hear your thoughts on this, on how they could possibly be doing it. Post your comments below, subscribe to our channel, like this video, and Jeremy and I will see you next time.