**The Evolution of Hand Tracking Technology**
Hand tracking technology has come a long way since its inception. What was once complex and unreliable to implement has become steadily more capable, with each new generation of hardware and software building on the last to deliver more accurate, more seamless interactions.
One of the key milestones in hand tracking technology was the recognition of individual fingers as distinct entities. This breakthrough enabled developers to create second-order interactions, where a player manipulates an object that is already in their hand with an additional finger gesture. For example, a player can pick up a spray bottle and then press its trigger, click a button with their thumb, or squeeze a bottle they are holding lightly. Such interactions dramatically widen the range of possibilities for hand tracking.
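To make the idea concrete, here is a minimal sketch of a second-order interaction. It assumes a hypothetical `HandPose` type with per-joint positions; real platforms expose comparable per-joint data under their own APIs, and the thresholds here are illustrative guesses, not values from any shipping game.

```typescript
// Hypothetical joint data; real platforms expose similar per-joint
// poses under different, platform-specific APIs.
type Vec3 = { x: number; y: number; z: number };

interface HandPose {
  palm: Vec3;
  thumbTip: Vec3;
  indexTip: Vec3;
  middleTip: Vec3;
  ringTip: Vec3;
  pinkyTip: Vec3;
}

const dist = (a: Vec3, b: Vec3): number =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// First-order interaction: the hand counts as grasping when most
// fingertips are curled in close to the palm.
function isGrasping(hand: HandPose, curlThreshold = 0.06): boolean {
  const tips = [hand.indexTip, hand.middleTip, hand.ringTip, hand.pinkyTip];
  const curled = tips.filter((t) => dist(t, hand.palm) < curlThreshold);
  return curled.length >= 3; // tolerate one noisy finger
}

// Second-order interaction: while the object is held, a separate thumb
// press (thumb tip dipping toward the palm) fires the object's action,
// e.g. a spray bottle's trigger.
function updateSprayBottle(hand: HandPose, held: boolean): boolean {
  if (!held || !isGrasping(hand)) return false;
  const thumbPressed = dist(hand.thumbTip, hand.palm) < 0.04; // ~4 cm
  return thumbPressed; // caller emits spray particles while this is true
}
```

The key design point is layering: the grasp check gates the gesture check, so an accidental thumb twitch with an open hand never triggers the object's action.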
Real-world experiments have also shaped the technology's direction. One developer prototyped a sponge simulation that let players squeeze the sponge and wring it out, demonstrating how naturally hand tracking maps onto everyday objects and interactions. The success of that experiment paved the way for more complex interactions, such as spray bottles that actually spray.
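A squeeze mechanic like the sponge can be driven by a single scalar derived from finger curl. The sketch below reuses the hypothetical `HandPose` type and `dist` helper from the previous example; the open and closed hand distances are assumed calibration constants.

```typescript
// Map how tightly the fingers are curled to a normalized squeeze value.
function squeezeAmount(hand: HandPose): number {
  const tips = [hand.indexTip, hand.middleTip, hand.ringTip, hand.pinkyTip];
  const open = 0.10;   // avg tip-to-palm distance, relaxed hand (meters)
  const closed = 0.03; // avg tip-to-palm distance, full fist (meters)
  const avg = tips.reduce((s, t) => s + dist(t, hand.palm), 0) / tips.length;
  // 0 = not squeezing, 1 = fully squeezed; clamp against tracking noise.
  return Math.min(1, Math.max(0, (open - avg) / (open - closed)));
}

// Drive the sponge: compress it as it is squeezed, and report when the
// squeeze is hard enough to wring water out.
function updateSponge(hand: HandPose, sponge: { scale: number }): boolean {
  const s = squeezeAmount(hand);
  sponge.scale = 1 - 0.5 * s;  // visually compress up to 50%
  return s > 0.8;              // true => spawn water droplets this frame
}
```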
As hand tracking improves, the compensating systems built around its limitations become less necessary. In the past, developers wrote complex heuristics and workarounds to paper over tracking errors; as the underlying tracking gets better, those layers can be progressively dialed back or removed, leaving cleaner, more natural experiences.
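One common example of such a compensating layer is jitter smoothing on raw joint positions. The sketch below is an assumption about how such a layer might look, not any studio's actual code: a simple exponential filter whose `smoothing` constant is exactly the kind of workaround that can be tuned toward zero as tracking quality improves. The per-joint `confidence` input is also an assumption, since not every platform reports one.

```typescript
// Exponential smoothing over raw joint positions to hide tracking jitter.
class JointFilter {
  private last: Vec3 | null = null;

  constructor(private smoothing = 0.6) {} // 0 = trust raw data fully

  update(raw: Vec3, confidence: number): Vec3 {
    // When the platform reports low confidence (e.g. occluded fingers),
    // lean harder on the previous frame; otherwise trust the new sample.
    const a = this.last === null ? 0 : this.smoothing * (1 - confidence);
    const lerp = (p: number, q: number) => a * p + (1 - a) * q;
    const out = this.last
      ? {
          x: lerp(this.last.x, raw.x),
          y: lerp(this.last.y, raw.y),
          z: lerp(this.last.z, raw.z),
        }
      : raw;
    this.last = out;
    return out;
  }
}
```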
The advent of multiple platforms has also presented new challenges and opportunities for hand tracking development. Different platforms expose slightly different hand skeletons and input methods, so Alchemy Labs built an internal layer, Alchemy VR, that normalizes each platform's hand data into a unified set of inputs. Developers at the studio build against that unified set rather than against any single platform, leaving them free to focus on the overall aesthetic and experience.
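The general shape of such a unification layer is an adapter per platform feeding one shared frame format. The sketch below illustrates that pattern only; the interfaces, the adapter, and the native data layout are all hypothetical and are not Alchemy's actual implementation. It reuses the `Vec3` type from the earlier sketches.

```typescript
// One shared frame format that game code sees on every platform.
interface UnifiedHandFrame {
  joints: Record<string, Vec3>; // canonical joint names
  pinchStrength: number;        // 0..1, derived if not provided natively
  isTracked: boolean;
}

interface PlatformHandAdapter {
  readFrame(): UnifiedHandFrame;
}

// Example adapter: a platform exposing a raw joint array in its own
// order gets remapped to the canonical names game code expects.
class ExampleQuestAdapter implements PlatformHandAdapter {
  constructor(private native: { joints: Vec3[]; pinch: number }) {}

  readFrame(): UnifiedHandFrame {
    const names = ["palm", "thumbTip", "indexTip", "middleTip", "ringTip", "pinkyTip"];
    const joints: Record<string, Vec3> = {};
    names.forEach((n, i) => (joints[n] = this.native.joints[i]));
    return { joints, pinchStrength: this.native.pinch, isTracked: true };
  }
}

// Game code depends only on the interface, never on the platform:
function gameUpdate(hands: PlatformHandAdapter): void {
  const frame = hands.readFrame();
  if (frame.isTracked && frame.pinchStrength > 0.9) {
    // grab the currently hovered object
  }
}
```

The payoff is that only the handful of people maintaining the adapters ever touch platform-specific code; everyone else writes against the unified frame.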
The studio's goal when bringing a game to a new platform is for the result to feel as if it had been designed for that platform from the start. The Alchemy VR translation layer helps bridge the gaps between platforms, so even demos feel cohesive and natural while developers concentrate on engaging gameplay.
As hand tracking technology continues to evolve, more refined and nuanced interactions are emerging. Small objects such as dice can now be manipulated precisely, which means hands no longer need to be rendered as chunky or exaggerated: oversized cartoon hands originally communicated to players that fine manipulation was impossible, and improved tracking has made that visual compromise unnecessary.
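A small-object pinch grab shows why this precision matters. In the sketch below, which again reuses the hypothetical `HandPose`, `Vec3`, and `dist` helpers from above, the pickup threshold scales with the object's radius, so a die demands a genuinely tight pinch close to it. The constants are illustrative.

```typescript
interface SmallObject {
  position: Vec3;
  radius: number; // e.g. 0.01 m for a die
}

// Pinch grab: the thumb and index tips must close to roughly the
// object's diameter, and the pinch point must be near the object.
function tryPinchGrab(hand: HandPose, obj: SmallObject): boolean {
  const pinchPoint: Vec3 = {
    x: (hand.thumbTip.x + hand.indexTip.x) / 2,
    y: (hand.thumbTip.y + hand.indexTip.y) / 2,
    z: (hand.thumbTip.z + hand.indexTip.z) / 2,
  };
  const pinching = dist(hand.thumbTip, hand.indexTip) < obj.radius * 2 + 0.01;
  const nearObject = dist(pinchPoint, obj.position) < obj.radius + 0.02;
  return pinching && nearObject;
}
```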
The development of hand tracking technology is an ongoing process. As it matures, it will be exciting to see how it continues to evolve and shape the gaming industry.
"WEBVTTKind: captionsLanguage: enhey everybody it's Norman from tested and today I'm at the annual game developers conference GDC in San Francisco and I think it's going to be an interesting one because this is the first GDC since the launch of both the meta Quest 3 and the Apple Vision Pro both headsets that emphasize mixed reality hand tracking room mapping and new features that game developers now can tap into and so we're going to get a couple demos going chat with some developers see how they're thinking about using these new features and what the future crop of mixed reality games could look like let's head inside I first met up with developer Thomas van bowel creator of the popular VR puzzle game cubism and the upcoming mix reality game laser dance so cubism is a simple puzzle game about putting blocks into shapes so it's super easy to pick up but it gets very hard the further you go as the puzzles become more hard harder um it started as a VR game uh but then as the Quest 2 in through software updates got a lot of new capabilities like hand tracking a pass through keeps them sort of involved with it um and I now primarily really see it as a mixed reality game and um since one of my main goals with the game was to make it as easy and accessible as possible like a great introduction for people who never played VR or who maybe never play games um mix reality and hat tracking are really exciting step to make that barrier a bit lower it doesn't necessarily make the game easier but the there's still a barrier for some people to put on a headset and stay in there for a long time and mix LD just just lowers that barrier a bit because they don't get disconnected with their environments or the people around them and especially for someone like I've demoed cubis of a bunch of times uh during development and after as well um I've seen it be a lot easier with mixity actually because putting on a headset to someone them seeing you and then you explaining the game and helping them where they need it uh is just so much easier in mixed reality and so much more natural and that's still for a lot of people their first experience in the medium is somebody showing them right so trying to make a game that's easy to show to someone uh has been easier with this technology Improvement it's a game that feels so intuitive to to learn a puzzle game fitting shapes into 3D shapes but takes advantage of what VR and mix reality has to offer in terms of the spatial understanding of those shapes yes you're encouraging players to to look around and manipulate you know the shapes themselves uh and I I feel like having the mixed reality aspect to it makes that spatial aspect feel even more powerful absolutely absolutely and I'm still hoping to be able to keep improving it as it goes on I think um Quest 3 introduced a lot of elements that uh make it more comfortable and more interesting in mixity just color pass through by defaults uh but also a scene understanding uh which is a minor component in cubism you can actually like uh you know um mark off your table and then place the puzzle on the table for example but one key component that's still missing is persistence right like uh to a certain extent for me the final form of cubism in mixed reality is it being sort of like a Rubik Cube that sits on your desk next to your work where you can like pick it up quickly do a puzzle and take a break from your work and then get back to work um and for that you want sort of to leave the puzzle where it is when you're done and 
then come back to it a kind of multitasking way for full spatial Computing call absolutely why not but and hand tracking something you also implemented um it works great with controllers I'm curious because I actually prefer playing with controllers I like being able to manipulate with the pinpoint accuracy and also using a mixed M using my thumbs to to rotate the p as well for sure yeah what's been your feedback how do you think about hand tracking versus controllers for a game like Cubone yeah absolutely on uh Quest actually I have like uh very accurate statistics on how many people actually use hand tracking it's about like one in five a bit over that um and so it's true like with controllers you will always have more accuracy and have a little bit more control over placing uh pieces which helps here but I think for me personally the barrier is a little a little smaller to just like pick up and play something when hand tracking actually as a developer myself I used to at the start of hand tracking still like pick up controllers to test the game if I had to like QA a new release or something like this these days I actually just do have been hand tracking cuz it's for me it's easier and there's less steps um to sort of go to the game and so for some people that is also the case it seems that they feel it's more comfortable to pick up and do like short games with with just hand tracking but experiences vary of course for some people controllers will always be more more comfortable well I love that you're supporting both abely you support both and you let Maybe Advanced users who you know are are very familiar with controllers and want that precision and immediate responsiveness to take advantage of that absolutely and for for a lot of people it's a slow game in a sense like you're doing a lot of thinking and not necessarily a lot of moving so for a lot of people that also helps like you're only rarely putting a piece or moving things around if you're really like stuck on a puzzle so that slowness of the game sort of also works pretty well with the hand dring for a lot of users is there any idea of maybe introducing more complex gestures right now it's very intuitive you're you know kind of pinching and grabbing physically manipulating but where do you see do you see hand tracking able to abstract different movements or using it more like a controller I guess right yeah well I think you lose a bit of the magic of hand tracking there um like the difficulty that a lot of people who don't play games uh have with controllers is that they have to learn this abstraction of interaction right for us who who do have like a history of playing games you pick up a controller even something like a a quest controller which is very different from an Xbox controller and still understand what the joystick is is what a trigger is we pick up very quickly the abstraction to do interactions the benefit of hand tracking uh I think is being able to not have to learn any abstractions um you can just grab a thing by grabbing it you can push a button by pushing it you can introduce abstractions you know actually the whole interaction system for Quest is like pinching uh with the system it's a thing that you have to teach and learn so it becomes a barrier to uh to actually interacting with the platform so that's something I didn't want really in uh cubism uh there are some like um some power user uh gestures so you can actually look at a piece and then go like this and sort of call the piece to you right yeah you 
know to make sure that everything is sort of in reach always um and also because people uh learn to pinch in the system menu from play test I saw some people launch the game and try to pinch the firster button so you can actually do this if you hover over a button and pinch it will also accept it even though it's sort of designed for uh pushing so again like supporting different ways of uh doing inputs there helps a lot I'm sure from a user experience as I point you have to find ways to introduce those and and and introduce gestures and more more complicated ways exactly well those are partially there to to be discovered at least those uh the calling gestures for pushing because it's more natural people pick it up more clearly but also the game in on Quest starts with this model to say like hey you should have your lights on and you like grab things with your index in your T the first button to accept that model is saying push to start there was actually something that changed through play testing where explicitly saying push just uh like primes people to yeah you push buttons in this game and then they know to do that for the rest of the game so you teach that sort of uh in a hidden way uh people will look it up it's amazing that this many years into VR yourself to tell people treat it like it's a real thing right like like we're is only a problem if they learn to not treat it like a real thing in the system so it's it's an interesting U debate I think like do you use abstractions where they are maybe a bit more versatile or do you use things that are less precise but more natural and easier to pick up that's uh it's interesting I think you mentioned you're now taking advantage of some of the the more Scene understanding or the more the more uh the spatial understanding of the room that the quest three allows and that's something that you're really tapping into with laser dance so tell me about that absolutely so um laser dance is in cont to cubism is built from the ground up as a mixed reality game the whole reason I'm uh I started building mixed laser dance is that I wanted to try to figure out a sort of game that could only work in mix reality um very quickly like laser dance the idea is that you turn any room of your house into a or obstacle course you place uh two virtual buttons on opposite walls and then you go back and forth pressing a button and every time you press the button a new pattern of lasers uh will sort of spawn in your room where you have to move your whole body around it yeah um the patterns are sort of adaptive uh so they try to adapt to the room that you're in um and uh yeah you really play with your whole body um the the game is currently working with upper body tracking um inside up body tracking Quest three so it will actually know where your your upper body is and where your spine is and where your head is and so you really have to sort of move with your entire body it's a very physical game it's very fun to watch people play CU they look super silly uh sort of dancing around their room um but yeah so it's really from the ground up made from mix reality and that was sort of my whole goal with it like um for any VR game you can sort of ask like does it need to be in VR or not the answer might be no and it's still a really fun game it's just more immersive but when the answer is yes there's usually something special about it right like a beat saver built all around like your motion that's enabled by the the headset that's something really interesting and I think 
for the same uh for mixed reality you can ask the same question like does a VR game need to be in mixed reality the answer might be no like in cubism but it still adds a lot because it adds immersion and it makes it more accessible but I was trying to figure out what's a game where the answer is yes and hope laser dance is is an example of that and it really takes advantage of the tools now you're given now the way I understand it in the quest three allows you to tap into what the user has designed as a mesh you know generated by the cameras right the user will look around and you get this geometry that's built out or they can customize and say you know design my chair is here my sofa is here my walls are here how does your system tap into either of those SE the seat understanding of what the room is so it actually works with both uh the game was first designed around the initial uh system of uh you know drawing boxes around your furniture and manually setting up the room but that's a system that's also very error prone uh on the user side maybe you missed a key piece of furniture maybe you didn't like set up the walls correctly and that can result into faulty gameplay or gameplay where you can't really progress to the level uh because the game is asking you to like move through a piece of furniture or a wall you mark right and so mesh hopefully in the long run will become a more sustainable and easier to set up and a more versatile way of like um setting up your room like this um so having the mesh for example my my apartment in Brussels where I live uh is under a roof so I have these like slanted ceilings yes and so that's something that you would never be able to capture with the old system that always abstract everything into a box cubes EXA exactly and also the walls are always like a straight up a prism let's sayic yes exactly so now I can actually capture that with the mesh and try to anticipate that with how I spawn the laser patterns uh and so it becomes a lot more natural and much more resilient to different types of rooms like this but the game supports quot for this one and the game you're designing because you're generating these two end points and lasers or just these raycast you know beams yeah it doesn't matter if that mesh it can be a little noisy it can be uneven but it still works for the game as long as the the mesh uh is detailed enough in the places that matter uh that is like that'll work that's actually a challenge I have right now like just a mesh API uh and sort of working around making game around mesh is very new at this point uh so there's still a lot of back and forth between uh you know developers working with this and platforms as well so what is needed one of the issues right now is that actually uh the mesh where you walked is can be very high detailed but part of the mesh that's generated on Quest 3 might not be high detail right and so you might get situations where um maybe you scanned a good part of your apartments but you missed the part and it's sort of filled in as like a blank spot yeah we only know what data we get from the mesh and sort of try to make good estimates on this but for example the the game has custom B finding uh there might be like a level that's like a tunnel of lasers that you have to go through that tries to go around your furniture but if there's an empty spot that's actually a wall it might be blocking so that's so something that needs to be figured out in like the API is like how do you get the information to developers to be 
able to handle those cases right yeah cuz you you have to maybe work incomplete data in a variety of infinite combinations of spaces while still you know broadly saying exactly this is the minimum spec for for a room exactly and the US still have with the old system of manually setting up Furniture boxes you have some uh degree of control to like fix issues if they happen like if your mesh capture missed the spot you can actually just put a box where a wall needs to be or where a piece of furniture needs to be uh so that way you can sort of fix some of those issues currently but down the uh down the line like hopefully mesh apis and things like that will become more resilient for these sort of things and provide the right amount of data to developers to be able to handle all the G cases what's the on the extremes what's like the largest environment or the smallest or most complex environments that this will work in um so right now I try to design for the lowest common denominator the minimum requirement is uh to have 3 m as the minimum distance between the the buttons so about 10 ft I think um it doesn't have to be the direct distance just the smallest walking uh that like path finding would find um actually when you're setting up the buttons the first button you place in the wall while you're setting up the second button you'll see a sort of meter that tells you exactly the distance and like the path it's finding in your space so you can directly see what uh what that distance is um that's the minimum and then the game will work that's like enough to sort of Spawn interesting levels in between um I've tried to I worked from like a co-working space which is like a big open Office so I've tried to like scan part of that that's part where like these uh detail issues come up in meshing because like there's an upper limits where if you're scanning and scanning and scanning at a certain point the quest three will just stop um what exactly that limit is I think that still needs to be determined but there it will have like a high detail parts and it will fill in the gaps uh so in in the office for example uh it had a high detail part where it captured all the desks and all the chairs and all that stuff then had just like an empty spot where other desks were but where the captur just sort of stop so that sort of those cases still need to be handled uh through improvements those apis well the smart brilliant thing you've done is that you've designed a game where you want the player to work with the system cuz they're they're only going to get the best experience when they try not to break the system when they when they give you as much information when they play in the most kind of fun and optimal space exactly because you're working together with the player to create a fun experience if they try to break it they're not going to have fun and the best place really is your living room or like your bedroom or like any space in your house which will have a limit to how big it is probably depends you live of um and like the fun thing about mixity is I mean you remember the old days of V when AC came out and people were very excited about like room scale VR and um maybe you remember unseen diplomacy also had laser elements to it but you needed 2 by 3 meters and so the limiting factor was people having enough space being able to push their furniture around one of the cool things about mix elies you can actually make those part of the game play right I uh mentioned like path finding that the game uses so 
if you have furniture in your house the game will just try to find a path around it and make the game safe and playable in that way and so that's fun to be able to incorporate those things uh in gamepl itself one of the first uh places I I tested a very early prototype was also a coworking space and we tested it in a meeting room that had like a big table underneath and so lasers would hit the table right people would like crawl underneath the table for like cover so it's like fun to be able to incorporate all this elements in the game as so it's an exercise game disguised as a yoga game almost love it I love it and now you mentioned unseen diplomacy and all the nonukan geometry I'm sure there are so many ideas that you can tap into you guys are abely I mean you as a single developer really are going forward using all these features it's awesome appreciate yeah well it's great to see you in person Thomas congratulations on showing it here and can't wait for it to come out that's awesome thanks so much for talking thank you Thomas cheers what struck me most during my play test of the laser dance demo was how much the game benefits from the play Space being my actual home the dynamic levels use of room mapping real time occlusion and sense of embodiment all helped blend the game play with a space I already have a strong connection with that same ethos is shared by another mixed reality game I previewed Starship home the first Quest 3 exclusive announced and coming out later this year developer Mark Shram gave me a demo of this cozy Adventure game that like laser dance emphasizes the resident power of playing in your own personal physical space other developers at GDC were showing how they were integrating mixed reality and hand trkking into existing experiences like resolution games Angry Birds VR or getting in on the ground floor of Apple Vision Pro games like Beyond Studios mixed reality endless game runaways and because the quest 3 and apple Vision Pro have different implementations of mixed reality and hand tracking I was also curious how developers working on both platforms reconcile those differences this led me to a conversation about hand tracking with Alchemy Labs the makers of job simulator and vacation simulator who ran me through their own hand tracking implementation that they build on top of meta and Apple's technology so we build on top of meta's base right meta and we build on top of everyone's hand tracking base um but we build our own interaction layer right so yeah we originally experimented back in vacation simulator that you can play it right now on Quest and that was really like how do you take a controller and make it work right it was very much like substituting the one button and then the teleport um but this uh and the CA the cafe demo the demo that you play today is very is much more like what happens when we design for hand tracking first right and a lot of the rules that we had made with controllers don't apply so for instance when we were building we used to be like only objects that are like the size of a baseball or larger right and now actually little objects are really fun because you have individual fingers that you can pick things up so we what our Tech does is we do a lot of kind of like um we just make it work is the best way describe it so you might not notice but your fingers are like stretching and pulling and they're moving around and we're doing a lot of behind the scenes work to make what your brain thinks should be happening match the action in 
the headset way of interacting and it seemed like for a while we were just getting a hang of just being able to grasp and point the basic first order style of interactions you guys are thinking a little bit past that can you talk about what happens what the how complex I guess you can get hand tracking to yeah so once once you have all your individual finger so you could do these second order interactions so you can have like a spray bottle that's that actually sprays or you can have a button that you click with your thumb so you're using your hand to pick something up and then doing this kind of second gesture while it's in your hand we do squeeze bottles so it's like you pick it up lightly and then you squeeze it we have an egg that you can hold and crush and those so anytime there's like something you can do where it's beyond the grasp grasp State it's really fun to like be like oh I got this extra thing I'm like holding something and using my finger fingers to manipulate it and your system recognizes when players achieve that first level whether it's grabbing a cylindrical object or or whatever your your menu of first order interactions and on top of that now you can then do more dextrous things with your thumbs or index fingers yeah yeah and it actually came from a developer making a sponge right he was like oh we should be able to squeeze and ring the sponge out and then we were like he posted a video in our in our slack and we were like what no way and he did a spray bottle and then it all fell out from there so yeah it's uh yeah it's definitely something where you know we're learning it we're in the same state with hand dragging that we were when in like 2015 to kind of go back to things 2015 with HTC Vive when we were working on the Vive pre right the controllers would like actually we had the pre pre we had like the Mr hats and the controllers would just like launch off into the distance or they used to electrocute you a little bit or you know they would have all sorts of crazy interactions and we had all these systems to account for it and what's happened is over time we've like taken those systems out because the tracking's gotten better and the same thing is going to happen with hand tracking right all these things we're doing to account and kind of like make things work as hand tracking improves we're going to have to do less and it's just going to be better today you're using the worst version of hand tracking you will use because tomorrow will think about it yeah yeah and there's also multiple platforms now you know you mentioned you guys are developing for for Vision Pro as well what Apple May provide you I assum is going to be slightly different than what you might get from Quest fundamentals like the the skeleton but you know how how do you think about those two and are are do you have to create a baseline of the shared inputs that you're going to get from both we're actually very good at building we have this layer called Alchemy VR and so we have a number of people who are just really good at taking kind of the raw data that we get or the massage data and like turning it into a unified set so if you're like a developer at Alchemy you actually don't have to think about the platform differences we have a few people who do and make it all feel the same our goal when anytime we bring our game somewhere is to make it feel as if we originally designed it for that platform and so we have a lot of work that we put in to kind of do that so you have this translation layer that 
helps us the the Aesthetics of the games that you're you're making at least you know even the demo I've seen kind of still inherit some of the philosophies like the the larger hands the bigger dials I assume some of those are because of original constraints we back had back in in VR make things easier larger visual what point do those become morein to mirror what we aesthetically see in the real world I think you're going to see as things progress that that the hands are going to start matching the hands you're never going to see a perfect representation of your hands except for on the vision with the cutout because uh if you try to like match people's real hands you get like the weird like have you done experimentations oh yeah yeah it's just like weird veiny hands and you're like this is terrible like I so we're you know algam is always gonna make cartoony approximations but yeah I think that you know in this particular demo you were playing something that had to have those because those are like the features of job simulator but um I think as we go on we're finding that like you can manipulate like dice you can manipulate all sorts of little objects and therefore the hands don't have to be as like chunky right the chunky hands helped communicate something which was you really can't do fine manipulation and we don't need that anymorehey everybody it's Norman from tested and today I'm at the annual game developers conference GDC in San Francisco and I think it's going to be an interesting one because this is the first GDC since the launch of both the meta Quest 3 and the Apple Vision Pro both headsets that emphasize mixed reality hand tracking room mapping and new features that game developers now can tap into and so we're going to get a couple demos going chat with some developers see how they're thinking about using these new features and what the future crop of mixed reality games could look like let's head inside I first met up with developer Thomas van bowel creator of the popular VR puzzle game cubism and the upcoming mix reality game laser dance so cubism is a simple puzzle game about putting blocks into shapes so it's super easy to pick up but it gets very hard the further you go as the puzzles become more hard harder um it started as a VR game uh but then as the Quest 2 in through software updates got a lot of new capabilities like hand tracking a pass through keeps them sort of involved with it um and I now primarily really see it as a mixed reality game and um since one of my main goals with the game was to make it as easy and accessible as possible like a great introduction for people who never played VR or who maybe never play games um mix reality and hat tracking are really exciting step to make that barrier a bit lower it doesn't necessarily make the game easier but the there's still a barrier for some people to put on a headset and stay in there for a long time and mix LD just just lowers that barrier a bit because they don't get disconnected with their environments or the people around them and especially for someone like I've demoed cubis of a bunch of times uh during development and after as well um I've seen it be a lot easier with mixity actually because putting on a headset to someone them seeing you and then you explaining the game and helping them where they need it uh is just so much easier in mixed reality and so much more natural and that's still for a lot of people their first experience in the medium is somebody showing them right so trying to make a game 
that's easy to show to someone uh has been easier with this technology Improvement it's a game that feels so intuitive to to learn a puzzle game fitting shapes into 3D shapes but takes advantage of what VR and mix reality has to offer in terms of the spatial understanding of those shapes yes you're encouraging players to to look around and manipulate you know the shapes themselves uh and I I feel like having the mixed reality aspect to it makes that spatial aspect feel even more powerful absolutely absolutely and I'm still hoping to be able to keep improving it as it goes on I think um Quest 3 introduced a lot of elements that uh make it more comfortable and more interesting in mixity just color pass through by defaults uh but also a scene understanding uh which is a minor component in cubism you can actually like uh you know um mark off your table and then place the puzzle on the table for example but one key component that's still missing is persistence right like uh to a certain extent for me the final form of cubism in mixed reality is it being sort of like a Rubik Cube that sits on your desk next to your work where you can like pick it up quickly do a puzzle and take a break from your work and then get back to work um and for that you want sort of to leave the puzzle where it is when you're done and then come back to it a kind of multitasking way for full spatial Computing call absolutely why not but and hand tracking something you also implemented um it works great with controllers I'm curious because I actually prefer playing with controllers I like being able to manipulate with the pinpoint accuracy and also using a mixed M using my thumbs to to rotate the p as well for sure yeah what's been your feedback how do you think about hand tracking versus controllers for a game like Cubone yeah absolutely on uh Quest actually I have like uh very accurate statistics on how many people actually use hand tracking it's about like one in five a bit over that um and so it's true like with controllers you will always have more accuracy and have a little bit more control over placing uh pieces which helps here but I think for me personally the barrier is a little a little smaller to just like pick up and play something when hand tracking actually as a developer myself I used to at the start of hand tracking still like pick up controllers to test the game if I had to like QA a new release or something like this these days I actually just do have been hand tracking cuz it's for me it's easier and there's less steps um to sort of go to the game and so for some people that is also the case it seems that they feel it's more comfortable to pick up and do like short games with with just hand tracking but experiences vary of course for some people controllers will always be more more comfortable well I love that you're supporting both abely you support both and you let Maybe Advanced users who you know are are very familiar with controllers and want that precision and immediate responsiveness to take advantage of that absolutely and for for a lot of people it's a slow game in a sense like you're doing a lot of thinking and not necessarily a lot of moving so for a lot of people that also helps like you're only rarely putting a piece or moving things around if you're really like stuck on a puzzle so that slowness of the game sort of also works pretty well with the hand dring for a lot of users is there any idea of maybe introducing more complex gestures right now it's very intuitive you're you know kind of 
pinching and grabbing physically manipulating but where do you see do you see hand tracking able to abstract different movements or using it more like a controller I guess right yeah well I think you lose a bit of the magic of hand tracking there um like the difficulty that a lot of people who don't play games uh have with controllers is that they have to learn this abstraction of interaction right for us who who do have like a history of playing games you pick up a controller even something like a a quest controller which is very different from an Xbox controller and still understand what the joystick is is what a trigger is we pick up very quickly the abstraction to do interactions the benefit of hand tracking uh I think is being able to not have to learn any abstractions um you can just grab a thing by grabbing it you can push a button by pushing it you can introduce abstractions you know actually the whole interaction system for Quest is like pinching uh with the system it's a thing that you have to teach and learn so it becomes a barrier to uh to actually interacting with the platform so that's something I didn't want really in uh cubism uh there are some like um some power user uh gestures so you can actually look at a piece and then go like this and sort of call the piece to you right yeah you know to make sure that everything is sort of in reach always um and also because people uh learn to pinch in the system menu from play test I saw some people launch the game and try to pinch the firster button so you can actually do this if you hover over a button and pinch it will also accept it even though it's sort of designed for uh pushing so again like supporting different ways of uh doing inputs there helps a lot I'm sure from a user experience as I point you have to find ways to introduce those and and and introduce gestures and more more complicated ways exactly well those are partially there to to be discovered at least those uh the calling gestures for pushing because it's more natural people pick it up more clearly but also the game in on Quest starts with this model to say like hey you should have your lights on and you like grab things with your index in your T the first button to accept that model is saying push to start there was actually something that changed through play testing where explicitly saying push just uh like primes people to yeah you push buttons in this game and then they know to do that for the rest of the game so you teach that sort of uh in a hidden way uh people will look it up it's amazing that this many years into VR yourself to tell people treat it like it's a real thing right like like we're is only a problem if they learn to not treat it like a real thing in the system so it's it's an interesting U debate I think like do you use abstractions where they are maybe a bit more versatile or do you use things that are less precise but more natural and easier to pick up that's uh it's interesting I think you mentioned you're now taking advantage of some of the the more Scene understanding or the more the more uh the spatial understanding of the room that the quest three allows and that's something that you're really tapping into with laser dance so tell me about that absolutely so um laser dance is in cont to cubism is built from the ground up as a mixed reality game the whole reason I'm uh I started building mixed laser dance is that I wanted to try to figure out a sort of game that could only work in mix reality um very quickly like laser dance the idea is 
that you turn any room of your house into a or obstacle course you place uh two virtual buttons on opposite walls and then you go back and forth pressing a button and every time you press the button a new pattern of lasers uh will sort of spawn in your room where you have to move your whole body around it yeah um the patterns are sort of adaptive uh so they try to adapt to the room that you're in um and uh yeah you really play with your whole body um the the game is currently working with upper body tracking um inside up body tracking Quest three so it will actually know where your your upper body is and where your spine is and where your head is and so you really have to sort of move with your entire body it's a very physical game it's very fun to watch people play CU they look super silly uh sort of dancing around their room um but yeah so it's really from the ground up made from mix reality and that was sort of my whole goal with it like um for any VR game you can sort of ask like does it need to be in VR or not the answer might be no and it's still a really fun game it's just more immersive but when the answer is yes there's usually something special about it right like a beat saver built all around like your motion that's enabled by the the headset that's something really interesting and I think for the same uh for mixed reality you can ask the same question like does a VR game need to be in mixed reality the answer might be no like in cubism but it still adds a lot because it adds immersion and it makes it more accessible but I was trying to figure out what's a game where the answer is yes and hope laser dance is is an example of that and it really takes advantage of the tools now you're given now the way I understand it in the quest three allows you to tap into what the user has designed as a mesh you know generated by the cameras right the user will look around and you get this geometry that's built out or they can customize and say you know design my chair is here my sofa is here my walls are here how does your system tap into either of those SE the seat understanding of what the room is so it actually works with both uh the game was first designed around the initial uh system of uh you know drawing boxes around your furniture and manually setting up the room but that's a system that's also very error prone uh on the user side maybe you missed a key piece of furniture maybe you didn't like set up the walls correctly and that can result into faulty gameplay or gameplay where you can't really progress to the level uh because the game is asking you to like move through a piece of furniture or a wall you mark right and so mesh hopefully in the long run will become a more sustainable and easier to set up and a more versatile way of like um setting up your room like this um so having the mesh for example my my apartment in Brussels where I live uh is under a roof so I have these like slanted ceilings yes and so that's something that you would never be able to capture with the old system that always abstract everything into a box cubes EXA exactly and also the walls are always like a straight up a prism let's sayic yes exactly so now I can actually capture that with the mesh and try to anticipate that with how I spawn the laser patterns uh and so it becomes a lot more natural and much more resilient to different types of rooms like this but the game supports quot for this one and the game you're designing because you're generating these two end points and lasers or just these raycast you 
know beams yeah it doesn't matter if that mesh it can be a little noisy it can be uneven but it still works for the game as long as the the mesh uh is detailed enough in the places that matter uh that is like that'll work that's actually a challenge I have right now like just a mesh API uh and sort of working around making game around mesh is very new at this point uh so there's still a lot of back and forth between uh you know developers working with this and platforms as well so what is needed one of the issues right now is that actually uh the mesh where you walked is can be very high detailed but part of the mesh that's generated on Quest 3 might not be high detail right and so you might get situations where um maybe you scanned a good part of your apartments but you missed the part and it's sort of filled in as like a blank spot yeah we only know what data we get from the mesh and sort of try to make good estimates on this but for example the the game has custom B finding uh there might be like a level that's like a tunnel of lasers that you have to go through that tries to go around your furniture but if there's an empty spot that's actually a wall it might be blocking so that's so something that needs to be figured out in like the API is like how do you get the information to developers to be able to handle those cases right yeah cuz you you have to maybe work incomplete data in a variety of infinite combinations of spaces while still you know broadly saying exactly this is the minimum spec for for a room exactly and the US still have with the old system of manually setting up Furniture boxes you have some uh degree of control to like fix issues if they happen like if your mesh capture missed the spot you can actually just put a box where a wall needs to be or where a piece of furniture needs to be uh so that way you can sort of fix some of those issues currently but down the uh down the line like hopefully mesh apis and things like that will become more resilient for these sort of things and provide the right amount of data to developers to be able to handle all the G cases what's the on the extremes what's like the largest environment or the smallest or most complex environments that this will work in um so right now I try to design for the lowest common denominator the minimum requirement is uh to have 3 m as the minimum distance between the the buttons so about 10 ft I think um it doesn't have to be the direct distance just the smallest walking uh that like path finding would find um actually when you're setting up the buttons the first button you place in the wall while you're setting up the second button you'll see a sort of meter that tells you exactly the distance and like the path it's finding in your space so you can directly see what uh what that distance is um that's the minimum and then the game will work that's like enough to sort of Spawn interesting levels in between um I've tried to I worked from like a co-working space which is like a big open Office so I've tried to like scan part of that that's part where like these uh detail issues come up in meshing because like there's an upper limits where if you're scanning and scanning and scanning at a certain point the quest three will just stop um what exactly that limit is I think that still needs to be determined but there it will have like a high detail parts and it will fill in the gaps uh so in in the office for example uh it had a high detail part where it captured all the desks and all the chairs and all that 
stuff then had just like an empty spot where other desks were but where the captur just sort of stop so that sort of those cases still need to be handled uh through improvements those apis well the smart brilliant thing you've done is that you've designed a game where you want the player to work with the system cuz they're they're only going to get the best experience when they try not to break the system when they when they give you as much information when they play in the most kind of fun and optimal space exactly because you're working together with the player to create a fun experience if they try to break it they're not going to have fun and the best place really is your living room or like your bedroom or like any space in your house which will have a limit to how big it is probably depends you live of um and like the fun thing about mixity is I mean you remember the old days of V when AC came out and people were very excited about like room scale VR and um maybe you remember unseen diplomacy also had laser elements to it but you needed 2 by 3 meters and so the limiting factor was people having enough space being able to push their furniture around one of the cool things about mix elies you can actually make those part of the game play right I uh mentioned like path finding that the game uses so if you have furniture in your house the game will just try to find a path around it and make the game safe and playable in that way and so that's fun to be able to incorporate those things uh in gamepl itself one of the first uh places I I tested a very early prototype was also a coworking space and we tested it in a meeting room that had like a big table underneath and so lasers would hit the table right people would like crawl underneath the table for like cover so it's like fun to be able to incorporate all this elements in the game as so it's an exercise game disguised as a yoga game almost love it I love it and now you mentioned unseen diplomacy and all the nonukan geometry I'm sure there are so many ideas that you can tap into you guys are abely I mean you as a single developer really are going forward using all these features it's awesome appreciate yeah well it's great to see you in person Thomas congratulations on showing it here and can't wait for it to come out that's awesome thanks so much for talking thank you Thomas cheers what struck me most during my play test of the laser dance demo was how much the game benefits from the play Space being my actual home the dynamic levels use of room mapping real time occlusion and sense of embodiment all helped blend the game play with a space I already have a strong connection with that same ethos is shared by another mixed reality game I previewed Starship home the first Quest 3 exclusive announced and coming out later this year developer Mark Shram gave me a demo of this cozy Adventure game that like laser dance emphasizes the resident power of playing in your own personal physical space other developers at GDC were showing how they were integrating mixed reality and hand trkking into existing experiences like resolution games Angry Birds VR or getting in on the ground floor of Apple Vision Pro games like Beyond Studios mixed reality endless game runaways and because the quest 3 and apple Vision Pro have different implementations of mixed reality and hand tracking I was also curious how developers working on both platforms reconcile those differences this led me to a conversation about hand tracking with Alchemy Labs the makers of job 
simulator and vacation simulator who ran me through their own hand tracking implementation that they build on top of meta and Apple's technology so we build on top of meta's base right meta and we build on top of everyone's hand tracking base um but we build our own interaction layer right so yeah we originally experimented back in vacation simulator that you can play it right now on Quest and that was really like how do you take a controller and make it work right it was very much like substituting the one button and then the teleport um but this uh and the CA the cafe demo the demo that you play today is very is much more like what happens when we design for hand tracking first right and a lot of the rules that we had made with controllers don't apply so for instance when we were building we used to be like only objects that are like the size of a baseball or larger right and now actually little objects are really fun because you have individual fingers that you can pick things up so we what our Tech does is we do a lot of kind of like um we just make it work is the best way describe it so you might not notice but your fingers are like stretching and pulling and they're moving around and we're doing a lot of behind the scenes work to make what your brain thinks should be happening match the action in the headset way of interacting and it seemed like for a while we were just getting a hang of just being able to grasp and point the basic first order style of interactions you guys are thinking a little bit past that can you talk about what happens what the how complex I guess you can get hand tracking to yeah so once once you have all your individual finger so you could do these second order interactions so you can have like a spray bottle that's that actually sprays or you can have a button that you click with your thumb so you're using your hand to pick something up and then doing this kind of second gesture while it's in your hand we do squeeze bottles so it's like you pick it up lightly and then you squeeze it we have an egg that you can hold and crush and those so anytime there's like something you can do where it's beyond the grasp grasp State it's really fun to like be like oh I got this extra thing I'm like holding something and using my finger fingers to manipulate it and your system recognizes when players achieve that first level whether it's grabbing a cylindrical object or or whatever your your menu of first order interactions and on top of that now you can then do more dextrous things with your thumbs or index fingers yeah yeah and it actually came from a developer making a sponge right he was like oh we should be able to squeeze and ring the sponge out and then we were like he posted a video in our in our slack and we were like what no way and he did a spray bottle and then it all fell out from there so yeah it's uh yeah it's definitely something where you know we're learning it we're in the same state with hand dragging that we were when in like 2015 to kind of go back to things 2015 with HTC Vive when we were working on the Vive pre right the controllers would like actually we had the pre pre we had like the Mr hats and the controllers would just like launch off into the distance or they used to electrocute you a little bit or you know they would have all sorts of crazy interactions and we had all these systems to account for it and what's happened is over time we've like taken those systems out because the tracking's gotten better and the same thing is going to happen with 
hand tracking right all these things we're doing to account and kind of like make things work as hand tracking improves we're going to have to do less and it's just going to be better today you're using the worst version of hand tracking you will use because tomorrow will think about it yeah yeah and there's also multiple platforms now you know you mentioned you guys are developing for for Vision Pro as well what Apple May provide you I assum is going to be slightly different than what you might get from Quest fundamentals like the the skeleton but you know how how do you think about those two and are are do you have to create a baseline of the shared inputs that you're going to get from both we're actually very good at building we have this layer called Alchemy VR and so we have a number of people who are just really good at taking kind of the raw data that we get or the massage data and like turning it into a unified set so if you're like a developer at Alchemy you actually don't have to think about the platform differences we have a few people who do and make it all feel the same our goal when anytime we bring our game somewhere is to make it feel as if we originally designed it for that platform and so we have a lot of work that we put in to kind of do that so you have this translation layer that helps us the the Aesthetics of the games that you're you're making at least you know even the demo I've seen kind of still inherit some of the philosophies like the the larger hands the bigger dials I assume some of those are because of original constraints we back had back in in VR make things easier larger visual what point do those become morein to mirror what we aesthetically see in the real world I think you're going to see as things progress that that the hands are going to start matching the hands you're never going to see a perfect representation of your hands except for on the vision with the cutout because uh if you try to like match people's real hands you get like the weird like have you done experimentations oh yeah yeah it's just like weird veiny hands and you're like this is terrible like I so we're you know algam is always gonna make cartoony approximations but yeah I think that you know in this particular demo you were playing something that had to have those because those are like the features of job simulator but um I think as we go on we're finding that like you can manipulate like dice you can manipulate all sorts of little objects and therefore the hands don't have to be as like chunky right the chunky hands helped communicate something which was you really can't do fine manipulation and we don't need that anymore\n"