The Art of Warhammer 40K Miniatures: A Display Board with Augmented Reality Capabilities
What we have here is a display board featuring hand-built and hand-painted Warhammer 40K miniatures that people use during gameplay. They also tell various stories with them, because the miniatures all have characters and backstories. What we've done is use augmented reality to let people show, beyond just what you see here, the deeper layers of meaning that might be in there: how these things were built, who the characters are, and what story this whole diorama is trying to tell. So, what we have here is an augmented reality application.
After viewing the diorama, if you want to find out more about, say, this character, you can just tap on him, read about the character, and read what he's done in various games. This is dynamically collected data from gameplay: how he's done, his most memorable moment, and so on. It is augmented reality, so the virtual content is tied to the real world. You can come in from any angle and view it from anywhere, and we've got multiple tablets, so several people can do it at the same time. Where does that start? Is it recognizing the images? What's going on inside?
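The structure of that gameplay data isn't described in the demo; as a rough illustration only, each tappable card might be backed by a record along these lines (the names and fields here are assumptions, not the project's actual schema):

```python
from dataclasses import dataclass


@dataclass
class CharacterCard:
    """Illustrative record for one miniature's info card (hypothetical schema)."""
    name: str
    backstory: str
    games_played: int = 0
    wins: int = 0
    memorable_moment: str = ""
    # Offset of the floating card relative to the image target, in metres.
    offset_from_target: tuple = (0.0, 0.15, 0.0)

    def record_game(self, won: bool, moment: str = "") -> None:
        """Update the dynamically collected gameplay stats after a game."""
        self.games_played += 1
        self.wins += int(won)
        if moment:
            self.memorable_moment = moment


# Example: stats accumulate across games and feed the AR card text.
hero = CharacterCard(name="Captain Example", backstory="A veteran of many campaigns.")
hero.record_game(won=True, moment="Held the line against overwhelming odds.")
print(f"{hero.name}: {hero.wins}/{hero.games_played} wins - {hero.memorable_moment}")
```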
Augmented reality is all about tying virtual content to the real world. Now, there are a few ways to do this, and barring any specialist equipment, what you can do is use optical techniques. In this case we're just using the camera, nothing else. What it does is recognize a two-dimensional image target, in this case the stones in front. This is basically just a square image of stones that we've taught the app to recognize, and in relation to that image we've placed the virtual content.
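Under the hood, optical image-target tracking of this kind typically reports the target's pose (position and orientation) relative to the camera each frame, and every piece of virtual content is authored as an offset in the target's local frame. A minimal sketch of that transform composition, assuming 4x4 homogeneous matrices rather than the app's actual code:

```python
import numpy as np


def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T


def content_pose_in_camera(camera_T_target: np.ndarray,
                           target_T_content: np.ndarray) -> np.ndarray:
    """Place content that is defined relative to the image target into camera space."""
    return camera_T_target @ target_T_content


# The tracker reports where the stone image target sits relative to the camera...
camera_T_target = pose_matrix(np.eye(3), np.array([0.0, -0.1, 0.5]))
# ...and this card is authored to float 15 cm above a miniature on the target.
target_T_card = pose_matrix(np.eye(3), np.array([0.12, 0.15, 0.08]))
card_in_camera = content_pose_in_camera(camera_T_target, target_T_card)
print(card_in_camera[:3, 3])  # where to render the card this frame
```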
As you can see, when we get closer to the individual cards they become more opaque and easier to read, so it's a proximity thing. Basically, we've told the system that, relative to that image, say above that guy's head, there is a piece of information we can tap on; everything is positioned in relation to that image. This of course has some limitations, but as the technology gets better, the same technique, instead of having to recognize a two-dimensional image, can recognize the objects themselves, assuming you've taught it what they look like.
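The proximity fade described here can be as simple as mapping the camera-to-card distance onto an opacity value every frame. A rough sketch of that idea (the distance thresholds are made up for illustration, not taken from the app):

```python
import numpy as np


def card_opacity(camera_pos: np.ndarray, card_pos: np.ndarray,
                 near: float = 0.3, far: float = 1.5) -> float:
    """Fully opaque within `near` metres of the card, fully transparent beyond `far`."""
    distance = float(np.linalg.norm(camera_pos - card_pos))
    # Linearly interpolate opacity between the two thresholds.
    t = (far - distance) / (far - near)
    return float(np.clip(t, 0.0, 1.0))


# Walking the tablet toward a card makes it fade in.
for d in (2.0, 1.0, 0.5, 0.2):
    print(d, round(card_opacity(np.array([0.0, 0.0, d]), np.zeros(3)), 2))
```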
The way it works is that while it is looking for a specific target, you don't need to stay locked onto that target while you're viewing the content. This would be the default screen; now it has acquired a lock, and you can see here that we are no longer looking at the image target, which is what everything is tied to. But because it is using optical flow, we can go off the target; in fact, we can take the content away entirely and read it at our leisure. When we're done, it will inform us that we're off target, and you might have seen the card fly back to where the diorama was. We just go back, reacquire, and get what we want.
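That acquire, lose, and reacquire behaviour amounts to a small state machine driven by whether the target (or the optical-flow track of the surrounding scene) is still in view. A simplified sketch of the logic as described in the demo, not the project's actual code:

```python
from enum import Enum, auto


class ViewerState(Enum):
    SEARCHING = auto()   # default screen, looking for the image target
    LOCKED = auto()      # target acquired, content anchored to the diorama
    DETACHED = auto()    # content taken away for reading, target out of view


class ARViewer:
    def __init__(self) -> None:
        self.state = ViewerState.SEARCHING

    def update(self, target_visible: bool, content_grabbed: bool) -> str:
        if self.state is ViewerState.SEARCHING and target_visible:
            self.state = ViewerState.LOCKED
            return "Lock acquired"
        if self.state is ViewerState.LOCKED and not target_visible:
            if content_grabbed:
                self.state = ViewerState.DETACHED
                return "Off target: keep reading at your leisure"
            self.state = ViewerState.SEARCHING
            return "Off target: card flies back to the diorama"
        if self.state is ViewerState.DETACHED and target_visible:
            self.state = ViewerState.LOCKED
            return "Reacquired"
        return "No change"


viewer = ARViewer()
print(viewer.update(target_visible=True, content_grabbed=False))   # Lock acquired
print(viewer.update(target_visible=False, content_grabbed=True))   # keep reading
print(viewer.update(target_visible=True, content_grabbed=False))   # Reacquired
```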
So, let's say it's really busy in here and we decide we want to go and sit down on the sofa and keep reading. Is that doable? Yes, it is. However, you will have to return it so somebody else can pick up the content when you're done. Of course, there is an offline reading capability, which is basically just like reading a book. But there we go: we can grab this and, assuming we've unlocked this tablet... Is that security? Indeed, it's one of those requirements. And there we go, it'll just fly back to where it came from.
Ideally, you'd be able to tie virtual content to anything in real time, so obviously the sci-fi application would be: I'm interested in something, the information just pops up above it, and I see it on my sci-fi Google Glasses or whatever it might be. The trick is having technology that can do that on the fly, and we're getting there. It's all based on game engine technology: the Unity engine in this case, plus Vuforia, which is an augmented reality engine; those play very nicely together.
And then it was a simple matter of putting together the physical component and all the content that was going to be there, tying it up, and putting it on display. Quite a few images: on average we take about 100 images per object. Again, whatever the camera doesn't see it cannot make a model of, so we try to make sure we get all the hidden parts of the model. Then we throw that into our software and we align the images, so each image needs
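The capture process described here is standard photogrammetry: photograph the object from enough angles that no surface is hidden, then let the alignment software reconstruct the camera positions and build a model. As a rough illustration of how roughly 100 viewpoints might be planned around one object (these numbers are assumptions for the sketch, not the team's actual workflow):

```python
import math


def orbit_viewpoints(radius: float = 0.5,
                     rings: int = 4,
                     shots_per_ring: int = 25) -> list:
    """Generate camera positions on rings orbiting an object at the origin.

    Four rings of 25 shots each gives roughly the 100 images per object
    mentioned above; extra close-ups would still be needed for occluded parts.
    """
    positions = []
    for r in range(rings):
        # Elevation from slightly below horizontal up toward the top of the object.
        elevation = math.radians(-10 + r * (70 / max(rings - 1, 1)))
        for s in range(shots_per_ring):
            azimuth = 2 * math.pi * s / shots_per_ring
            x = radius * math.cos(elevation) * math.cos(azimuth)
            y = radius * math.sin(elevation)
            z = radius * math.cos(elevation) * math.sin(azimuth)
            positions.append((x, y, z))
    return positions


print(len(orbit_viewpoints()))  # 100 planned camera positions
```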
"WEBVTTKind: captionsLanguage: enwhat we have here is a display board that features some handbuilt and hand painted Warhammer Fork Miniatures that people use them during gameplay and they also tell various stories with them because they all have characters and stuff like that what we have G done is use ored reality to enable people to show Beyond just what you see here the deeper layers of meaning that might be in there so how these things were built what the characters are what story this whole diaram is trying to tell so what we have here is an augmented reality application after you viewed the diagam if you want to find out more about say for example this character here can just tap on him read about the character read what he's done in various games so this is dynamically collected data from gameplay how he's done most memorable moment for example it is augmented reality so basically the content is tied to the the virtual content is tied to the real world uh you can come in from any angle view it from anywhere we've got mult tablets so we're going to do it at the same time where does that start is it recognizing the images what's going on inde so uh augmented reality I is all about tying virtual content to the real world now there are a few ways to do this and bearing bearing any uh specialist equipment what you can do is use Optical techniques so in this case we're just using the camera nothing else and what it does is it recognizes uh basically a two-dimensional image Target in this case the stones in front this is basically just a square image of stones that we've taught the app to recognize and based on that in relation to that uh image we've placed the virtual content as you can see when we get closer to the individual cards they become more opaque and easier to read so it's a proximity thing so basically we've told the system that compared to say above that guy's head is a piece of information that we can tap on and everything is in relation to that image this of course has some limitations but as technology gets better the same technology can instead of having to recognize a two-dimensional image can recognize the objects themselves assuming you've taught it what they look like the way it works is while it is looking for a specific Target you don't need to stay locked on to that Target while you're viewing the content this would be the default screen before now it has acquired a lock and see here we are no longer looking at the image Target which is what uh everything's tied to but because it is using optical flow we can go off the Target and find something in fact we can take the content away basically and read it at our leisure and when we're done it will inform us that we're off Target and you might have seen the card flew back to where the diarama was and we just go back reacquire get what we want so read about that guy so let's say it's really busy in here and uh we just decide we want to go and sit down on the sofa and keep reading is that is that doable yes it is however you will have to return to pick up the content of somebody else when you're done of course there is an offline reading capability which is basically just like reading a book but there we go we can grab this and assuming we unlocked this this tablet you can security is it indeed it's one of those requirements and there we go and it'll just fly back to where it came from ideally you'd be able to tie virtual content to anything in real time so the obviously the Sci-Fi applications would be I'm interested in 
something and I can just have the information pop up above it and I see it on my sci-fi Google Glasses or whatever it might be the trick is having technology that can do that on the Fly we're getting there it's all based on uh game engine technology so uh Unity engine in this case then voria such which is a augmented reality engine those play very nicely together and then it was a simple matter of uh putting together the physical component and uh all the content that was going to be there tying it up and putting it on display quite a few images so on average we take about 100 images per object again whatever the camera doesn't see it cannot make a model of so we try to make sure we get all the hidden parts of the model then we throw that into our software and we align the images so each image needswhat we have here is a display board that features some handbuilt and hand painted Warhammer Fork Miniatures that people use them during gameplay and they also tell various stories with them because they all have characters and stuff like that what we have G done is use ored reality to enable people to show Beyond just what you see here the deeper layers of meaning that might be in there so how these things were built what the characters are what story this whole diaram is trying to tell so what we have here is an augmented reality application after you viewed the diagam if you want to find out more about say for example this character here can just tap on him read about the character read what he's done in various games so this is dynamically collected data from gameplay how he's done most memorable moment for example it is augmented reality so basically the content is tied to the the virtual content is tied to the real world uh you can come in from any angle view it from anywhere we've got mult tablets so we're going to do it at the same time where does that start is it recognizing the images what's going on inde so uh augmented reality I is all about tying virtual content to the real world now there are a few ways to do this and bearing bearing any uh specialist equipment what you can do is use Optical techniques so in this case we're just using the camera nothing else and what it does is it recognizes uh basically a two-dimensional image Target in this case the stones in front this is basically just a square image of stones that we've taught the app to recognize and based on that in relation to that uh image we've placed the virtual content as you can see when we get closer to the individual cards they become more opaque and easier to read so it's a proximity thing so basically we've told the system that compared to say above that guy's head is a piece of information that we can tap on and everything is in relation to that image this of course has some limitations but as technology gets better the same technology can instead of having to recognize a two-dimensional image can recognize the objects themselves assuming you've taught it what they look like the way it works is while it is looking for a specific Target you don't need to stay locked on to that Target while you're viewing the content this would be the default screen before now it has acquired a lock and see here we are no longer looking at the image Target which is what uh everything's tied to but because it is using optical flow we can go off the Target and find something in fact we can take the content away basically and read it at our leisure and when we're done it will inform us that we're off Target and you might have seen the 
card flew back to where the diarama was and we just go back reacquire get what we want so read about that guy so let's say it's really busy in here and uh we just decide we want to go and sit down on the sofa and keep reading is that is that doable yes it is however you will have to return to pick up the content of somebody else when you're done of course there is an offline reading capability which is basically just like reading a book but there we go we can grab this and assuming we unlocked this this tablet you can security is it indeed it's one of those requirements and there we go and it'll just fly back to where it came from ideally you'd be able to tie virtual content to anything in real time so the obviously the Sci-Fi applications would be I'm interested in something and I can just have the information pop up above it and I see it on my sci-fi Google Glasses or whatever it might be the trick is having technology that can do that on the Fly we're getting there it's all based on uh game engine technology so uh Unity engine in this case then voria such which is a augmented reality engine those play very nicely together and then it was a simple matter of uh putting together the physical component and uh all the content that was going to be there tying it up and putting it on display quite a few images so on average we take about 100 images per object again whatever the camera doesn't see it cannot make a model of so we try to make sure we get all the hidden parts of the model then we throw that into our software and we align the images so each image needs\n"