TrueNAS - Full Setup Guide for Setting Up Portainer, Containers and Tailscale #Ultimatehomeserver

Connecting to Nextcloud with Tailscale VPN

I've been working through the guide on using the Tailscale VPN, and I'm excited to report significant progress. The goal is to reach my internal network, which sits behind multiple firewalls, from my phone. Once subnet routing was enabled, I disconnected and reconnected Tailscale on the phone, and Nextcloud proved to be accessible. Seeing the prompt confirming that I could also reach LAN IP addresses directly was a relief.

The next step was enabling IP forwarding and network address translation (NAT), both of which the Tailscale documentation lists as requirements for a subnet router. Specifically, I enabled IPv4 forwarding and IP masquerading. This is necessary because when traffic from a Tailscale IP, such as my phone's, hits the internal LAN, the machine answering the request would otherwise send its reply to the default gateway, the router, and the router has no idea how to send traffic back over the Tailscale VPN. NAT rewrites the apparent source address so replies return through the subnet router instead.
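For reference, the forwarding piece looks like this on the Docker VM, per the Tailscale subnet router documentation (tailscaled typically adds its own masquerade rules once routes are advertised, so NAT usually needs no separate step):

    # Enable IPv4 and IPv6 forwarding persistently (from the Tailscale docs)
    echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
    echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
    sudo sysctl -p /etc/sysctl.d/99-tailscale.conf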

Port forwarding on the firewall deserves a mention as well: as a security precaution, I haven't forwarded any ports or made any other change that might expose my network. The good news is that none of that is needed. The Tailscale VPN punches a hole through the intervening firewalls on its own and lets me reach LAN IP addresses from my phone.
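For context, the subnet-router container from the written guide boils down to something like the following docker run. This is a sketch, not the guide's exact compose file: the TS_AUTH_KEY and TS_ROUTES variable names and the 192.168.1.0/24 route come from the guide, the volume paths are examples, and newer images spell the key variable TS_AUTHKEY.

    # Sketch of the Tailscale subnet-router container (values are examples)
    docker run -d --name tailscale \
      --network host \
      --cap-add NET_ADMIN --cap-add NET_RAW \
      -v /dev/net/tun:/dev/net/tun \
      -v /nfs/tailscale:/var/lib/tailscale \
      -e TS_STATE_DIR=/var/lib/tailscale \
      -e TS_AUTH_KEY=tskey-... \
      -e TS_ROUTES=192.168.1.0/24 \
      tailscale/tailscale

The advertised subnet still has to be approved once in the Tailscale admin console before clients can route through it.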

Nextcloud is now accessible, and I can log in with ease. This is a significant milestone, though it's worth repeating that IP forwarding and NAT on the subnet router, not port forwarding, are what make the setup work.

Setting up Tailscale directly on other devices or machines on my network may also make sense, depending on what I want to access and how. For example, if I want to reach a specific application from a desktop, I can simply run the Tailscale client on that machine and accept the advertised routes. The flexibility of this platform is impressive, and I'm eager to explore its capabilities further.
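On a Linux desktop, for instance, joining the tailnet and accepting the advertised LAN routes looks roughly like this (the install one-liner is from the Tailscale docs):

    # Install the client and join the tailnet
    curl -fsSL https://tailscale.com/install.sh | sh
    # Accept the subnet routes advertised by the subnet router
    sudo tailscale up --accept-routes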

Finally, it bears repeating: enabling NAT and IP forwarding is essential when setting up a Tailscale subnet router. The process has a few moving parts, but with those pieces in place, communication between my phone and the internal LAN is seamless, efficient, and secure.

The Benefits of Setting Up Docker Containers with TrueNAS

One of the most exciting aspects of this Nextcloud deployment is the integration between the Docker containers and the TrueNAS system. Because the container volumes live on their own ZFS datasets on the host, my data stays cleanly separated from the container infrastructure, and sensitive information remains protected. The benefits are numerous:

Storing the Docker volumes on the host makes the setup forward compatible and gives me a way to recover if something goes wrong with the virtual machine or its virtual disk file. If an update requires significant changes, I can safely revert the Nextcloud dataset to a previous snapshot without touching the rest of the system.
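As a sketch of what that looks like from the TrueNAS shell, assuming the Nextcloud volume lives on its own dataset (the dataset and snapshot names here are hypothetical):

    # Take a snapshot before an upgrade...
    zfs snapshot tank/nfs-docker/nextcloud@pre-upgrade
    # ...and if the upgrade goes wrong, roll the dataset back
    zfs rollback tank/nfs-docker/nextcloud@pre-upgrade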

Having control over which data is backed up and where it is stored gives me peace of mind. The 3-2-1 backup philosophy, three copies of your data on two different types of media with one copy off-site, has been instrumental in helping me manage my backups efficiently.
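ZFS makes the off-site leg of 3-2-1 straightforward with send/receive. A minimal sketch, assuming a second machine named backup-host with a pool called backuppool (both hypothetical):

    # Replicate an incremental snapshot to an off-site machine
    zfs snapshot tank/nfs-docker@weekly-02
    zfs send -i tank/nfs-docker@weekly-01 tank/nfs-docker@weekly-02 | \
      ssh backup-host zfs receive -F backuppool/nfs-docker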

With the Docker volumes living on TrueNAS datasets, I have complete visibility into what's actually stored. This transparency lets me monitor and maintain my data directly, without relying on opaque or third-party tools.
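Because the volumes are bind mounts on an NFS-exported dataset rather than opaque virtual disk images, checking on the data is as simple as listing it from the host (paths are hypothetical examples):

    # Inspect the datasets backing the Docker volumes
    zfs list -o name,used,mountpoint -r tank/nfs-docker
    # Browse the Nextcloud data directly
    ls /mnt/tank/nfs-docker/nextcloud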

The ability to copy and back up data with ease is a welcome consequence of this layout. Since the volumes are ordinary directories, I can copy them to a USB flash drive without delving into complex configuration or troubleshooting, which takes a lot of the stress out of managing backups.
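For example, copying the Nextcloud volume to a mounted flash drive is a one-liner (mount points are hypothetical):

    # Copy the volume to a USB drive, preserving permissions and timestamps
    rsync -a --progress /nfs/nextcloud/ /media/usb-backup/nextcloud/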

Overall, using Docker containers with TrueNAS provides an excellent level of control, security, and efficiency in managing my data. The integration with Nextcloud further enhances the experience by giving me seamless access to my files across devices.

Final Thoughts

As I wrap up this tutorial, I want to emphasize the importance of enabling network address translation (NAT) and IP forwarding when setting up a Tailscale subnet router. This crucial step ensures that replies are routed properly rather than lost at the default gateway, and it allows Tailscale devices to communicate with machines on my internal LAN.

Setting up Docker containers with TrueNAS provides a remarkable level of security, flexibility, and efficiency in managing data. The benefits of pairing Nextcloud with Docker containers are undeniable: it is a secure, reliable, and forward-compatible way to store and access sensitive information.

Signing off, I'm excited to keep exploring Tailscale and integrating it into my existing systems. If you have any questions or comments about this tutorial, please reach out in the Level One forums.

"WEBVTTKind: captionsLanguage: enthis video is brought to you by fractal and the meshfy 2 light light as in light on your wallet now i've reviewed the meshify 2 previously and should definitely check out that review video the meshfy 2 light issues a few features to save money but the savings are passed on to you the biggest difference that you'll notice aesthetically is the top is different it's riveted in no removability and the top dust filter is magnetic in truth there are a number of differences fractal has supplied a helpful table showing us what the differences are but this still makes a perfectly fine and reasonable platform to build in we still have two front usb 3.0 5 gigabit ports the usb type type-c port is optional in the light version you can add that later but if you're going to use front panel c if you miss it you can add it but don't fear of missing out on the type c it's very rare even that i use it you still get three fans you still have the cool opening door on the front you still have the dust cartridge for the bottom and you still got tons of room for expandability and upgrades on the inside in other words you're not really giving up much at all for a build like this if you're going to use this as the platform that you're going to build you still get rubber grommets and it's still really thick metal construction and it's no secret that i'm a big fan of fractal cases so thanks fractal for sponsoring this video and check out the mesh of i2 lite link below if you're considering something like this for an upcoming build now in our last video we took a look at a trio of possible candidates for your home server including this wonderful little super micro itx system now the reality is that these systems at idle are going to use about 35 watts and most of that is just keeping your spinning rust spinning our configuration is two 20 terabyte mechanical hard drives which i think is plenty for most home users in fact if that's extreme maximum overkill you might think about going with a flash based solution you know four terabyte sata hard drives are more affordable than ever for sure but yeah most of that power goes to keeping your spinning rust spinning but with what we're going to do on the software side when the thing isn't busy the hard drives can actually turn off and then your system is going to idle more like 10 watts 12 15 something like that but the software setup was a little more involved i was going to cram it into the other video but it turns out there's a lot of pitfalls and there's a lot of things that noobs can get frustrated over and i'm going to show you some hard-won knowledge in setting up trunas because out of the box when you set up a virtual machine on trueness it can't access the host yeah i'm going to show you how to fix that it's actually in the level one guide if you want to skip ahead or follow along if you prefer a written guide there is a written guide this video goes with that to hold your hand through the written guide and to explain things as we go along but i've got to go to my desk because i can't do it standing up at this desk when you first log into truenast this is the system that we set up in the last video i've got a dashboard and system information and true nas help and oh we can see that i've got 64 gigs of memory and blah blah blah and the link state is up and everything's good the cpu this is a really low cost very quiet home server 35 watts you can have 20 terabytes of space in a system that's 35 40 watts nominally 12 cores 20 threads 
anyway our storage here is 220 terabyte mechanical hard drives optionally i would recommend the zfs metadata special device i did a whole other write up on that you can store a map of which files are stored where on the mechanical hard drives and stored on nvme you can also store really small files along with that on on nvme so it really dramatically speeds up a lot of these operations if you do a lot of stuff with virtual machines you can also use multiple nvme name spaces it's like a partition but it's a hardware partition so you can get a couple of cheap two terabyte nvme drives and let's say make 512 gigs or a terabyte of the nvme one name space and use a mirror of those for your metadata and then use the rest of the nvme for you know a raid z mirror for your virtual machine storage or your docker container storage or whatever you want to do as awesome as this is it'll monitor your hardware it'll give you little alerts here and say like oh my gosh your hard drive a variable changed and something is going sideways as awesome as this is in terms of data protection and shares and everything else it has an app system applications are not running available applications manage catalogs it has this thing called true charts it is sort of clunky and not great to use and not super transparent about what's happening with your data and the couple of times that i've used it i've managed to trigger bugs so severe that it would generate like 60 000 snapshots after i let it sit for a couple of weeks and that's not a great situation so i just decided to use the virtualization thing and create a virtual machine and that's what the guide walks you through step by step is i created a new data set on our um z pool and i created a folder called isos and i copied the debian 11 iso the reason it's debian 11 is because this is the same kernel version as a host machine it'll probably be fine also we're going to run docker and docker runs fine on debian 11. so copy the iso configure the virtual machine through the gui we can literally just go add to add the virtual machine we can pick linux for the operating system the name i just call it docker stuff system clock local boot method uefi vnc next all that's fine i've got 20 threads 12 cores i just gave it 12 cores one virtual cpu 12 cores one threads per core so it'll just it's 12. like a six or eight would probably be fine here if you're running an i3 like two or four would be fine cpu mode custom i did host pass through this part doesn't really matter gpu uh host passthrough and i gave it eight gigs of ram i got 64 gigs of ram total but i'm going to give it 8 000. 
megs which is not quite eight gigs but hey you know it's fine this type ahci it's totally fine the z-vol location tank actually ended up uh yeah tank is fine and the size i made 100 gigabytes now we're not going to use the virtual machine for storage see the thing that you run into like we step back and we we talk big picture here this virtual machine the temptation is to store the docker stuff inside the virtual machine that means all of your important docker data is going to live in this one virtual machine hard drive file but it's hard to take advantage of the really awesome stuff that zfs has in terms of snapshots and compression to a lesser extent and some of the other features if you can't get at your files from the host operating system as they exist in the virtual machine this puts them all in a virtual hard drive file also it's a little bit ephemeral so everything that's going to be in this virtual machine hard drive file i really don't care about we're going to set up docker to actually connect back to the host and store all of its stuff on the host and that's actually not something that works out of the box with truenas and hasn't worked for two years but that's okay i'm going to show you how to fix that and that's in the guide network adapter attach it to a nick now in your drop down you're going to have something other than br0 and uh that's okay that's the first sign of something a problem that's coming you can just pick whatever you want there do next and you're good to go the installation media you can browse and pick the iso see there's my iso folder i picked debian gpu we're not gonna do anything with gpu this is an intel system so it does actually know that it can pass through the alder lake gt1 but we don't we don't need that we're gonna do next save it's gonna create the virtual machine it'll boot off the cd as you see in the guide and then you have this and you next through and you create your virtual machine i just turned on the ssh server and standard system utilities you don't need a desktop environment i accidentally left it checked you should uncheck it from there we can install docker according to the documentation that's also pretty straightforward you just copy paste some commands from then it's like you can do ip 4a and it's like hey it's bridge it's on the network everything's good oh but wait it you can't ping the host so my uh truenast machine was 192.168.1.1 and this virtual machine was 192.168.1.2 so i could hang everything else on my network but i couldn't ping.1.1 the reason for that is out of the box trueness doesn't create the network stubs the way that it should which is amazing debian used to be like that in 2006. 
it's been fixed since like 2007 or eight so i don't know what has happened here but when you have the networking stack set up this way the kernel will not route traffic from the virtual machine to the actual host just dumps it on the network and it doesn't need to dump it on the network when it's traffic bound for the host because this is broken it means that if you have a file share like nfs or samba you're not going to be able to access the file share that your true nas machine is hosting from any virtual machines that are running on it this is brain dead and there's actually several threads about this in the bug tracker and other stuff so i'm sure that it's on i systems radar and that's probably why some of the moderators in the community over there are a little bit rude when you ask about this further the web gui is actually broken so you're gonna have to use the console to do this this would be would be my recommendation when you plug in a vga monitor on your trunas system it gives you a text console you can get a linux command line thing you can do a lot of stuff there to fix that and that's how we've done it in past tutorials but in this case we can actually just use the little setup menu here that we have to fix it so we'll go into the network settings we'll create a new interface and the interface that we're going to create is called a bridge we're going to call the bridge br0 and then for aliases that's where you enter the ip address it's an ip alias it doesn't really make that super clear and if you're uninitiated you might be wondering wtf but you put the ip address in there you don't want to dhcp on any of this and then for the bridge members you need to pick the interface that is going to be bridged if you have like the super micro system that has two physical nics you can leave one physical nic connected with an ip address and then use the other physical nic to create the bridge and you create the bridge on that neck and then your virtual machines will have the bridge with the second nick and then the uh trueness machine will have an ip address on the first nick there's usually a second ip address in the same subnet of the second nick by the time we start there but this also works just fine with one nick but you can forget using the web gui so when you go through all that and you change it you're going to get some black stuff on the screen that says hey the network change has been applied but i'm going to revert this and roll it back in 46 seconds the problem is that with the bridge it doesn't roll it back correctly your machine will still be inaccessible and it leaves things in a really undefined weird state that i had a hard time fixing from the command line and ultimately i had to reboot the machine so just it's broken in this at this point but go ahead and hit p to save the changes uh and if you drop to a command prompt you'll find that everything is still broken it's not actually forwarding traffic that's okay if you did everything correctly go ahead and just reboot the machine you reboot the machine it will reapply the settings without having the broken settings that it started with and when it applies the correct bridge settings it will actually bring everything up and everything will work correctly all you'll have to do is go back to the gui for virtual machines and edit your virtual machine and change the network part of it you have to go to devices actually you have to go to devices and the network card and edit and change the nick to br zero if it wasn't i found 
it easier to set it up before doing the bridge thing and that's when i discovered oh wow this is still broken and has been broken for years good lord and also like noobs tripping over this like this would be like some sort of crazy black magic if somebody wasn't explaining to you it's like well it's literally taking the traffic from the virtual machine and just dumping it on the wire without considering if the traffic needs to be dumped on the wire because traffic that is bound from the virtual machine to the machine that's hosting the virtual machine doesn't actually go on the wire it's just serviced locally and so now it makes sense there's more to it than that i'm leaving some stuff out a little frustrated because i went through setting up five of these systems uh for this video well for the last video in this video to sort of see how the performance was and that sort of thing and um it was broken in different ways and this is the only way to sort of consistently fix it but once you reboot you should see entered forwarding state everything should work fine should be able to ping everything everything's good now all we have to do is set up the nfs share for trunas see we go to storage and click the vertical.dot and add the data set and we want to go into advanced options and set nfs v4 and pass through you can go back to the debian docker virtual machine and just do apt install nfs common and what i want to do is create a folder at the root that's just for our nfs share and i want to always instruct docker to store its persistent volumes in this share so with docker you have a pretty good ability to separate what's your data from the infrastructure and code that the people that put the containers together provide and so when we're talking about next cloud there's this whole thing with next cloud it's got calendaring and you can upload documents and there's a thing to process mp3s there's a lot of code there's a lot of files but when you upload a document that's yours and if you need to upgrade next cloud or you need to do something with managing it your stuff being entangled with all that other stuff is very undesirable containerization goes a long way to solve that so we can say this folder is user stuff and we want that to survive and persist and we can destroy all of the other stuff but this folder this should persist so with next cloud that's fair www.html which still admittedly is a little bit of a mess there's a lot of stuff that goes in there it's not yours but that's the folder we need to preserve that's going to have data and everything else in it uh as time moves forward that's a volume but anyway we're gonna make a directory slash nfs and we're gonna mount that you know mount 192.168.1.1 so mmt tank slash nfs docker slash nfs now to make sure this is all working we can try to create a file here it's like oh okay now that doesn't work there's permission to not that's okay this is how you troubleshoot that because i wasn't expecting this either well okay it was a little bit i sort of short circuited things here it's kind of a best practice to create a user set of credentials on trueness that uh this virtual machine will connect as and do work as so i'm going to go ahead and create an nfs docker user and set up a random complicated password as a starting point and then the next part is we need to edit the nfs share nfs docker and tell it the map user and map root groups correspond to root and root this is because we do need to change permissions on the nfs share this is not 
strictly speaking of best practice but for a home user setup this is probably okay and i'm not sure if there's a better cleaner way to deal with this also when we created that user it created another folder under nfs docker that was the username that's the user's home folder so i'm going to update the mount on the debian docker host to be slash m t such tanks such nfs docker slash nfsdkr which is the name of the user and just mount that at such nfs now we can touch hello world it creates hello world we're good to go the next thing that we got to do is add the portainer gui to manage our docker system we're going to modify their example command a little bit though see the dash v or tainer underscore data remember what i said without the leading slash it's just going to let docker manage the containers and by default that's stored locally that means stored inside that virtual hard drive file for a virtual machine we don't want that we want to store that on nfs so it's going to be nfs so the change here is pretty easy such as nfs pertainer data also notice that in their documentation they refer to a lot of things as portainer enterprise edition well that's licensed that'll cost you money there's also community edition so pertainer dash ce pertainer dash ee is what we're going to be using for all of our stuff now we mounted that nfs volume manually that means it won't survive a reboot we need to edit our fs tab and set it to mount automatically on reboot and the breakdown of this is basically it's the ip address of your free nas machine or your true nas machine and then the path that you want to mount and where you want to mount it in the vm nfs is the type rw async no a time and hard and double zero this is also the point where i discovered that uh i probably want nfs v4 instead of v3 which is a service you can change in uh the shares section of freeness so if we go to nfs here we go to system system settings and services in nfs and hit the pencil enable nfs v4 nfs v4 should be a little bit more performance and should work pretty well in this use case it's not on by default and that's okay you don't don't feel like you have to change it but i changed it and i did the rest of the tutorial from here in nfs v4 mode so when you get that reboot and just verify that slash nfs is is still working and if it is proceed and if not comment on the level one forms and we'll try to get you sorted you can reconnect via ssh to your uh you know debian docker host and do docker ps a to confirm that you know everything is still running you can rerun hello world if you need to or create or restart your your portainer retainer should now be accessible at the ip address of your vm colon 9443 docker ps dash a will show you that in the uh in the far column you should be greeted by the pertainer gui it'll ask you to set a username and password and then from there we can set up next cloud now next cloud's a little complicated it's got a database component and it's got a storage component and the database component is based on mysql this you can even manage multiple docker hosts like if you had more than one docker host it would show here it's really there's there's so much learning you can do here if you get good with this you can get a job tomorrow it's pretty crazy all right we got portainer set up and i honestly didn't expect it to be this much work to get to where we are it acts it makes sense and it's a good learning exercise and it's it's good that you've done all the work up till now but from here i think it's 
a lot easier and also a lot more forward compatible uh meaning that when you upgrade your trueness system this is going to survive the way that we've set it up with the nfs data sets and everything else that we've gone through so it's pretty easy to go from here to add something like say next cloud so we want to set up next cloud next cloud also has a database and some other dependencies and some volumes and blah blah blah sometimes you read the tutorials it talks about using a docker compose file so we can basically do the same kind of thing in portainer with stacks i put all the next cloud stuff in next cloud at slash nfs and you'll want to set some passwords that are not changeme123 and the username in the database and then the next cloud and so this is going to create multiple containers basically is what's going on there's a container for the database and there's a container for for next cloud because you don't want to run the database inside the next cloud container that's just silly so this you can paste into the stacks area of portainer so there's a web editor here and you just paste it and then you can hit update or deploy and then you'll get these two containers next cloud app and database and you can even hit the little log and see what the log says and so it's like okay we've created the database and everything is good to go this is really super handy these are shortcuts for things you can do from the command line but portainer's gui is very well thought out very well organized and very very good hey back on containers we can see we got the app and the database we can click on the app i kind of want to take a look at the logs because it takes a little while to do the initial setup of next cloud we just have the initializing next cloud 24.0.2.1 which will take a little while so i noticed that it was taking a little while to initialize on our next cloud and normally it doesn't really take that long so it turned out what the problem was that because we're mounting this volume you know over nfs over network file system it was taking a while to initialize because it's next cloud normally writes one file at a time and make sure that that file is written before writing the next one there's a little bit of overhead with that in nfs that's called synchronous rights so i decided to disable synchronous rights using the shell zfs set sync equal disabled tank nfs docker on the whole docker share so that it would basically return immediately without waiting for the right to complete it's a little compounded by the fact that we're on mechanical spinning hard drives this would actually be a lot faster if i had a separate log device or slog device that zfs supports in this particular case it would have helped performance tremendously but you know even though there is a small risk of corruption that's probably fine so i disabled that and within seconds of disabling that the uh the next cloud installation finished and now we can complete it in the browser as we go through the next cloud installation it's pretty straightforward it's pretty point and click there's a lot of great documentation for next cloud they put a really amazing product together you can even self-host collaboration like the whole shared document thing where multiple people are editing something at once well you can do that and it's open source and there's a lot of stuff to love about next cloud this is just one thing you can do all kinds of things with your pertainer setup high hole steam cache a lot of stuff that we covered in 
the past and even more than that there's all kinds of really awesome stuff in the thread the home server thread at the uh level one forums you can set up here on zettle casting with obsidian you could deploy get lab or get the uh get a yeah yeah you could just it's exciting because you can do a lot but the next thing that we need to work on is how you get here we're not gonna expose this to the internet we've done tutorials in the past if you need to do that you can set up a proxy and forward your public-facing internet traffic to the proxy you could set up a virtual machine in lenode where your internet traffic will go there and then it will selectively forward traffic to these containers your internal machines we've done those tutorials before you should definitely check those out come to the level 1 forums if you have trouble finding those but in this case we're going to do something different instead of exposing this to the internet we're going to set up tail scale tail scale is a vpn service that's built on wire guard but it's meant that you install the tail scale client on all of your devices and then all of your devices are sort of participating in this cloud vpn thing the traffic doesn't actually travel to the internet and back most of the time but it is a really convenient way that you can expose all this stuff to one or more collections of devices but without opening it up to the internet at large it is a deny by default setup so let's get our tail scale docker container set up so that everything can connect to everything else oh and you can use it to expose other stuff on your lan so we're setting this up so that you can get to the docker containers but it'll also be able to get to your trueness machine if you want it to and anything else on your network according to whatever security policy you set so before we set up the container we actually need to set up tail scale itself you can just use you know use tail scale log in use social login and you get this screen by default if you've never used it before which is let's add a device and you click up here and it's just it's kind of worthless what you actually need to do is to go to this screen it's linked to in the in the forum and you want to generate an auth key we're going to use this off key to set up the docker container once you've got your off key you can head on over to fortener and we're ready to copy paste that docker compose yml into the stack area all right once you paste in from you know the the guide on the level one forum or this is what you should be seeing i've updated the volumes to use our nfs volume and you've got this ts underscore auth underscore key very important i've also set up the route so you know 102.168.1.0 you know maybe another network makes sense for your configuration but uh the off key is what you get from the gui on the other system so you'll want to paste that in here and set up the container once i had the tail scale stack going i went to the tail scale website to try to make sure that i could see the client and at first i couldn't see the client so what i ended up doing was running from the command line a command to use the off key and connect and then i could see in the logs in portainer that okay yes it's established it's connected it's on my actual public ip address this will work behind carrier grade matt so if you don't have a real ip address it'll maybe give you some clues in the portainer logs but the important thing was from then on i was connected and so the next thing to do now 
that that's connected is also get my phone connected so i use google play to install the tailscale app and then i signed into the tailscale app um using my credentials and when i did that you know my phone showed up on the tailskill website and then i connected and when i went to load it gave me an ip address an internal ip address sort of kind of 100 dot that's like the carrier grade that thing so 100 dot something and i could access next cloud but it says oh this is a you're accessing it from a name that um is not the normal name you should edit configsettings.php if you want to allow it from here but really what i want to do is route the whole subnet the way that you do that is you go into the gui for tail scale and that's in the guide on the level one form so you just toggle a thing that says yes this subnet should be something that you route so once i did that and then i disconnected and reconnected on my phone boom next cloud i can log in on my phone with next cloud i haven't forwarded any ports through the firewall i haven't done anything else this is just the tail scale vpn punching a hole through the other firewalls that exist and my my phone having a firewall giving me a little prompt here and saying okay yes you can access that but more importantly i can access other lan ip addresses so no matter what i've mapped in terms of ports with portainer and docker those are going to be accessible with their lan ip address and you can even jump through a couple more hoops and get uh everything on your lan routing through the tail scale connection or you can set up a tail scale connection on a desktop computer or other machines on your network depending on what it is you want to access or how you want to access it but that's a maybe a little bit more of a write up for another day as you go through the guide on the forum one other thing that i'll mention is you do have to enable network address translation and ip forwarding um that is part of the tail scale documentation there's a link to it in the guide but minimally you'll have to enable ipv4 forwarding an ipv640 probably because i'm not sure how it works internally but also ip masquerading or network address translation nat basically the reason for that is because when traffic from your tail scale ip like say traffic from my phone hits the internal lan if the uh if there's no network address translation going on changing the apparent ip address that the traffic is coming from the machine that's responding to the request from my phone we'll try to send it through the default gateway your router to the internet and that of course is going to have no idea how to send traffic over tail skill so if this is new territory for you i'm sorry you're building a really complicated internal model of like the networking and how the networking bits work together and how all the moving moving parts come together and it's sort of complicated and i'm sorry for that and this has also been a little bit more of a hairy uh introduction than i really wanted sorry for that but at this point you should be up and running with tail scale and you have next cloud kind of as proof but you can also go hog wild adding uh docker containers to your freenas system and because you're using nfs storage it is a way you're doing it in kind of a way that's forward compatible so if something bad happens to your virtual machine or your virtual machine file as long as you have that zfs data set you can roll back and do whatever it is that you need to do you could even create 
more data sets if you had a uh you know a lot of stuff in your next cloud container you can make the next cloud volume its own data set so a data set within a data set zfs doesn't care it's not really within zfs does some some funky stuff to just make its own standalone data set but it's kind of organized hierarchically but it's not actually organized hierarchically for you know how it's uh how it's actually set up under the hood but having your own data set that way means you can start doing fun things like snapshots and controls so you could create a snapshot of just next cloud and when you do an update or major you know thing if you needed to roll back you totally could do that and that's the volume it's not the whole container you don't need all the other stuff it's just your data same with the database aspect of it same with everything else so i don't know about you but i sleep better at night having that that uh that volume and that data set and then also it helps you pick and choose which things you want to back up there's a this three two one backup philosophy which if you haven't heard that before you should check that out because this doesn't cover you in all scenarios 321 does the best job of covering you in all scenarios for backup and having the the docker volumes accessible and browsable and you can see what data is in them pretty plainly and make copies and backups as you need to you can copy the stuff to a usb flash drive pretty easily and with other setups that would be a lot more complicated i like being able to get my hands around that kind of stuff really easily without having to think about it too much or do too much digging so hopefully this tutorial is useful definitely give it a thumbs up or a comment or something if it was you know really good for you or whatever um one of this is level one i'm signing out you can find me in the level one forums and hey you know as time goes on the guide will be updated so check that out first sort of read through it all before embarking signing out you can find me in the level one forumsthis video is brought to you by fractal and the meshfy 2 light light as in light on your wallet now i've reviewed the meshify 2 previously and should definitely check out that review video the meshfy 2 light issues a few features to save money but the savings are passed on to you the biggest difference that you'll notice aesthetically is the top is different it's riveted in no removability and the top dust filter is magnetic in truth there are a number of differences fractal has supplied a helpful table showing us what the differences are but this still makes a perfectly fine and reasonable platform to build in we still have two front usb 3.0 5 gigabit ports the usb type type-c port is optional in the light version you can add that later but if you're going to use front panel c if you miss it you can add it but don't fear of missing out on the type c it's very rare even that i use it you still get three fans you still have the cool opening door on the front you still have the dust cartridge for the bottom and you still got tons of room for expandability and upgrades on the inside in other words you're not really giving up much at all for a build like this if you're going to use this as the platform that you're going to build you still get rubber grommets and it's still really thick metal construction and it's no secret that i'm a big fan of fractal cases so thanks fractal for sponsoring this video and check out the mesh of i2 lite link below if 
you're considering something like this for an upcoming build now in our last video we took a look at a trio of possible candidates for your home server including this wonderful little super micro itx system now the reality is that these systems at idle are going to use about 35 watts and most of that is just keeping your spinning rust spinning our configuration is two 20 terabyte mechanical hard drives which i think is plenty for most home users in fact if that's extreme maximum overkill you might think about going with a flash based solution you know four terabyte sata hard drives are more affordable than ever for sure but yeah most of that power goes to keeping your spinning rust spinning but with what we're going to do on the software side when the thing isn't busy the hard drives can actually turn off and then your system is going to idle more like 10 watts 12 15 something like that but the software setup was a little more involved i was going to cram it into the other video but it turns out there's a lot of pitfalls and there's a lot of things that noobs can get frustrated over and i'm going to show you some hard-won knowledge in setting up trunas because out of the box when you set up a virtual machine on trueness it can't access the host yeah i'm going to show you how to fix that it's actually in the level one guide if you want to skip ahead or follow along if you prefer a written guide there is a written guide this video goes with that to hold your hand through the written guide and to explain things as we go along but i've got to go to my desk because i can't do it standing up at this desk when you first log into truenast this is the system that we set up in the last video i've got a dashboard and system information and true nas help and oh we can see that i've got 64 gigs of memory and blah blah blah and the link state is up and everything's good the cpu this is a really low cost very quiet home server 35 watts you can have 20 terabytes of space in a system that's 35 40 watts nominally 12 cores 20 threads anyway our storage here is 220 terabyte mechanical hard drives optionally i would recommend the zfs metadata special device i did a whole other write up on that you can store a map of which files are stored where on the mechanical hard drives and stored on nvme you can also store really small files along with that on on nvme so it really dramatically speeds up a lot of these operations if you do a lot of stuff with virtual machines you can also use multiple nvme name spaces it's like a partition but it's a hardware partition so you can get a couple of cheap two terabyte nvme drives and let's say make 512 gigs or a terabyte of the nvme one name space and use a mirror of those for your metadata and then use the rest of the nvme for you know a raid z mirror for your virtual machine storage or your docker container storage or whatever you want to do as awesome as this is it'll monitor your hardware it'll give you little alerts here and say like oh my gosh your hard drive a variable changed and something is going sideways as awesome as this is in terms of data protection and shares and everything else it has an app system applications are not running available applications manage catalogs it has this thing called true charts it is sort of clunky and not great to use and not super transparent about what's happening with your data and the couple of times that i've used it i've managed to trigger bugs so severe that it would generate like 60 000 snapshots after i let it sit for a couple 
of weeks and that's not a great situation so i just decided to use the virtualization thing and create a virtual machine and that's what the guide walks you through step by step is i created a new data set on our um z pool and i created a folder called isos and i copied the debian 11 iso the reason it's debian 11 is because this is the same kernel version as a host machine it'll probably be fine also we're going to run docker and docker runs fine on debian 11. so copy the iso configure the virtual machine through the gui we can literally just go add to add the virtual machine we can pick linux for the operating system the name i just call it docker stuff system clock local boot method uefi vnc next all that's fine i've got 20 threads 12 cores i just gave it 12 cores one virtual cpu 12 cores one threads per core so it'll just it's 12. like a six or eight would probably be fine here if you're running an i3 like two or four would be fine cpu mode custom i did host pass through this part doesn't really matter gpu uh host passthrough and i gave it eight gigs of ram i got 64 gigs of ram total but i'm going to give it 8 000. megs which is not quite eight gigs but hey you know it's fine this type ahci it's totally fine the z-vol location tank actually ended up uh yeah tank is fine and the size i made 100 gigabytes now we're not going to use the virtual machine for storage see the thing that you run into like we step back and we we talk big picture here this virtual machine the temptation is to store the docker stuff inside the virtual machine that means all of your important docker data is going to live in this one virtual machine hard drive file but it's hard to take advantage of the really awesome stuff that zfs has in terms of snapshots and compression to a lesser extent and some of the other features if you can't get at your files from the host operating system as they exist in the virtual machine this puts them all in a virtual hard drive file also it's a little bit ephemeral so everything that's going to be in this virtual machine hard drive file i really don't care about we're going to set up docker to actually connect back to the host and store all of its stuff on the host and that's actually not something that works out of the box with truenas and hasn't worked for two years but that's okay i'm going to show you how to fix that and that's in the guide network adapter attach it to a nick now in your drop down you're going to have something other than br0 and uh that's okay that's the first sign of something a problem that's coming you can just pick whatever you want there do next and you're good to go the installation media you can browse and pick the iso see there's my iso folder i picked debian gpu we're not gonna do anything with gpu this is an intel system so it does actually know that it can pass through the alder lake gt1 but we don't we don't need that we're gonna do next save it's gonna create the virtual machine it'll boot off the cd as you see in the guide and then you have this and you next through and you create your virtual machine i just turned on the ssh server and standard system utilities you don't need a desktop environment i accidentally left it checked you should uncheck it from there we can install docker according to the documentation that's also pretty straightforward you just copy paste some commands from then it's like you can do ip 4a and it's like hey it's bridge it's on the network everything's good oh but wait it you can't ping the host so my uh truenast 
machine was 192.168.1.1 and this virtual machine was 192.168.1.2 so i could hang everything else on my network but i couldn't ping.1.1 the reason for that is out of the box trueness doesn't create the network stubs the way that it should which is amazing debian used to be like that in 2006. it's been fixed since like 2007 or eight so i don't know what has happened here but when you have the networking stack set up this way the kernel will not route traffic from the virtual machine to the actual host just dumps it on the network and it doesn't need to dump it on the network when it's traffic bound for the host because this is broken it means that if you have a file share like nfs or samba you're not going to be able to access the file share that your true nas machine is hosting from any virtual machines that are running on it this is brain dead and there's actually several threads about this in the bug tracker and other stuff so i'm sure that it's on i systems radar and that's probably why some of the moderators in the community over there are a little bit rude when you ask about this further the web gui is actually broken so you're gonna have to use the console to do this this would be would be my recommendation when you plug in a vga monitor on your trunas system it gives you a text console you can get a linux command line thing you can do a lot of stuff there to fix that and that's how we've done it in past tutorials but in this case we can actually just use the little setup menu here that we have to fix it so we'll go into the network settings we'll create a new interface and the interface that we're going to create is called a bridge we're going to call the bridge br0 and then for aliases that's where you enter the ip address it's an ip alias it doesn't really make that super clear and if you're uninitiated you might be wondering wtf but you put the ip address in there you don't want to dhcp on any of this and then for the bridge members you need to pick the interface that is going to be bridged if you have like the super micro system that has two physical nics you can leave one physical nic connected with an ip address and then use the other physical nic to create the bridge and you create the bridge on that neck and then your virtual machines will have the bridge with the second nick and then the uh trueness machine will have an ip address on the first nick there's usually a second ip address in the same subnet of the second nick by the time we start there but this also works just fine with one nick but you can forget using the web gui so when you go through all that and you change it you're going to get some black stuff on the screen that says hey the network change has been applied but i'm going to revert this and roll it back in 46 seconds the problem is that with the bridge it doesn't roll it back correctly your machine will still be inaccessible and it leaves things in a really undefined weird state that i had a hard time fixing from the command line and ultimately i had to reboot the machine so just it's broken in this at this point but go ahead and hit p to save the changes uh and if you drop to a command prompt you'll find that everything is still broken it's not actually forwarding traffic that's okay if you did everything correctly go ahead and just reboot the machine you reboot the machine it will reapply the settings without having the broken settings that it started with and when it applies the correct bridge settings it will actually bring everything up and everything 
will work correctly all you'll have to do is go back to the gui for virtual machines and edit your virtual machine and change the network part of it you have to go to devices actually you have to go to devices and the network card and edit and change the nick to br zero if it wasn't i found it easier to set it up before doing the bridge thing and that's when i discovered oh wow this is still broken and has been broken for years good lord and also like noobs tripping over this like this would be like some sort of crazy black magic if somebody wasn't explaining to you it's like well it's literally taking the traffic from the virtual machine and just dumping it on the wire without considering if the traffic needs to be dumped on the wire because traffic that is bound from the virtual machine to the machine that's hosting the virtual machine doesn't actually go on the wire it's just serviced locally and so now it makes sense there's more to it than that i'm leaving some stuff out a little frustrated because i went through setting up five of these systems uh for this video well for the last video in this video to sort of see how the performance was and that sort of thing and um it was broken in different ways and this is the only way to sort of consistently fix it but once you reboot you should see entered forwarding state everything should work fine should be able to ping everything everything's good now all we have to do is set up the nfs share for trunas see we go to storage and click the vertical.dot and add the data set and we want to go into advanced options and set nfs v4 and pass through you can go back to the debian docker virtual machine and just do apt install nfs common and what i want to do is create a folder at the root that's just for our nfs share and i want to always instruct docker to store its persistent volumes in this share so with docker you have a pretty good ability to separate what's your data from the infrastructure and code that the people that put the containers together provide and so when we're talking about next cloud there's this whole thing with next cloud it's got calendaring and you can upload documents and there's a thing to process mp3s there's a lot of code there's a lot of files but when you upload a document that's yours and if you need to upgrade next cloud or you need to do something with managing it your stuff being entangled with all that other stuff is very undesirable containerization goes a long way to solve that so we can say this folder is user stuff and we want that to survive and persist and we can destroy all of the other stuff but this folder this should persist so with next cloud that's fair www.html which still admittedly is a little bit of a mess there's a lot of stuff that goes in there it's not yours but that's the folder we need to preserve that's going to have data and everything else in it uh as time moves forward that's a volume but anyway we're gonna make a directory slash nfs and we're gonna mount that you know mount 192.168.1.1 so mmt tank slash nfs docker slash nfs now to make sure this is all working we can try to create a file here it's like oh okay now that doesn't work there's permission to not that's okay this is how you troubleshoot that because i wasn't expecting this either well okay it was a little bit i sort of short circuited things here it's kind of a best practice to create a user set of credentials on trueness that uh this virtual machine will connect as and do work as so i'm going to go ahead and create an nfs 
docker user and set up a random complicated password as a starting point and then the next part is we need to edit the nfs share nfs docker and tell it the map user and map root groups correspond to root and root this is because we do need to change permissions on the nfs share this is not strictly speaking of best practice but for a home user setup this is probably okay and i'm not sure if there's a better cleaner way to deal with this also when we created that user it created another folder under nfs docker that was the username that's the user's home folder so i'm going to update the mount on the debian docker host to be slash m t such tanks such nfs docker slash nfsdkr which is the name of the user and just mount that at such nfs now we can touch hello world it creates hello world we're good to go the next thing that we got to do is add the portainer gui to manage our docker system we're going to modify their example command a little bit though see the dash v or tainer underscore data remember what i said without the leading slash it's just going to let docker manage the containers and by default that's stored locally that means stored inside that virtual hard drive file for a virtual machine we don't want that we want to store that on nfs so it's going to be nfs so the change here is pretty easy such as nfs pertainer data also notice that in their documentation they refer to a lot of things as portainer enterprise edition well that's licensed that'll cost you money there's also community edition so pertainer dash ce pertainer dash ee is what we're going to be using for all of our stuff now we mounted that nfs volume manually that means it won't survive a reboot we need to edit our fs tab and set it to mount automatically on reboot and the breakdown of this is basically it's the ip address of your free nas machine or your true nas machine and then the path that you want to mount and where you want to mount it in the vm nfs is the type rw async no a time and hard and double zero this is also the point where i discovered that uh i probably want nfs v4 instead of v3 which is a service you can change in uh the shares section of freeness so if we go to nfs here we go to system system settings and services in nfs and hit the pencil enable nfs v4 nfs v4 should be a little bit more performance and should work pretty well in this use case it's not on by default and that's okay you don't don't feel like you have to change it but i changed it and i did the rest of the tutorial from here in nfs v4 mode so when you get that reboot and just verify that slash nfs is is still working and if it is proceed and if not comment on the level one forms and we'll try to get you sorted you can reconnect via ssh to your uh you know debian docker host and do docker ps a to confirm that you know everything is still running you can rerun hello world if you need to or create or restart your your portainer retainer should now be accessible at the ip address of your vm colon 9443 docker ps dash a will show you that in the uh in the far column you should be greeted by the pertainer gui it'll ask you to set a username and password and then from there we can set up next cloud now next cloud's a little complicated it's got a database component and it's got a storage component and the database component is based on mysql this you can even manage multiple docker hosts like if you had more than one docker host it would show here it's really there's there's so much learning you can do here if you get good with this you can 
get a job tomorrow it's pretty crazy all right we got portainer set up and i honestly didn't expect it to be this much work to get to where we are it acts it makes sense and it's a good learning exercise and it's it's good that you've done all the work up till now but from here i think it's a lot easier and also a lot more forward compatible uh meaning that when you upgrade your trueness system this is going to survive the way that we've set it up with the nfs data sets and everything else that we've gone through so it's pretty easy to go from here to add something like say next cloud so we want to set up next cloud next cloud also has a database and some other dependencies and some volumes and blah blah blah sometimes you read the tutorials it talks about using a docker compose file so we can basically do the same kind of thing in portainer with stacks i put all the next cloud stuff in next cloud at slash nfs and you'll want to set some passwords that are not changeme123 and the username in the database and then the next cloud and so this is going to create multiple containers basically is what's going on there's a container for the database and there's a container for for next cloud because you don't want to run the database inside the next cloud container that's just silly so this you can paste into the stacks area of portainer so there's a web editor here and you just paste it and then you can hit update or deploy and then you'll get these two containers next cloud app and database and you can even hit the little log and see what the log says and so it's like okay we've created the database and everything is good to go this is really super handy these are shortcuts for things you can do from the command line but portainer's gui is very well thought out very well organized and very very good hey back on containers we can see we got the app and the database we can click on the app i kind of want to take a look at the logs because it takes a little while to do the initial setup of next cloud we just have the initializing next cloud 24.0.2.1 which will take a little while so i noticed that it was taking a little while to initialize on our next cloud and normally it doesn't really take that long so it turned out what the problem was that because we're mounting this volume you know over nfs over network file system it was taking a while to initialize because it's next cloud normally writes one file at a time and make sure that that file is written before writing the next one there's a little bit of overhead with that in nfs that's called synchronous rights so i decided to disable synchronous rights using the shell zfs set sync equal disabled tank nfs docker on the whole docker share so that it would basically return immediately without waiting for the right to complete it's a little compounded by the fact that we're on mechanical spinning hard drives this would actually be a lot faster if i had a separate log device or slog device that zfs supports in this particular case it would have helped performance tremendously but you know even though there is a small risk of corruption that's probably fine so i disabled that and within seconds of disabling that the uh the next cloud installation finished and now we can complete it in the browser as we go through the next cloud installation it's pretty straightforward it's pretty point and click there's a lot of great documentation for next cloud they put a really amazing product together you can even self-host collaboration like the whole shared 
You can even self-host collaboration, the whole shared-document thing where multiple people are editing something at once; you can do that, and it's open source, and there's a lot of stuff to love about Nextcloud. And this is just one thing. You can do all kinds of things with your Portainer setup: Pi-hole, a Steam cache, a lot of stuff we've covered in the past, and even more than that. There's all kinds of really awesome stuff in the home server thread on the Level1 forums that you can set up from here: a Zettelkasten with Obsidian, or you could deploy GitLab or Gitea. It's exciting, because you can do a lot.

But the next thing we need to work on is how you get to all of this. We're not going to expose it to the internet. We've done tutorials in the past if you need to do that: you can set up a proxy and forward your public-facing internet traffic to the proxy, or you could set up a virtual machine at Linode, have your internet traffic go there, and then selectively forward traffic to these containers on your internal machines. We've done those tutorials before, and you should definitely check them out; come to the Level1 forums if you have trouble finding them. In this case, though, we're going to do something different. Instead of exposing this to the internet, we're going to set up Tailscale.

Tailscale is a VPN service built on WireGuard, and the idea is that you install the Tailscale client on all of your devices, and then all of your devices participate in this cloud VPN thing. The traffic doesn't actually travel out to the internet and back most of the time, but it's a really convenient way to expose all this stuff to one or more collections of devices without opening it up to the internet at large, and it's a deny-by-default setup. So let's get our Tailscale Docker container set up so that everything can connect to everything else. Oh, and you can use it to expose other stuff on your LAN too: we're setting this up so you can get to the Docker containers, but it'll also be able to get to your TrueNAS machine if you want it to, and anything else on your network, according to whatever security policy you set.

Before we set up the container, we actually need to set up Tailscale itself. You can just use a social login for Tailscale, and if you've never used it before, you get the default "let's add a device" screen with a button up at the top, which is kind of worthless for our purposes. What you actually need to do is go to the auth keys screen (it's linked in the forum guide) and generate an auth key; we're going to use this auth key to set up the Docker container. Once you've got your auth key, head on over to Portainer, and we're ready to copy-paste that docker-compose YAML into the Stacks area. Once you paste in from the guide on the Level1 forum, this is what you should be seeing: I've updated the volumes to use our NFS volume, and you've got this TS_AUTH_KEY, which is very important. I've also set up the route, 192.168.1.0/24 in my case; maybe another network makes sense for your configuration. The auth key is what you got from the GUI on the other system, so paste that in here and set up the container.

Once I had the Tailscale stack going, I went to the Tailscale website to make sure I could see the client, and at first I couldn't. What I ended up doing was running a command from the command line to use the auth key and connect, and then I could see in the Portainer logs that, okay, yes, it's established, it's connected, and it's on my actual public IP address.
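The authoritative compose file is the one in the forum guide; here's a minimal sketch of the same idea, assuming the official tailscale/tailscale image (that image documents TS_AUTHKEY, TS_ROUTES, and TS_STATE_DIR; the guide's file and its TS_AUTH_KEY naming may differ):

    version: "3"
    services:
      tailscale:
        image: tailscale/tailscale:latest
        hostname: docker-vm                        # how the node appears in the admin console
        network_mode: host                         # so it can reach and advertise the LAN
        cap_add:
          - NET_ADMIN
        volumes:
          - /nfs/tailscale:/var/lib/tailscale      # keep state on the NFS volume
          - /dev/net/tun:/dev/net/tun
        environment:
          - TS_AUTHKEY=tskey-xxxxxxxx              # paste your generated auth key
          - TS_ROUTES=192.168.1.0/24               # the subnet to advertise
          - TS_STATE_DIR=/var/lib/tailscale
        restart: unless-stopped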
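The connect-by-hand step was along these lines; a sketch, assuming the container ended up named tailscale and using the same key and subnet as above:

    # run tailscale up inside the container, handing it the auth key
    docker exec tailscale tailscale up \
        --authkey=tskey-xxxxxxxx \
        --advertise-routes=192.168.1.0/24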
This will also work behind carrier-grade NAT, so if you don't have a real public IP address, the Portainer logs will maybe give you some clues; the important thing was that from then on I was connected.

The next thing to do, now that that's connected, is to get my phone connected too. I used Google Play to install the Tailscale app, signed in with my credentials, and my phone showed up on the Tailscale website. I connected, and it gave me an internal IP address, a 100.x.x.x address, which is that carrier-grade NAT range. I could access Nextcloud, but it said: you're accessing this from a name that is not the normal name; you should edit the settings in config.php if you want to allow access from here. But what I really want to do is route the whole subnet. The way you do that is in the Tailscale admin GUI (it's in the guide on the Level1 forum): you just toggle a thing that says yes, this subnet should be routed. Once I did that, and then disconnected and reconnected on my phone: boom, Nextcloud. I can log in on my phone.

I haven't forwarded any ports through the firewall, and I haven't done anything else; this is just the Tailscale VPN punching a hole through the other firewalls that exist, and my phone's firewall giving me a little prompt saying okay, yes, you can access that. More importantly, I can access other LAN IP addresses, so no matter what I've mapped in terms of ports with Portainer and Docker, those are going to be accessible at their LAN IP addresses. You can even jump through a couple more hoops and get everything on your LAN routing through the Tailscale connection, or you can set up a Tailscale connection on a desktop computer or other machines on your network, depending on what you want to access and how you want to access it; but that's maybe a little more of a write-up for another day.

As you go through the guide on the forum, one other thing I'll mention is that you do have to enable network address translation and IP forwarding; that's part of the Tailscale documentation, and there's a link to it in the guide. Minimally, you'll have to enable IPv4 forwarding (and probably IPv6 forwarding too; I'm not sure how it works internally), plus IP masquerading, which is network address translation, NAT. The reason, basically, is that when traffic from your Tailscale IP, say traffic from my phone, hits the internal LAN and there's no network address translation going on to change the apparent IP address the traffic is coming from, the machine responding to my phone's request will try to send the reply through the default gateway, your router, out to the internet, and the router of course is going to have no idea how to send traffic over Tailscale.

If this is new territory for you, I'm sorry: you're building a really complicated internal model of the networking, how the networking bits work together, and how all the moving parts come together, and it's sort of complicated. This has also been a little more of a hairy introduction than I really wanted, sorry for that. But at this point you should be up and running with Tailscale, you have Nextcloud as proof, and you can also go hog wild adding Docker containers to your FreeNAS system.
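Back on that not-the-normal-name warning: rather than hand-editing config.php, Nextcloud's bundled occ tool can add the address you're connecting from to the trusted_domains list. A sketch, assuming the app container is named nextcloud-app and your phone sees the server at a 100.x Tailscale address:

    # append a second trusted domain (index 1; index 0 is usually the original name)
    docker exec -u www-data nextcloud-app \
        php occ config:system:set trusted_domains 1 --value=100.64.0.12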
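For the forwarding and masquerading piece, follow the Tailscale docs linked from the guide; on a Debian host the usual shape is something like this (a sketch; eth0 and the file name are examples for your setup):

    # enable IPv4 and IPv6 forwarding persistently
    echo 'net.ipv4.ip_forward = 1'          | sudo tee -a /etc/sysctl.d/99-tailscale.conf
    echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
    sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

    # masquerade (NAT) traffic arriving from Tailscale's 100.64.0.0/10 range as it
    # exits toward the LAN, so replies come back to this host instead of the router
    sudo iptables -t nat -A POSTROUTING -s 100.64.0.0/10 -o eth0 -j MASQUERADE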
Because you're using NFS storage, you're doing it in a way that's forward compatible: if something bad happens to your virtual machine or your virtual machine file, as long as you have that ZFS dataset, you can roll back and do whatever it is you need to do. You could even create more datasets. If you had a lot of stuff in your Nextcloud container, you could make the Nextcloud volume its own dataset, a dataset within a dataset. ZFS doesn't care; each dataset is really its own standalone filesystem, so it's organized hierarchically for you, but it's not actually nested in how it's set up under the hood. Having its own dataset means you can start doing fun things like snapshots and controls. You could create a snapshot of just Nextcloud (there's a short sketch of those commands at the end of this writeup), and when you do an update or some other major change, if you need to roll back, you totally can, and it's just that volume; it's not the whole container, you don't need all the other stuff, it's just your data. Same with the database aspect of it, same with everything else. I don't know about you, but I sleep better at night having that volume and that dataset.

It also helps you pick and choose which things you want to back up. There's this 3-2-1 backup philosophy, which, if you haven't heard of it before, you should check out, because snapshots alone don't cover you in all scenarios; 3-2-1 does the best job of covering you in all scenarios for backup. And having the Docker volumes accessible and browsable, where you can see pretty plainly what data is in them and make copies and backups as you need to, means you can copy the stuff to a USB flash drive pretty easily; with other setups that would be a lot more complicated. I like being able to get my hands around that kind of stuff really easily, without having to think about it too much or do too much digging.

So hopefully this tutorial is useful; definitely give it a thumbs up or a comment or something if it was really good for you. This is Level One, signing out. You can find me on the Level1 forums, and hey, as time goes on the guide will be updated, so check that out first and read through it all before embarking.
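As a footnote, here's the sketch of the dataset-per-service idea mentioned above, from the TrueNAS shell (names assume the tank/nfs-docker layout used throughout):

    # carve the Nextcloud volume out as its own dataset
    zfs create tank/nfs-docker/nextcloud

    # snapshot just Nextcloud before an upgrade
    zfs snapshot tank/nfs-docker/nextcloud@pre-upgrade

    # if the upgrade goes sideways, roll back only that data
    zfs rollback tank/nfs-docker/nextcloud@pre-upgrade

    # list snapshots under the Docker share
    zfs list -t snapshot -r tank/nfs-docker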