The 2019 Home Lab

  • Comments posted to this topic are about the item The 2019 Home Lab

  • I've never had my own dedicated lab environment until recently, when I decided that I really need to start learning PowerShell. A book that I bought contains instructions for creating a lab in Hyper-V, and I now have a laptop with Windows 10 Professional (so I can run Hyper-V) that hosts a Hyper-V VM with Windows Server 2016 and SQL Server 2016. It was interesting to see how to set up Active Directory and other aspects of installing Windows Server.

    Hyper-V was nowhere near as difficult to set up as I had been led to believe by others and I'll continue to use it for other projects.

    I need to learn PowerShell first, but I would also like to learn Docker, so that will have to go into the queue.
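
    For anyone following the same path, a minimal sketch of building a lab VM from PowerShell looks something like this (the switch, VM, and file names here are illustrative, not from the book):

    ```powershell
    # Create an internal switch and a generation-2 VM for the lab (names/paths are examples)
    New-VMSwitch -Name "LabSwitch" -SwitchType Internal
    New-VM -Name "LabDC01" -Generation 2 -MemoryStartupBytes 4GB -SwitchName "LabSwitch" -NewVHDPath "D:\Lab\LabDC01.vhdx" -NewVHDSizeBytes 60GB
    Set-VMProcessor -VMName "LabDC01" -Count 2
    Add-VMDvdDrive -VMName "LabDC01" -Path "D:\ISO\WindowsServer2016.iso"  # boot media for the OS install
    Start-VM -Name "LabDC01"
    ```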

  • I use VirtualBox and VMware, as some vendors (such as Teradata) supply preconfigured virtual machines. Hortonworks and Cloudera both supplied their entire stacks as a VM.

    I've had a few issues with the Windows Subsystem for Linux (WSL). It seems to be very fussy about what it will allow for a root password: it was something like a maximum of 8 alphanumeric characters, which seems a bit daft given the need for strong passwords. Apart from that, WSL is a massive step forward.

    The thing I like about VirtualBox with Vagrant is that I can build a base box, install a load of software, and then bake that box as a future base box, along the lines of the sketch below.
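
    A minimal sketch of that bake-and-reuse workflow, assuming an existing Vagrantfile (the box names are illustrative):

    ```
    vagrant up                                   # provision the VM from the current Vagrantfile
    vagrant package --output dev-base.box        # export the provisioned VM as a reusable .box
    vagrant box add mylab/dev-base dev-base.box  # register it locally
    # future projects can then start from: config.vm.box = "mylab/dev-base"
    ```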

    Containers are definitely something we should all get familiar and comfortable with. I spend my own cash on a Pluralsight subscription; the courses on Docker/Kubernetes by Nigel Poulton are fantastic, whether you just want an overview or a really deep dive.
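
    If you want a quick taste before committing to a course, a disposable SQL Server container is a one-liner. This sketch assumes Docker is installed and uses Microsoft's public image; the password is a placeholder:

    ```
    docker run -d --name sql-lab -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" -p 1433:1433 mcr.microsoft.com/mssql/server:2017-latest
    docker rm -f sql-lab   # throw it away when you're done
    ```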

    I have worked with AWS in two separate companies. I've found the AWS cost calculator to be a work of fantasy: I have never seen an AWS bill that remotely resembled what the calculator estimated. That is one source of worry regarding the costs of AWS and the cloud. Experience has also taught me that I don't know enough to know whether my stuff is adequately secured. There are various horror stories about people's cloud infrastructure being hacked, huge resources being spun up on their accounts without them knowing, and a huge bill being the result.

  • I recently bought myself a new PC, which I use as a lab and home desktop. I have both Ubuntu (Kubuntu 19.04) and Windows 10 Pro installed on separate disks, and dual boot with GRUB.

    I really like the Linux environment, as I'm constantly spinning up (and destroying) containers on it using LXC. I have a few profiles set up for different needs, for example whether the container uses zfs or ext4 as its file system (SQL Server doesn't support zfs), and whether the container is exposed to the network. I've also got a home OpenLDAP service running on a couple of Pi 3B+s, which means that I'm halfway to single sign-on; I have a bunch of service accounts ready for instance authentication on the containers that are exposed to the network.
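
    As a rough illustration of that setup (the pool, profile, and container names below are examples, not a recipe):

    ```
    lxc storage create ext4pool dir source=/srv/lxd      # dir driver on an ext4 mount, for SQL Server
    lxc profile create sqlserver
    lxc profile device add sqlserver root disk pool=ext4pool path=/
    lxc launch ubuntu:18.04 sql1 --profile default --profile sqlserver
    ```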

    One of the big things I notice between Ubuntu and Windows is how much less resource-hungry the Linux environment is. I can happily have two or three containers running, along with a few applications, and I'm probably only sitting at 2.3GB of memory used (currently out of 16GB, but I'm probably going to go up to 32GB next month, just because I can), and the CPU is almost completely idle (less than 5%). On Windows, even before I start opening any applications, it's using over 4GB and the CPU idles between 5-10% instead.

    Building a lab has been really good for learning, and I'm just not scared of breaking things any more. Snapshotting my containers before I do something experimental is a great benefit: if I do break them (as I often have), I just roll back. A lab can be a little costly if you want one that is going to work well and survive for several years, but I don't regret it, and I'm sure it'll pay me back before it needs replacing.
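
    The snapshot-then-experiment loop is just two commands (container and snapshot names are illustrative):

    ```
    lxc snapshot sql1 pre-experiment   # cheap save point before trying something risky
    lxc restore sql1 pre-experiment    # roll back if the experiment breaks the container
    ```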

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • My everyday + lab setup is a well-spec'd MacBook Pro (2.8GHz Core i7, 16GB RAM, 1TB SSD). I have VirtualBox and Docker installed.

    For Windows development / testing, I have a Windows 10 licence and several VMs, including a plain "ready-to-clone" one. I have SQL Server Express (and Dev) and a licensed copy of MS Access 2016 installed in various VMs.

    For Linux development / testing, I have many VMs, mainly various Ubuntu versions. Some of these are "ready-to-clone" templates.
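
    Cloning one of those templates is scriptable; a minimal sketch with VBoxManage (the VM names are illustrative):

    ```
    VBoxManage clonevm "ubuntu-base" --name "ubuntu-dev1" --register --mode all
    VBoxManage startvm "ubuntu-dev1" --type headless
    ```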

    For Mac and Web development / testing, I have the native Mac and Docker.

    Also available in the Home Lab: Raspberry Pi, Chromebook, HP Pre3, Alexa, Turing Tumble 😉

    I am one of the people mentioned in the article who are reluctant to use Azure, AWS, or GCP for self-funded projects due to the ongoing costs.

  • I've recently purchased a nice ThinkPad and have set up Hyper-V to manage virtual environments. At the moment I have one VM running Linux and a second one running a 180-day evaluation of Windows Server 2016. SQL Server Developer edition all the way, plus Visual Studio Community edition and a few free third-party SSIS tools.

    My only regret is that I didn't go ahead and get more memory, although I may upgrade as a Christmas present to myself.

    Luther

  • I always had that dream of a home lab. Actually, I have one: a Windows 10 Pro laptop with 32GB of RAM (you really must get as much memory as you can) with a few VirtualBox VMs. Because I'm learning to deal with AWS and cloud data warehouses, I have an account there and try to keep the costs to a minimum.

    It isn't easy to install and maintain all those different virtual environments while trying to learn how to deal with Linux, Python, AWS, etc. It's like being the entire IT team in one person! As for costs, whether to use the cloud or to invest in powerful PCs with their high TCO, it's not easy to evaluate.

  • I consider myself an anomaly here with the home lab, but I've been using refurbished server-grade equipment as testbeds for various things. I do have a large portion of my test lab in Azure for various scenarios, but it really is more cost-effective for my specific needs to maintain older refurbished server equipment for a larger test environment. I use older HP DL380 G7 and G8 servers as the compute side. Believe it or not, you can purchase one of these from a local refurbisher or eBay for not much cash: a dual-socket Intel Xeon platform with 16 cores and 256GB of RAM can be had for under $1000 USD. Heck, my largest server is an HP DL580 G7 with 40 physical cores and 512GB of RAM, and I picked it up for just $1600, shipped and ready to go. You can't build it new for anywhere close to that. These are built to run 24x7, and replacement parts are cheap, also from eBay or local refurbishers.

    If you want shared storage, as I have here, Synology makes network-attached storage units that are inexpensive, can connect to any of the hypervisors out there via the various supported protocols, and can double as a home file server, media server, etc. VMware ESXi and Microsoft Hyper-V are free, and the 180-day evaluation licenses or anything from MSDN will cover you from an OSE perspective.

    Realistically, you could build a nice single-server home lab with enough local storage that's not too loud, doesn't draw much power while running, and is built to be left on if needed, for under $1000 USD. I go this route because when I'm done with the equipment after a few years, I either give it away to friends who need home labs themselves, or put the parts back on eBay and reinvest.

  • This is just my personal perspective, but today in 2019 I wouldn't invest in an on-prem home lab. For my personal PC, I prefer a smaller notebook, and finding a model that supports the extra HD and RAM requirements to run multiple VMs means paying extra $,$$$. I'd rather build up a training environment on my work-provided laptop or RDP into a development server from home.

    As a DBA and architect, I like to keep up to speed on things like Active Directory and Hadoop, but I'm totally not interested in going through the motions of installing and configuring the environment. Microsoft provides some free self-paced labs that come preinstalled and configured with everything you need to train for a specific product or task.

    https://www.microsoft.com/handsonlabs/selfpacedlabs

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • I definitely get it, and the larger home lab is certainly not for most folks. I just have testing needs at certain levels of scale that laptops and desktops cannot provide, unfortunately. If work provides a test/dev lab, use it! (I am my own employer, so I have to provide that for myself.) For most folks, a laptop or desktop with a decent amount of RAM, Hyper-V or VirtualBox or VMware Workstation, and fast disks should be all you need to explore and validate any scenario you can think of.

  • At home I've got an old desktop with two 1TB HDDs; it's what I do much of my training on. It's at least 5 years old and is beginning to experience problems. The latest Windows 10 update has left me in a bad way: my Windows profile is now what's called a temporary profile. Things are just bad, but it's dragging along, and I simply can't afford to upgrade it.

    Another part of my home lab is my laptop, which is 2 years old and has a 512GB SSD. I live in New Mexico, which has huge tracts of land without any Wi-Fi, cell phone coverage, etc. Therefore I absolutely MUST have as much as I can get onto my system, because during my long commute to work across those stretches of nothing, if I don't have it on my laptop - too bad, so sad.

    Kindest Regards, Rod

  • BrainDonor wrote:

    Hyper-V was nowhere near as difficult to set up as I had been led to believe by others and I'll continue to use it for other projects.

    I need to learn Powershell but I would like to learn Docker - so that will have to get into the queue.

    FWIW, Hyper-V isn't that hard, but it was inconvenient, especially with Wi-Fi adapters: it didn't easily let guests out onto the network. I think it's gotten better over the years.
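
    If anyone hits that today, binding an external switch to the wireless adapter is a couple of cmdlets (the adapter name below is a guess; check Get-NetAdapter for yours):

    ```powershell
    Get-NetAdapter                      # find the wireless adapter's name, often "Wi-Fi"
    New-VMSwitch -Name "ExternalWiFi" -NetAdapterName "Wi-Fi" -AllowManagementOS $true
    ```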

    For Docker, small plug: https://www.sqlservercentral.com/stairways/stairway-to-database-containers

  • At home I tend to rebuild my main PC every 18 to 24 months and then repurpose my former desktop into a server. Not necessarily elegant, but it gets the job done. My current PC is a Ryzen 5 1600 on Windows 10, and I use VirtualBox for VMs there with SQL Server 2017 Dev. My prior PC was an AMD FX-8320 that is now running Ubuntu 18.04 and SQL Server 2017 Dev. I'm hoping to get into containers on that machine, but I'll probably have to upgrade the RAM on the Ubuntu box before I get too deep, since that one only has 8GB.

    Most people would laugh at my other computers, such as the low-power Celeron J3455 that I run as an always-on file server / Plex server, and my home laptop, which is just an Atom processor with only 32GB of flash memory soldered in instead of a true SSD or disk.  😉

  • I just started building out a new home hybrid lab yesterday. On the local side, it's VMware Workstation on a beefy Windows 10 laptop. I'm getting some templates built out for a few different base operating systems: Windows 10, Server 2019, Ubuntu, RHEL 7, etc. The idea is to have base VM templates to generate linked clones faster and with less storage overhead than a full install from ISO every time. The final configurations for various roles (client workstations, SQL Server instances, DCs, AGs, Docker hosts, k8s clusters, PAM servers, etc.) will happen via source-controlled scripts. I want to be able to closely replicate most of the common deployment topologies that one would encounter in the wild. I'll need to spend some time learning the VMware Workstation REST API to get this lab as fully automated as possible too. It should be more than "just a lab" when it's all put together; more a reference for various best-practice configurations that could be adapted to production.
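
    For the curious, the Workstation REST API is served by the bundled vmrest daemon; a rough sketch of listing VMs and cloning from a parent template (port 8697 is the vmrest default; the credentials and VM name are placeholders):

    ```powershell
    $base = "http://127.0.0.1:8697/api"
    $cred = Get-Credential                                         # whatever was configured via vmrest -C
    $vms  = Invoke-RestMethod -Uri "$base/vms" -Credential $cred   # list registered VMs
    $body = @{ name = "sql2019-clone1"; parentId = $vms[0].id } | ConvertTo-Json
    Invoke-RestMethod -Uri "$base/vms" -Method Post -Body $body -Credential $cred -ContentType "application/vnd.vmware.vmw.rest-v1+json"
    ```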

    On the cloud side, I'm a big fan of infrastructure as code, so the plan is to use more source-controlled configurations and scripts to be able to quickly and consistently build and tear down environments across cloud providers as the need arises. I know a lot of people are scared of cloud costs getting out of control, but there are basic things you can and should do to prevent problems, such as configuring automatic shutdowns, usage alerts, and spending caps. In a lab environment, it's really easy to control costs. It's when you're forced to scale in production to keep a business operating that the unknown is a lot scarier.
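
    As one concrete example of those guardrails, in Azure an auto-shutdown schedule is a single CLI call (the resource group and VM names are placeholders):

    ```
    az vm auto-shutdown -g lab-rg -n sql-lab-vm --time 2200   # daily shutdown at 22:00 UTC
    ```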

  • I've had a home lab for decades and have found it invaluable for keeping up with the technologies I use and want to explore. This was quite expensive back when bare metal was the only option but now, with VMs and containers, the cost is much more reasonable and provides the flexibility to run a lab on a single machine.

    One can get a 2TB NVMe stick for under 500 USD nowadays (I'll never buy another HDD or SATA SSD unless mandated by tight budget constraints), which is enough storage and IOPS for many VMs and containers running concurrently, if the host has enough RAM. I bought a decent 6-core Win10 Pro laptop with 32GB RAM a few months ago, and it can run multiple VMs and containers concurrently with very good performance. This allows me to do development and testing on virtually (no pun intended) any SQL Server or OS platform, with or without a network connection, and push to the cloud as appropriate. I also keep my older boxes around for various purposes, like wired network performance testing.
