.NET Core is Microsoft’s next major version of .NET, and there is plenty to be excited about: it’s modular, it offers flexible deployment, it’s open source, and, of course, Microsoft supports it. What excites me the most, however, is that it is now platform independent. What does this change? Everything!
.NET has had a huge ball and chain around its ankle: the Windows operating system. Not only is the runtime constrained to it, but development is too, and as a software developer, that leaves me conflicted. On one hand, .NET is a wonderfully architected, mature, and stable technology. On the other, the world runs on more than just Windows, especially as Android, iOS, and IoT (Internet of Things) devices take over. There are options, such as Xamarin for mobile app development or Mono for cross-platform development, but these require extra tooling, cost (in the case of Xamarin), and cognitive overhead. With .NET Core, there is a common and simple way to build and run apps independent of platform. While .NET Core may not run on Android or iOS yet, there are huge advantages for developing server applications, whether in-house or in the cloud.
Dev and Ops Are Made More Efficient
Several environments sit between coding and deployment. Let’s say a project has four: Development (Dev), Continuous Integration (CI), Quality Assurance (QA), and Production (Prod). As a consultant, I may also be contributing to multiple projects, and it’s rare that two projects use the same development software, let alone the same versions. That leaves me with multiple development environments to maintain, and rather than carrying around three laptops, I use virtual machines to segregate my environments.
It works, but there’s a lot of overhead. I have one virtual machine (VM) image that I use exclusively for writing software, nothing else, and its Windows directory alone currently takes up 20GB. Add in Program Files, and that’s another 15GB: 35GB of space just to write a C# application. My deployment platforms for CI, QA, and Prod face the same constraints. For Dev, and maybe CI, even with terabyte-sized hard drives being common these days, regularly syncing VM images of tens of gigabytes each across a team is logistically cumbersome. For QA and Prod, Azure certainly helps simplify and centralize this process through Azure Web Apps. That creates a different problem, though: my QA and Prod environments are no longer in parity with Dev and CI. It’s a very real issue that developers encounter frequently, when something “works on my machine” but breaks when it is deployed. Then there is the issue of performance. A virtual machine is slower than bare metal for obvious reasons: a lot of memory and CPU in an environment is spent virtualizing an entire computer.
With .NET Core, things become easier and more efficient. One increasingly popular replacement OS for software development is Linux. Like .NET Core, Linux is highly modular, allowing you to install only what you need. As a result, stock Linux distributions can take as little as 100MB of space, sometimes less, and their CPU and memory footprints are correspondingly small, especially when running without a Graphical User Interface (GUI) desktop environment. Microsoft recently introduced its own take on a small, modular operating system, called Nano Server, which I’ll refer to as Nano; it initially comes in at around 500MB. Just as important as Linux and Nano’s small footprint, these OSes natively support a powerful kind of virtualization called container-based virtualization, one popular implementation being Docker. A container shares resources with its host, so images are much smaller and carry far less overhead. A container can be instantiated (the equivalent of booting up a virtual machine) and thrown away in a fraction of the time it takes to boot a VM. While my computer would come to a grinding halt if I attempted to boot three VMs at once, it would have no problem running dozens of containers. These qualities make containers easily disposable, which makes provisioning a snap and scaling lightweight and cheap. For example, I no longer need an expensive CI solution. Once I’m done coding a feature, I can merge it into the master branch, spin up the exact containers used in production on my local machine, deploy the solution to those containers, run integration tests against it, and tear it all down. That’s done 100% locally, probably quicker than it would take for CI resources to even free up and begin a build. Not to mention, it’s all done with parity to the production environment, a very important goal of DevOps that I’ll touch on next.
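That local, disposable CI loop can be sketched in a few shell commands. This is only an illustration: the image name, port, and test project below are hypothetical, and it assumes Docker and the .NET Core CLI are installed.

```shell
#!/bin/sh
# Sketch of a disposable, local "CI run" -- all names are placeholders.
set -e

# Spin up a throwaway container from the same image production uses.
docker run -d --name local-ci -p 8080:80 myapp-prod-image

# Build the merged solution and run integration tests against the
# containerized app (test project path is hypothetical).
dotnet restore
dotnet test test/MyApp.IntegrationTests

# Tear it all down; containers are cheap and disposable.
docker rm -f local-ci
```

Because containers start in seconds, the whole cycle is fast enough to run on every feature branch before the code ever leaves your machine.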
In short, Dev and Ops now use the same products and test against the same environments (albeit in different places), in a manner that requires less overhead and fewer resources on projects where DevOps is critical.
Note that Windows offers a technology called Hyper-V that allows for better performance when running multiple Windows VMs on one machine. However, Hyper-V requires specific hardware support and careful configuration for best performance. Containers require no special hardware, and while there are always best practices for everything, their performance is excellent right out of the box.
Why is parity important? Every developer has said (some say it weekly): “Weird, it worked on my machine earlier.” After deploying to QA, or worse, production, a bug is revealed by even the most subtle environmental difference between the developer’s machine and the deployed environment. Containers make achieving parity practical. Using container technology such as Docker means you only have to maintain an image, not an environment. Yes, you could maintain a VM image for CI, QA, and Prod and update those environments regularly, but then you are back to maintaining environments. Worse, if CI is on a VM while QA and Prod are hosted on an Azure website, you no longer have parity between environments at all. With containers, Ops maintains a production image intended for the production environment. That image is shared with developers, testers, and quality assurance, along with the script that can spin up an entire production ecosystem, test it, and tear it down. Everyone works with the same environment as production, so there is less risk of breaking production and less time spent hunting down subtle issues. This could theoretically be achieved with VMs, but their size and performance make it impractical.
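In concrete terms, “maintaining an image, not an environment” looks something like the following sketch. The registry address, image name, and tag are hypothetical placeholders: Ops publishes one image, and every team and stage consumes exactly the same bits.

```shell
# All names below are hypothetical placeholders.
# Ops publishes the production image once:
docker push registry.example.com/myapp:1.4.2

# Dev, CI, and QA each pull and run that exact image:
docker pull registry.example.com/myapp:1.4.2
docker run -d -p 8080:80 registry.example.com/myapp:1.4.2
```

Updating an environment is then just publishing a new tag; nobody patches a long-lived machine by hand.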
Costs Are Reduced
Previously, there were two main costs: a Windows license to run .NET on, and a Visual Studio license to develop with commercially. For small teams, maybe this isn’t much of an overhead or startup cost, but for larger teams and startups, developing with .NET can be expensive. A team needs zero licensing money to use and develop with .NET Core for commercial purposes when choosing Linux as the OS. On Windows, you will unfortunately still need to pay for Windows 10 Professional, Enterprise, or Server 2016 licenses (if you don’t already have them) to begin developing with containers.
The cloud is another matter. Windows Server instances are more expensive in the cloud, in some cases twice as expensive as Linux. One reason is that Windows licensing costs extra (except perhaps on Azure, since Microsoft owns it). Licenses aside, saying Linux costs 1.3 cents per hour while Windows costs 2.3 cents per hour for an AWS t2.micro is not an apples-to-apples comparison, because the other cost is performance. That t2.micro instance is your old computer collecting dust in storage. Install Windows Server on it, and it may slug along, assuming it runs at all. Install the latest Linux on it, and it’ll run like new, even with a desktop environment (Nano has no desktop environment, and its size shoots up to more than 10GB if one is needed). So, unfortunately for Windows Server, you may need an even higher (i.e., pricier) tier to achieve the performance of Linux running at a lower tier. Azure will likely provide on-par pricing for Nano Server, but it remains to be seen how Nano will reduce computing costs on other cloud providers. With .NET Core, you can reduce your cloud costs by using Linux. That’s a great option that didn’t exist before.
Licensing Is Simplified
Microsoft packages .NET Core and Visual Studio (VS) Code under the Microsoft license, which allows you to use the software for any purpose but includes the usual clauses, such as protecting Microsoft from liability. MIT licenses are also available for both products in their GitHub repositories, so if you want to build these products from source, the MIT license applies. To me, either license is an improvement. I’ve been in a scenario where I had to use trial versions of Windows and Visual Studio due to issues with my MSDN account, which took Microsoft and our license handler weeks to resolve. Developers like to solve problems with software, and licensing is a legal matter that tends to scare developers away from using software or being innovative with it. For example, if I’d like to create a base development environment in the form of a virtual machine and use it across many projects, but I’m unsure whether that would violate any licensing agreements, then I’m probably not going to do it.
Unfortunately for Nano Server, it is bundled with Windows Server 2016 as a deployment option, and commercial Windows Container support requires a licensed Windows OS such as Windows 10 Professional, Enterprise, or Server 2016. If you are looking to avoid strict licensing dependencies for your project, then Linux with .NET Core is the way to go.
Automation Is Easier
I’ve set up CI VMs in the cloud only to learn that those environments needed Visual Studio installed in order to run a build; installing MSBuild.exe alone was not enough. With .NET Core, there are no “gotcha” manual dependencies to install. I can provision a Docker image using a native install script, with no manual steps in between, and I can write a script that sets up or spins up an entire production ecosystem from scratch. Provisioning tools like Puppet and Chef simplify this process for operating systems like Windows 10 and Windows Server, so it can be done there too. But with an automation-friendly OS like Linux, you can avoid the learning curve and costs associated with those tools by using the native shell or your favorite scripting language.
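As a sketch of what such an unattended provisioning script might look like on a Debian/Ubuntu-based image: the package feed setup and SDK package name below are placeholders, since the exact names vary by .NET Core version and distribution (check Microsoft’s install documentation for your combination).

```shell
#!/bin/sh
# Unattended build-environment provisioning sketch -- no IDE,
# no manual steps. Package names here are placeholders.
set -e

apt-get update
apt-get install -y curl apt-transport-https

# Register Microsoft's package feed here (URL omitted; it is
# version- and distro-specific), then install the SDK:
apt-get install -y dotnet-sdk   # placeholder package name

# From here, the entire build runs headless:
dotnet restore
dotnet build --configuration Release
```

Drop those lines into a Dockerfile’s RUN instructions and the same script that provisions a developer’s container also provisions CI and production, with no Visual Studio anywhere in sight.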
What about Visual Studio?
Visual Studio, an Integrated Development Environment (IDE), will not run in any environment other than Windows. To IDE or not to IDE can feel like a religious debate. For those who cannot live without Visual Studio, Windows it is! For those who generally prefer a code editor to an IDE, Microsoft has created a wonderful, cross-platform, open source editor called Visual Studio Code. VS Code is a code editor first. Although it could probably be bent into an IDE through extensions, its goal is to make you more efficient at writing code (as Visual Studio also aims to do). Scaffolding, package management, compiling, testing, SSDT, server explorers, and so on are all possible through extensions, but they are secondary to writing code. This falls in line with the Linux (originally UNIX) philosophy of “make each program do one thing well.” Because of this, VS Code is lightweight and fast, and it gets out of the way of programs that do those other things better. As a Linux, Emacs, and Atom enthusiast, I may have found my new favorite editor in VS Code.
.NET Core is available today as a stable release. It’s important to note, though, that it may be mid-2017 before .NET Core supports .NET Standard 2.0 and regains compatibility with the bulk of existing .NET libraries. Seasoned .NET developers with new or existing projects may want to hold off on .NET Core until that metamorphosis is complete. But when it is, take advantage of all it has to offer: it will reduce costs, simplify licensing, and simplify logistics for development, quality assurance, and operations.