by Devin Yang
(This article was automatically translated.)

Published - 6 years ago (Updated - 6 years ago)

Foreword

Let me tell you about the evolution of my network setup.
My GCP bill is $46.35 a month, and my remaining free-trial credit is $111.05.
I have nearly used up the $300, and I am about to move back to a self-managed host.
 

Detailed history

Thirteen years ago, the company I worked for offered free system administration training, so I took quite a few network-related certification courses, all taught from the original textbooks:
Red Hat's RHCE, Microsoft's MCSE, Cisco's CCNA, and so on. Since I prefer Linux, I started learning how to set up my own website.
First I set up a Linux host at home and studied all kinds of services:
I registered my own domain name, ran DNS with BIND, and set up Samba, a DHCP server, a mail server, Apache with HTTPS, LDAP, OpenSwan (IPsec VPN), iptables as a firewall, sendmail, RADIUS, and even played with ESXi. The machine sat at home and stayed on 24 hours a day.
So whenever the network went down, or every two or three years when a hard disk failed, I was the one replacing the disk and restoring the system.

By then I had learned just about every service worth running on Linux. After more than ten years of playing with it, I no longer wanted to keep a host at home;
I just wanted to outsource all of that work. Since this is only a hobby and my usage is small, I moved mail to Zoho's free hosting.
The domain name was already paid for, and my registrar is Networksolutions, so I simply let the registrar host the DNS as well.

Before outsourcing, I ran two self-managed name servers. With only one Linux host at home, how could there be two name servers?
About six years ago I came into contact with ESXi at work and entered my private-cloud period,
so I also ran VMware's ESXi at home, with several VMs installed, including of course the two DNS hosts.

But as long as there is a host at home, hardware maintenance is always looming. After all, it was just a PC, handed down after an upgrade of my home computer.
On top of that, the fan was so noisy that I kept the machine on the balcony, exposed to sun and rain, so some piece of hardware would fail every two or three years.

So I figured I would take the electricity and hardware costs I was saving, add a bit more money, and host my website with an outside provider.
(I have had a hard disk die and wipe out all my data, so I built a disk array; then I kept adding disks until the array itself broke @@. All in all, expensive and troublesome.)

So I started looking for a web hosting company. About a year and a half ago I chose BlueHost and moved my site there.
The reasons: it was cheap (about NT$600 a month), it provided SSH, and I mainly wanted to experiment with writing some Node.js programs.
But connecting to this company from Taiwan is really far too slow. The servers are in the United States, it is not a VPS, I have no root privileges, and even after contacting customer service I could not change the MySQL time zone (because the hosting is shared).
That lasted a bit over a year. Right after I had paid for a second three-year term, I felt it had only gotten slower, so what did I do? I swallowed the loss (I had paid $719.64 for three years and used only about half a year of it)
and switched to GCP. Why not AWS? Because I had already used up its free trial.

GCP's free trial gives you $300 of credit, which offset some of the money I had sunk into BlueHost, and the Compute Engine service on GCP has been very satisfying to use.
The speed is extremely fast, and because it is a VM I have full root access, so setting up public-key authentication and installing Docker was no problem at all. I was very happy using it.
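For anyone curious, here is a minimal sketch of that kind of setup. The key path, username, and address are placeholders, and the Docker convenience script is just one of several ways to install Docker; this is not my exact procedure.

```sh
# Hypothetical sketch: generate a key pair locally, then install Docker on a
# Debian-based Compute Engine VM. Names and the IP are placeholders.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/gce_key -C "devin"

# Add the public key to the VM (for example via the GCP console's SSH keys
# metadata), then connect with it:
ssh -i ~/.ssh/gce_key devin@YOUR_VM_EXTERNAL_IP

# On the VM: install Docker using the official convenience script.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker "$USER"   # optional: run docker without sudo
```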

But a hobby doesn't put food on the table. The current cost is about NT$1,300 a month,
and since I want to work on video-conferencing applications, I expect the usage to push the bill well past NT$2,000. Is there a cheaper way?

Before going further, let's review how my home connection changed. For the first ten years or so, learning Linux as a beginner meant I needed fixed IPs, so I jumped from HiNet's dynamic-IP ADSL to a SeedNet business plan with 4M upload and three fixed IPs. That lasted until I switched to BlueHost and no longer needed fixed IPs, at which point I cancelled the SeedNet line (SeedNet has since been merged into Far EasTone) and went online over cable TV broadband.

So the year before last, with this move already in mind, I first put the home network on cable TV broadband, and then switched back to HiNet's residential fiber (100M/40M), which lets you opt for a fixed IP.
That is my current situation, and one IP is plenty: DNS and email are hosted elsewhere, and with name-based virtual hosting a single IP can serve as many websites as I like.
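As an illustration only (the site names and paths are made up, not my actual configuration), name-based virtual hosting in Apache looks roughly like this; the server picks the site by the Host header, so every site can share the one fixed IP:

```apache
# Hypothetical example: two sites sharing one IP, selected by ServerName.
<VirtualHost *:80>
    ServerName   site-a.example.com
    DocumentRoot /var/www/site-a
</VirtualHost>

<VirtualHost *:80>
    ServerName   site-b.example.com
    DocumentRoot /var/www/site-b
</VirtualHost>
```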

Meanwhile, the NAS we use at the office is said to be very easy to use. Even though the company's NAS is a top-of-the-line machine, a disk still fails every now and then and develops bad sectors,
and in the end it was sent in for repair because it kept shutting itself down. But I really gained experience fixing it, and by now I know this NAS inside and out.

I really like the features on the NAS side, such as task scheduling, automatic certificate renewal, SSH and so on, and more importantly, it supports Docker.
So I set one up for myself: I bought a 5-bay model and filled it entirely with enterprise-grade hard drives (for better durability).

This time, once the GCE free credit runs out (estimated to happen before May 2018), I will move the website back to the NAS at home.
Because everything runs in a Docker environment, I don't have to worry about whether a setup that works on GCE will stop working once it lands on the NAS.
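As a rough sketch (the image name and paths below are placeholders, not my real setup), the point is that the same container runs unchanged on both machines; only the host-side volume path differs:

```sh
# Hypothetical example: the identical image runs on GCE and on the NAS.
# On GCE:
docker run -d --name web -p 80:80 \
  -v /home/devin/site:/var/www/html my-web-image

# On the NAS, only the host path changes; the container itself is identical:
docker run -d --name web -p 80:80 \
  -v /volume1/site:/var/www/html my-web-image
```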

Ha, so after more than ten years I have gone full circle and am back to running my own web server at home,
still on a HiNet line. Is that evolution or regression?

Actually, some things have changed: DNS and the mail server have been moved out, although I may eventually lose patience with hosted DNS and take it back under my own management.
After all, a domain registrar only offers fairly basic settings; for advanced things such as restricting queries, or controlling subdomains from programs I write myself,
it seems more convenient to manage it on my own.


Evolution:

In other words, I started with a single Linux host whose one external network card was bound to two public IPs (DNS requires two name servers, a primary and a secondary),
and I have now arrived at running everything on Linux in Docker.
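For illustration (the addresses and interface name are made up), binding a second public IP to the same network card is just an address alias, something like:

```sh
# Hypothetical example: one physical NIC (eth0) answering on two public IPs,
# so the primary and secondary name servers each get their own address.
# Assume 203.0.113.10 is already the interface's main address.
sudo ip addr add 203.0.113.11/24 dev eth0 label eth0:1   # add the second IP as an alias
ip addr show dev eth0                                    # verify both addresses are bound
```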

Advantages:

After switching to Docker, at least I don't have to reinstall the operating system when something goes wrong. The NAS currently uses RAID 6, so unless the whole NAS dies it should be basically stable.
If a disk fails there is no need to shut the machine down at all; the array rebuilds after a hot swap (this model supports it). It uses less power than the PC I used to run the website on, and costs less than GCP did.

Shortcomings:

The CPU is weak, but for running an ordinary website it should be acceptable.
 

History of changes:

PC (Linux Box) => Private Cloud (ESXi) => Public Cloud (GCP) => NAS (Container Environment).






 

Tags: docker

Devin Yang

Feel free to ask me if you don't get it. :)

