How to create your own Minecraft Server
Advice wanted on running servers
Background Information
In computing, a server is the designation for a machine on a network. The purpose of a server is to provide additional functionality, or "services," to its clients. A server can provide multiple services at once, such as sharing data, storage, or processing power among several clients. Common types of servers include database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.
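As a concrete illustration of the "file server" and "web server" roles, the short Python sketch below uses the standard library's http.server module to hand out files from the current directory to any client on the network. The port number 8080 is an arbitrary choice for this example.

    # Minimal sketch of a file-serving role using Python's built-in HTTP server.
    # It serves the files in the current directory to any client that connects.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Listen on all network interfaces, port 8080 (arbitrary for this example).
    server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
    print("Serving files on port 8080; press Ctrl+C to stop.")
    server.serve_forever()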
Frequently Asked Questions
What exactly is a server?
To put it simply, a server is just a computer. The word describes a role within a network and isn't tied to a specific piece of hardware. While the servers in a company's data center may look very different from an everyday desktop, that is because they may serve hundreds, if not thousands, of users on a daily basis. As such, they are built with reliability as the top priority, since these machines can be mission-critical to their clients' daily operations.
That being said, any computer can be made into a functioning server. Anything from an old desktop PC to a laptop, or even a $35 Raspberry Pi, can act as a server. These devices won't give you the same performance or reliability as hardware dedicated to serving thousands of clients, but for personal use, a regular PC can act as an inexpensive server for your home.
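For example, once a spare machine is running a service, you can check from another computer that it is actually reachable over the network. The sketch below is a minimal Python connectivity test; the hostname "old-desktop.local" is a made-up placeholder, and 25565 is the default port used by a Minecraft Java Edition server.

    # Quick check that a home-grown server is accepting connections.
    import socket

    def is_up(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if something is accepting TCP connections on host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # "old-desktop.local" is a placeholder; 25565 is Minecraft's default port.
    print(is_up("old-desktop.local", 25565))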
What's the difference between a desktop PC and a server?
While a standard desktop PC can act as a server, there are a few key hardware differences that set dedicated servers apart. As stated above, the servers in big data centers are built for reliability. Most data center servers therefore feature more than one power supply. While these machines can certainly run on just one power supply, the goal is reliability through redundancy: one power supply can be plugged into the wall and draw power from the building, while the other is plugged into a UPS. That way, if the building loses power, the UPS kicks in and keeps the server running with minimal to no downtime.
Another example is the type of RAM that is used. Most dedicated servers use ECC RAM rather than the non-ECC memory found in consumer PCs. The key reason is that ECC memory features single-bit error detection and correction. While an error in a single bit might not seem like a big issue for a normal computer, bit errors can lead to file corruption and system crashes, which in turn can mean lost data. By detecting and correcting these errors, ECC memory makes servers significantly more reliable at storing data and avoiding service interruptions.
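To give a feel for how single-bit correction works, here is a small Python sketch of a Hamming(7,4) code, the family of codes that ECC memory builds on. Real ECC DIMMs use a wider variant that also detects double-bit errors, but the principle is the same: extra parity bits pinpoint which bit flipped so it can be flipped back.

    # Minimal Hamming(7,4) sketch: 4 data bits protected by 3 parity bits.

    def hamming74_encode(d):
        """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4                      # covers positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4                      # covers positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4                      # covers positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]    # codeword positions 1..7

    def hamming74_correct(c):
        """Detect and correct a single flipped bit, then return the data bits."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]         # recompute parity group 1
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]         # recompute parity group 2
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]         # recompute parity group 3
        syndrome = s1 + 2 * s2 + 4 * s3        # non-zero syndrome = error position
        if syndrome:
            c[syndrome - 1] ^= 1               # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]        # extract d1..d4

    word = [1, 0, 1, 1]
    code = hamming74_encode(word)
    code[5] ^= 1                               # simulate a single-bit memory error
    assert hamming74_correct(code) == word     # the error is found and repaired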
A final key difference is the CPUs used in servers. High-end servers often use CPUs with anywhere from 8 to 32 cores, but with slower clock speeds and lower single-thread performance. Faster processors typically require significantly more power for both processing and cooling, and one thing an organization must consider when running a server is the cost of electricity over a long period: power consumption can increase by roughly 75% when moving from a 3 GHz CPU to a 4 GHz CPU. There is also an argument that a faster CPU isn't necessary, because on a server with dozens of cores and threads the tasks thrown at it are usually highly parallel. Single-thread performance, while nice to have, is not the limiting concern. If giving up 20% of performance per core lets the system fit 50% more cores, that can be considered an overall net win.
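That trade-off is easy to check with back-of-the-envelope arithmetic. The core counts in the sketch below are made up purely for illustration; only the 20% and 50% figures come from the example above.

    # "Fewer fast cores" vs "more slow cores" for a well-parallelized workload.
    # Core counts are illustrative; the 20%/50% figures follow the text above.

    fast_cores, fast_speed = 16, 1.00          # baseline per-core speed
    slow_cores, slow_speed = 24, 0.80          # 50% more cores, each 20% slower

    fast_throughput = fast_cores * fast_speed  # 16.0 units of work per unit time
    slow_throughput = slow_cores * slow_speed  # 19.2 units of work per unit time

    print(f"Net change: {slow_throughput / fast_throughput - 1:+.0%}")  # +20%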