
What is a Server? A Tech Guide by GIGABYTE

In the modern age, we enjoy an incredible amount of computing power—not because of any device that we own, but because of the servers we are connected to. They handle all our myriad requests, whether it is to send an email, play a game, or find a restaurant. They are the inventions that make our intrinsically connected age of digital information possible. But what, exactly, is a server? GIGABYTE Technology, an industry leader in high-performance servers, presents our latest Tech Guide. We delve into what a server is, how it works, and what exciting new breakthroughs GIGABYTE has made in the field of server solutions.

1. What is a Server?
At its heart, a server is a computer—not a personal computer like the kind we have at home or in the office, but a specialized piece of equipment, designed to provide specific functions and services for other computers. Hence the name: server.

Servers are used because no single computer can be expected to fulfill every role and perform every function. By allocating the task to a specialized server on the network, a multitude of users can access an enormous number of services in a reliable, sustainable, and cost-effective manner. The process by which a user sends a request through their own device (called a “client”) to a server is called the request-response or request-reply model; it is the basis of the modern client-server IT architecture. Basically, this is what happens every time you read an online article, check social media, stream a movie, or order food delivery.
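The request-response dialogue can be sketched in a few lines of Python. This is a hedged illustration, not production server code: a throwaway HTTP server plays the role of the server, and the standard library's urllib plays the client that initiates the dialogue.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal request-response sketch: the server listens for requests,
# the client initiates the dialogue, and the server replies.
class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0 picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client sends a request and waits for the server's response.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    reply = resp.read().decode()

server.shutdown()
print(reply)  # -> Hello from the server
```

Note that the client always speaks first; the server only ever answers. That asymmetry is the defining trait of the request-response model.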

As the resources and services provided by servers become more specialized, the types of servers available on the market have become more diverse. To list just a few examples, a cloud computing server may be loaded with powerful CPUs and GPUs to provide high performance computing (HPC) and heterogeneous computing services; a media server may store a vast library of audio and video content and stream them for a wider audience to enjoy; a game server may host multiplayer video games for thousands of players. Clients are usually connected to servers through the internet; however, a secure, private intranet can be used to provide exclusive services for a select group of clients.

Glossary:
What is Cloud Computing?
What is HPC?
What is Heterogeneous Computing?

Because servers are so vital to the modern IT infrastructure, they are often housed in server rooms or data centers. Usually, a complete array of peripheral subsystems is installed to support the servers, providing temperature control, fire suppression, and other functions. A lot of progress has been made in the design of data centers—especially in the field of cooling, which has witnessed the implementation of radical new methods such as liquid or immersion cooling. These innovative systems ensure servers can offer top-notch performance while maintaining stable operations.

Learn More:
How to Build Your Data Center with GIGABYTE? A Free Downloadable Tech Guide
GIGABYTE Tech Guide: How to Pick a Cooling Solution for Your Servers?
Glossary: What is Liquid Cooling?
Glossary: What is Immersion Cooling?
 
Another advancement in server technology is using software instead of hardware to act as a server. This is called a “virtual server” or “virtual machine”. It is an important part of the software-defined IT infrastructure known as a Hyper-Converged Infrastructure (HCI). The benefit of virtualization is that multiple servers can be hosted on a single computer, resulting in better cost-efficiency. A piece of software called a hypervisor is used to manage virtual servers running on the same machine, while a virtual switch is used to facilitate communication between virtual servers.

2. How Does a Server Work?
We’ve mentioned that the request-response model is the basis of the modern client-server IT architecture. Let’s go a little deeper into how it works, and why it’s different from other communication models.

In the request-response model, the client must initiate the dialogue by submitting a request for resources (such as files) or services (such as navigation to the nearest gas station) to the network. A server on the network that has been configured to listen for such requests will take up the task, so to speak, and find the data it needs to provide a satisfactory response. The faster the server can find the required data, the faster the client can get a response to their request.

Obviously, a number of criteria must be met for a server to field requests from a multitude of different devices. First and foremost, the client and server must speak the same computing “language”—that is, they must follow a common communications protocol. Second, there may be a verification process to make sure the client has permission to access the resources or services they are requesting. A scheduling system is usually utilized to prioritize requests when a server is tasked with more clients than it can handle simultaneously. To improve availability, a server may be designed to spend only a limited amount of time on each request. Finally, the kinds of requests a server can respond to are usually designated in its operating system; but an additional application, such as the popular Apache HTTP Server software for providing web browser services, may be installed on top of the operating system to expand the types of requests the server can handle.
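The scheduling idea above can be made concrete with a small sketch. Real servers use many different policies; the priority queue below is just one common, illustrative approach (the priority numbers and payloads are invented for the example).

```python
import heapq

# A sketch of request prioritization: when a server has more clients than it
# can handle at once, requests wait in a queue and the most urgent one is
# served first (here, a lower number means higher priority).
class RequestQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps FIFO order within one priority

    def submit(self, priority, payload):
        heapq.heappush(self._heap, (priority, self._counter, payload))
        self._counter += 1

    def next_request(self):
        return heapq.heappop(self._heap)[2]

q = RequestQueue()
q.submit(2, "stream movie")
q.submit(1, "health check")
q.submit(2, "load web page")
print(q.next_request())  # -> health check  (highest priority served first)
```

Requests of equal priority are served in arrival order, which is why the counter is part of the heap entry.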

Let’s use one of the most common examples to illustrate this process. Here’s what happens every time you browse the website of your favorite tech company, GIGABYTE, at www.gigabyte.com:
1. Your web browser initiates communication by making a request for the web page via HTTP, which is the network protocol for distributing web pages.
2. A web server at GIGABYTE accepts the request and compiles the data necessary to display the web page—including multimedia content, press releases, product pages, and interesting in-depth articles on the exclusive Insight platform.
3. The web server responds to your request by returning all the data to your web browser, which displays the GIGABYTE web page you asked to see.

In the standard request-response model, which is the foundation of the modern client-server architecture, a client device initiates communication by making a request on the network. A server picks up the request and provides the appropriate response, thus completing the dialogue.

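Under the hood, the three steps above are plain text exchanged over TCP. This sketch shows the raw HTTP/1.1 request a browser might send for the GIGABYTE home page (real browsers add many more headers) and how the status line of a sample response is read; the response body here is a stand-in, not real site content.

```python
# Step 1: the browser's request, as raw HTTP/1.1 text.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.gigabyte.com\r\n"
    "Connection: close\r\n"
    "\r\n"  # a blank line ends the request headers
)

# Steps 2-3: a minimal sample response, as the web server might return it.
response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"
header_block, _, body = response.partition("\r\n\r\n")
status_code = int(header_block.split("\r\n")[0].split()[1])
print(status_code, body)  # -> 200 <html>...</html>
```

Status code 200 means the server fulfilled the request; the browser then renders the body as the web page you asked to see.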
A powerful server supported by excellent optimization choices is capable of responding to requests much more quickly. For example, in the case of the web server, reducing the file size of the website’s multimedia content will help the web page load faster. Optimization improves the availability of services and reduces the risk of the server crashing due to too many incoming requests—as sometimes happens to ticket booking websites, when vacationers swarm online to purchase tickets for the holidays.

Before we go on to the next section, let’s take a quick look at other communication models besides the request-response model. The peer-to-peer model, which was popularized by file-sharing services, differs from the request-response model in that each device on the network is an equal participant that may both request and provide resources. In other words, each client is also a server, capable of responding to other clients’ requests with data or services. The one-way communication model used by message transfer agents, such as mail servers, is different in the sense that the client sends a message (e.g., an email) without waiting for a response. Therefore, the computing, memory, and storage requirements of such a server are not as exacting as those of a server that is expected to provide responses in a timely manner.

3. What are the Different Types of Servers?
Now that we have a basic understanding of what a server is and how it works, let’s look at the different types of servers we use in our daily lives. This is relevant to the next section, where we will talk about how server designs have evolved over time. The logic is simple: as with all things, necessity is the mother of invention, and the way we use servers has a profound influence on how we design them.

How do we use servers? Let us count the ways. Listed alphabetically, here are just ten types of servers we use regularly in the modern digital world:

- Application servers
Application servers host and run browser-based computer programs (“applications”) in lieu of clients installing and running a copy on their own devices. In this way, clients can benefit from a variety of programs, so long as they have a web browser and can connect to the network.

- Computing servers
Also referred to as cloud computing servers, these powerful machines offer far better processing power and memory capacity than any client device. Computing servers have become the backbone of the modern world of AI and IoT, as even the most mundane tasks can tap into a wellspring of computational prowess on the level of HPC.

Glossary:
What is Artificial Intelligence?
What is IoT?

- Database servers
Simply put, database servers store and maintain our vast sea of digital data. Not only do these servers possess an incredible amount of disk space; their data must also be readily accessible to multiple clients. Therefore, database management systems are usually built around a common computer language, such as SQL (Structured Query Language).
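The shared language in action can be sketched with a few queries. SQLite is an embedded engine rather than a network server, but the SQL statements below are the same kind a client would send to a full database server; the table and product rows are purely illustrative.

```python
import sqlite3

# A sketch of the common language clients and database servers share: SQL.
conn = sqlite3.connect(":memory:")  # in-memory database for the demo
conn.execute("CREATE TABLE servers (model TEXT, series TEXT)")
conn.executemany(
    "INSERT INTO servers VALUES (?, ?)",
    [("Model-A", "R-Series"), ("Model-B", "G-Series"), ("Model-C", "S-Series")],
)
# A client's request for data is just a SELECT statement.
rows = conn.execute(
    "SELECT model FROM servers WHERE series = ?", ("G-Series",)
).fetchall()
print(rows)  # -> [('Model-B',)]
```

Because every client speaks the same SQL, any number of applications can query the same database server without custom integration work.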

- DNS servers
Sometimes called a directory server or a name server, DNS (Domain Name System) servers provide a deceptively simple function: they “translate” the domain names used by humans—for example, a company name like “GIGABYTE”—to the IP addresses used by machines. In other words, clients do not need to store or memorize IP addresses to find the correct domain; the DNS servers handle the lookup for them.
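That translation is one function call away in most languages. The sketch below resolves "localhost", which is answered locally and deterministically; a real lookup such as `socket.gethostbyname("www.gigabyte.com")` would ask the DNS servers configured on your network, and the address returned can vary.

```python
import socket

# A sketch of the translation a DNS server performs:
# a human-readable name goes in, an IP address comes out.
ip_address = socket.gethostbyname("localhost")
print(ip_address)  # -> 127.0.0.1
```

Every URL you type triggers a lookup like this before your browser can even begin the request-response dialogue with the web server.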

- File servers
As the name suggests, file servers do not normally handle computing tasks. Instead, they focus on storing and distributing files. Faster read and write speeds are essential to ensure clients can upload and download files efficiently. Innovative storage technologies, such as NAS (Network-Attached Storage) and Software-Defined Storage, can also improve functionality.

Glossary:
What is NAS?
What is Software-Defined Storage?

- Game servers
Commonly referred to as “hosts”, game servers allow players to interact in a shared virtual online world.
 
- Mail servers
As previously stated, mail servers use a one-way communication model that’s more simplified than the request-response model. Every time you send an email, the mail server holds the letter for you until the recipient checks their inbox, at which point the letter is forwarded to them. Mail servers make it possible for us to receive emails without being connected to the network all the time.
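The hold-and-forward behavior described above can be sketched in a few lines. This is a toy model, not a real mail protocol such as SMTP or IMAP; the mailbox address is invented for the example.

```python
# A sketch of a mail server's store-and-forward behavior: the sender's
# message is held in the recipient's mailbox until the recipient connects
# and checks their inbox. No response to the sender is required.
class MailServer:
    def __init__(self):
        self.mailboxes = {}

    def deliver(self, recipient, message):
        # One-way: store the message and return immediately.
        self.mailboxes.setdefault(recipient, []).append(message)

    def check_inbox(self, recipient):
        # Forward the held mail when the recipient finally connects.
        return self.mailboxes.pop(recipient, [])

mail = MailServer()
mail.deliver("[email protected]", "Which server fits my HPC workload?")
print(mail.check_inbox("[email protected]"))
# -> ['Which server fits my HPC workload?']
```

Because the sender never waits for a reply, the recipient can be offline for days and still receive the message intact.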

- Media servers
An essential component of modern media streaming platforms, media servers store and stream digital video and audio content.

- Proxy servers
A proxy server serves as the intermediary between a client device and another server on the network. Both the client’s request and the second server’s response are transmitted through the proxy. This is usually done to enhance security; however, it can also boost performance by routing traffic more efficiently—a must in a large and complicated network.
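The proxy's position in the middle of the dialogue can be sketched as follows. Here each "server" is just a function and the request is a plain string; a real proxy would forward bytes over the network, and could inspect, cache, or re-route traffic at the marked point.

```python
# A sketch of a proxy between a client and an origin server: both the
# client's request and the origin server's response pass through it.
def origin_server(request):
    return f"response to {request!r}"

log = []  # what the proxy has seen pass through it

def proxy_server(request, upstream):
    # The proxy could inspect, filter, cache, or re-route here.
    log.append(request)
    return upstream(request)

reply = proxy_server("GET /press-release", origin_server)
print(reply)  # -> response to 'GET /press-release'
```

The client only ever talks to the proxy, which is what makes the pattern useful for security: the origin server's address never has to be exposed.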

- Web servers
This is one of the most common types of servers; chances are, the article you are reading right now was delivered to you by a web server. As illustrated in the previous section, a web server fulfills the client’s request for a web page by first compiling the data, and then sending the data to the client via HTTP. The client’s web browser uses the data to display the webpage. The entire World Wide Web as we know it was made possible by the creation of web servers.

If the list of different server types has left you feeling a bit disoriented, do not fret—the main takeaway is that servers are becoming more specialized in their functions, and new server types are constantly being invented to provide more advanced resources and services. In the next section, we will look at how server design has evolved, and why specialized rack-mounted servers are currently the predominant choice in the world of IT.


4. How Have Servers Changed Over Time?
The way we design servers has been an interesting case of “there and back again”. That is to say, right now we are at a phase where computational resources are becoming more centralized again, just like it was at the beginning. But it has not always been this way. Different kinds of servers have appeared and disappeared over time, mainly due to the technology that was available at the time.

Back in the sixties, when the modern IT architecture was just being established, the mainstream “server” product was the mainframe computer—hulking, heavy computers the size of large refrigerators. Mainframes had better computing power and reliability than just about anything else with a microchip. The client devices used to connect to mainframes were called “dumb terminals”: computers with so little processing power and so few features that they were little more than a monitor attached to a mouse and keyboard. All the computing was done on the mainframe, because that’s where all the computational resources were.

This began to change as manufacturing methods improved and Moore’s Law became the rule of thumb. Suddenly, it was possible to put more computing power on smaller chips at a lower price. In server history, this was the mass distribution phase, when basically any high-end personal computer could function as a server. Mainframes still existed—in fact, they exist to this day—but dumb terminals were no longer a necessity. If you owned a computer, you possessed enough processing power to handle just about any task you threw at it.

Mainframes used to house all the processing power, until advances in manufacturing made it possible for normal computers to operate independently. Now, technological advancements have caused computational resources to become centralized again. The difference is, they have taken the form of specialized servers, designed to provide resources or services for wirelessly connected client devices.

The mass distribution model was not without its flaws, of course. Upkeep of each individual computer was high, and a lot of processing power ended up being wasted on under-used client devices. However, processors were so cheap and powerful, this trend continued for quite some time. You may have heard it before, but it’s worth repeating: the computing power on your smartphone is leaps and bounds ahead of the NASA computers that helped astronauts land on the Moon. The mobile devices we mainly use for selfies could have been used as servers, not so long ago.

A number of changes have brought computational resources back to a centralized location. For one thing, advances in manufacturing methods cannot be expected to continue indefinitely. There are already signs Moore’s Law may soon cease to apply. This means computational tasks will eventually outpace the capability of client devices. The second reason is that improvements in information and communications technology, such as the advent of 5G communications, have made it possible for clients to connect seamlessly with powerful servers. Last but not least, the standardization of server designs means that machines owned by different companies can be housed together in a colocation (“colo”) center, such as a massive data center, to further reduce costs and enhance security. It makes sense, then, for servers to return to one central location—taking us back to where we started.

Learn More:
Glossary: What is 5G?

Let’s look at these standardized server designs. Currently, there are two mainstream approaches. The first is the rack-mounted server, designed to fit on server racks—tall, self-contained metal cabinets that can house multiple servers. The dimensions of these rack-mounted servers follow a specific unit of measurement, called a “rack unit” (abbreviated as “RU” or “U”). Servers mostly come in sizes ranging from 1U to 5U. This helps to make sure servers built by different manufacturers can fit on the same rack, making it easier for IT managers to keep all the computational resources in one centralized location.

Glossary:
What is Rack Unit?
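Rack-unit arithmetic is simple enough to sketch. One rack unit is 1.75 inches (44.45 mm) of rack height, and 42U is a common full-height rack; the server mix below is invented for the example.

```python
# A sketch of how an IT manager might check whether a mix of
# rack-mounted servers fits a standard 42U rack.
RACK_HEIGHT_U = 42

def rack_usage(counts_by_height):
    """counts_by_height maps a server's height in U to how many are installed."""
    used = sum(height * count for height, count in counts_by_height.items())
    return used, used <= RACK_HEIGHT_U

# Ten 1U servers, eight 2U servers, four 4U servers:
used, fits = rack_usage({1: 10, 2: 8, 4: 4})
print(used, fits)  # -> 42 True  (10*1 + 8*2 + 4*4 fills the rack exactly)
```

The standard unit is what lets servers from different manufacturers share a rack: as long as the heights add up to 42U or less, the hardware fits.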

The second approach is to take the concept of consolidating all the computational resources a step further and strip servers down to their minimal components. These are commonly known as blade servers: thin, vertical frames with the peripheral systems (such as power supply and cooling) removed to minimize the physical footprint. Blade servers are housed in a blade enclosure, which provides a common source of the supporting systems that were removed—power, cooling, etc. The blade enclosure itself may be built according to the standards of rack units, so it can also be mounted inside a server rack.

There are advantages and drawbacks to both approaches. Blade servers occupy the least amount of physical space, and operating costs are reduced to a minimum because peripheral systems are shared. However, they lack the independence and expandability of rack-mounted servers. Temperature control may also be an issue, because components are packed so tightly together. On the other hand, a rack-mounted server is far more versatile, with the flexibility to add more processing power, storage devices, or networking interfaces as required. The “architecture” or “layout” of a rack-mounted server can be designed from the start to serve a specific function, such as parallel computing, edge computing, or file storage. Needless to say, manufacturers also make every effort to put as much computational resources as possible inside the chassis; the ideal is to create an optimized high-density design that balances performance with stability.
5. GIGABYTE’s Rack-Mounted Server Solutions
GIGABYTE’s forte is specialized rack-mounted servers that represent the pinnacle of server technology. They are widely recognized for three key attributes that we may call the “3 S’s of Server Solutions”:
 
Specialization –
From the selection of individual components to the composition of said components inside the chassis, GIGABYTE servers are designed from the get-go to be the best at what they do. There is a wide range of products suitable for different tasks, including H-Series for high-density computing, G-Series for GPU-related computing, S-Series for storage, E-Series for edge computing, and R-Series for general use. If you have an application in mind but don’t know which servers are best for you, you can contact a GIGABYTE sales representative at [email protected] for consultation.

Scalability –
As client requests grow in number and complexity, servers have to keep up in terms of capacity and ability—that’s what it means to be scalable. Fortunately, GIGABYTE servers are endowed with the flexibility to scale up or out as needed. There’s room for adding more resources, such as processors, memory, and storage. There’s also the option of linking with other nodes or servers to form a server farm or computing cluster to accommodate the ever-increasing workload.

Glossary:
What is Scalability?
What is Server Farm?
What is Computing Cluster?
 
Standardization –
Even though GIGABYTE servers soar above the competition in terms of specialization and scalability, they are still built according to the standard measurements of rack units. This is to ensure clients can install our products in their server rooms or data centers with ease. GIGABYTE servers range in size from 1U to 5U, with the most common form factors being 1U, 2U, or 4U. Some Edge Servers in the E-Series have a reduced chassis depth of just 449mm compared to the standard 800 to 860mm, allowing them to be installed in tighter spaces while maintaining structural integrity and temperature control.

GIGABYTE servers boast these attributes due to the years of research and development that have been poured into the proprietary server designs. There are seven primary components inside a server: motherboard, processors, memory, storage, I/O ports, power supply, and temperature control. By selecting components of the highest quality and making adjustments based on our in-depth know-how and accumulated insight into real-life user scenarios, GIGABYTE is able to offer industry-leading server solutions for clients working in different industries.

Motherboard –
The motherboard is called the heart of the server, and rightly so. It is the printed circuit board (PCB) that connects all the electronic components together. The design of the motherboard determines what types of processors, memory, and hard drives can be fitted inside the server. It also provides expansion slots and I/O ports that can be utilized for scalability and networking purposes.

GIGABYTE builds our own server motherboards, which are available in various form factors and can provide a single CPU socket or dual sockets, depending on the processors used. The latest generation of the high-speed computer expansion bus standard, PCIe Gen 4.0, can transmit data between processors and other components in the blink of an eye.

The inside of every server looks different, but in this rough diagram, you can see the seven primary components of a rack-mounted server: motherboard, processors, memory, storage, I/O ports, power supply, and temperature control. GIGABYTE selects components of the highest quality and creates server solutions best suited for clients in different industries.

Processors –
If the server’s heart is the motherboard, then the processors must be the brain. There are two common types of processors—CPU (central processing unit) and GPU (graphics processing unit).

Every server needs a CPU to function. The mainstream options are Intel® Xeon® Scalable, AMD EPYC™, and the new Ampere® Altra® processors based on the ARM architecture. Clients usually select CPUs based on their core and thread count, scalability, compatibility with legacy equipment, power usage, etc. Virtual servers, which may be the next stage in the evolution of servers, also rely heavily on powerful CPUs capable of running multiple virtual machines simultaneously.

Learn More:
GIGABYTE’s Complete List of Intel® Xeon® Scalable Servers
GIGABYTE’s Complete List of AMD EPYC™ Servers
GIGABYTE’s Complete List of Ampere® Altra® Servers

GPUs are sometimes added to supplement CPUs, especially if the server routinely handles compute-intensive tasks. For instance, the development of AI through deep learning and machine learning can benefit greatly from powerful general-purpose GPUs (GPGPUs), such as NVIDIA® A100 Tensor Core GPUs. GIGABYTE’s G-Series GPU Servers are designed to support multiple GPU accelerators, making them the ideal products for workloads involving HPC, parallel computing, or heterogeneous computing.

Glossary:
What is Deep Learning?
What is Machine Learning?

Memory –
Our modern world is floating atop a veritable sea of data, but even the most advanced processors can only handle a chunk at a time. That data is temporarily kept in the server’s memory, more accurately called RAM (random access memory). DIMM slots are used to connect the memory modules to the motherboard. GIGABYTE servers use the latest RDIMM/LRDIMM DDR4 for better performance and lower power requirements.

Glossary:
What is DIMM?

Storage –
All of the server’s data is kept in its storage devices, which are usually hot-swappable hard drives or solid state drives. Many GIGABYTE servers support the groundbreaking NVMe interface, which can unlock the full potential of solid state drives by offering read/write speeds many times faster than the conventional SATA interface.

Glossary:
What is NVMe?

I/O Ports –
A server can be connected to other devices through its I/O (input/output) ports, which are located at the front and back ends of the server. The most important I/O port is the LAN port, which links the server to the local area network (LAN). On some of GIGABYTE’s most advanced server solutions, the data transfer rate is a blazing-fast 10Gb/s. On some GIGABYTE servers, the location of the I/O ports is adjusted to accommodate special user scenarios. For example, E-Series Edge Servers place I/O ports at the front of the chassis for easier access and maintenance in confined spaces.

Glossary:
What is LAN Port?
What is LAN?

Power Supply –
How much power a server uses depends on the requirements of its components. Redundant power supplies are usually installed to ensure availability. A voluntary certification program called 80 PLUS offers different levels of certification based on the energy efficiency of a server’s power supply.

Temperature Control –
Keeping the server’s temperature under control is key to providing excellent performance and stable operations. In a standard air-cooled server, an array of fans is installed inside the chassis to facilitate airflow. As mentioned earlier in this Tech Guide, innovative methods such as liquid cooling and even immersion cooling are catching on. The most suitable cooling solution ultimately depends on your energy consumption forecast, the availability of space and other resources, and the infrastructure of the facility where you house your servers.

We hope this Tech Guide has been able to explain what a server is, what role servers play in our modern age of digital information, and how GIGABYTE can help if you’re planning on purchasing your own. Thank you for reading, and please feel free to contact our sales representatives at [email protected] for consultation.