Linux Server Configuration

  1. Ubuntu
  2. Nginx
  3. UFW
  4. SSL

Server User Preparation

When accessing a server for the first time, login is usually performed via the root user. However, for security and operational sustainability, using the root user directly for daily operations is not recommended. Therefore, as a first step, a separate authorized user is created, and administrative privileges are delegated to this user.

Updating the Package List

Refreshes the package index so the server knows the latest versions of available software and where to download them, then upgrades the installed packages to those versions.

bash
sudo apt update
sudo apt upgrade -y

Creating a New User

After logging into the server as root, a new user is created with the following command. The goal is to isolate the root user and create a safer, more controlled workspace for system operations.

  • A password is set for the user
  • The home directory (/home/master) is automatically created
  • Basic user settings are defined
bash
sudo adduser master

Granting Sudo Privileges

The created user must be granted sudo privileges to perform administrative tasks on the system. This allows the user to run commands requiring root privileges in a controlled manner.

  • The user is added to the sudo group
  • System management commands become executable with sudo
  • The need for direct root login is eliminated
bash
usermod -aG sudo master
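
A quick way to confirm the change is to list the user's groups; sudo should appear in the output.

bash
groups master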

SSH Key Preparation on Local Machine

These operations are performed on your local computer's terminal. The goal is to ensure secure access to the server using SSH keys instead of passwords.

Generating an SSH Key

This command generates an SSH key to be used for connecting to the server. The generated key acts as a unique digital identity for the user and eliminates password usage.

  • A secure key is generated using the ED25519 algorithm
  • A custom name is given to the key using the -f parameter
  • Prevents overwriting default SSH keys
  • An optional passphrase can be defined for the key
bash
ssh-keygen -t ed25519 -C "example@gmail.com" -f ~/.ssh/special_vps_key

Deploying the Key to the Server

The generated public key is transferred to the server with this command. After this process, the server recognizes the key and allows SSH connections made with it.

  • The public key is added to the authorized_keys file on the server
  • Eliminates the need for SSH password login
  • Provides a more secure and faster connection method
bash
ssh-copy-id -i ~/.ssh/special_vps_key.pub master@111.111.111.111
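
Before any hardening is done, it is worth confirming that key-based login actually works; the -i flag points to the private key generated earlier.

bash
ssh -i ~/.ssh/special_vps_key master@111.111.111.111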

Editing the SSH Config File

This file is the configuration file used on the client side for SSH connections. Each Host entry contains connection settings for a specific server.

bash
nano ~/.ssh/config

Config File Settings

With this configuration, a short alias is assigned to the server. Now, you can connect simply by using the ssh myserver command in the terminal.

  • Host myserver is the short alias for the server
  • HostName specifies the IP address of the server
  • User defines the user to connect with
  • IdentityFile points to the path of the SSH key to be used
  • AddKeysToAgent yes ensures the key is added to the SSH agent
  • UseKeychain yes allows the passphrase to be stored in the macOS Keychain
bash
Host myserver
  HostName 111.111.111.111
  User master
  IdentityFile ~/.ssh/special_vps_key
  AddKeysToAgent yes
  UseKeychain yes

Hardening Server Access

This stage is the most critical step for server security. These actions are performed inside the server after logging in with the SSH key. Steps must be followed carefully as errors can completely block server access.

Editing SSH Settings

This file is the main configuration for the SSH service, which is the server's gateway to the outside world. Changes here determine who can connect and by which methods. Modify the following settings to neutralize brute-force attacks.

  • PermitRootLogin no completely prevents the root user from logging in directly via SSH.
  • PasswordAuthentication no disables password-based SSH login. Only devices with authorized SSH keys can now access the server.
bash
sudo nano /etc/ssh/sshd_config
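
Once edited, the two directives should read as in the comments below; a quick check with sshd's test mode catches syntax errors before the service is restarted.

bash
# In /etc/ssh/sshd_config:
#   PermitRootLogin no
#   PasswordAuthentication no
# Validate the configuration before restarting the service
sudo sshd -t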

Restarting the SSH Service

The SSH service is restarted to apply the changes. After this step, the server will only accept connections via SSH keys.

  • Direct root login over SSH is no longer possible
  • Password login attempts are blocked
  • The server's attack surface is minimized
bash
sudo systemctl restart ssh
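
Before closing your current session, it is sensible to confirm the service restarted cleanly and, from a second terminal, that key-based login still works.

bash
sudo systemctl status ssh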

Firewall (UFW) Setup

In this stage, the server's network ports are brought under control. UFW (Uncomplicated Firewall) ensures only permitted services can access the server and blocks all unauthorized connections.

Opening the SSH Port

The SSH port must be open to maintain remote access. This rule must be added before enabling the firewall; otherwise, access to the server may be lost entirely.

bash
sudo ufw allow ssh
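
As an optional alternative, UFW can rate-limit SSH instead of allowing it unconditionally; repeated connection attempts from the same address are then temporarily blocked.

bash
sudo ufw limit ssh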

Opening Web Ports (HTTP / HTTPS)

These rules open ports 80 (HTTP) and 443 (HTTPS) so websites can be served over them. These ports must be open for visitors to access the site.

bash
sudo ufw allow http
sudo ufw allow https

Enabling the Firewall

All defined rules take effect, and the firewall is activated. From this point on, the server will only respond to connections from allowed ports.

bash
sudo ufw enable

Checking Firewall Status

This command checks if the firewall is active and which ports are open. An Active status indicates successful configuration.

  • All ports except SSH, HTTP, and HTTPS are closed
  • Unauthorized connection attempts are blocked
  • The server operates with a minimum attack surface
bash
sudo ufw status
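
The verbose variant also shows the default policies and the logging level.

bash
sudo ufw status verbose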

Nginx Installation and System Refresh

In this stage, Nginx, the web server, is installed. The package list is updated before installation, Nginx is installed, and the system is rebooted if necessary.

Installing Nginx

Installs Nginx, one of the fastest web servers available. The -y flag automatically answers "yes" to confirmation prompts.

bash
sudo apt install nginx -y

Rebooting the System

Necessary for the server to boot with the latest kernel, especially when the upgrade reports a pending kernel update. Your connection will drop; wait about 1 minute and reconnect using ssh myserver.

bash
sudo reboot
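
Whether a reboot is actually needed can be checked first; Ubuntu creates a marker file when a restart is required.

bash
# If this file exists, a reboot is pending
[ -f /var/run/reboot-required ] && cat /var/run/reboot-required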

Verifying Nginx Status

Confirms Nginx is active (running) without errors. If successful, entering the server's IP in a browser will display the default Nginx landing page.

bash
sudo systemctl status nginx
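
As an extra check from inside the server, the default page can be requested locally; a 200 OK response confirms Nginx is serving.

bash
curl -I http://localhost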

Nginx Default Cleanup

In this stage, default configurations and sample files provided with Nginx are removed. The goal is to ensure only our defined rules and projects run on the server.

Deleting Default Site Configurations

Removes the default site configuration from both the sites-available and sites-enabled directories. This prevents conflicts between the server and old or unnecessary rules.

bash
sudo rm /etc/nginx/sites-available/default
sudo rm /etc/nginx/sites-enabled/default

Deleting the Default Web Directory

Completely deletes the web directory containing the Welcome to nginx page. No default content will be automatically published on the server anymore.

bash
sudo rm -rf /var/www/html

Restarting Nginx

Restarts the service to apply the cleanup. Nginx will now wait for new site configurations defined by us.

bash
sudo systemctl restart nginx

Project Structure and Permissions

A centralized, organized, and secure workspace is created for all Frontend and Backend projects. The goal is to separate projects from system files, prevent permission confusion, and build a sustainable server structure.

Creating the Main Project Directory

The main folder for all projects is created under the standard /var/www directory; the -p flag also creates any missing parent directories. This will be the central workspace for all apps.

bash
sudo mkdir -p /var/www/apps

Transferring Directory Ownership

Ownership of the project directory is transferred to the master user. This eliminates the need for sudo during npm install, git clone, or file editing.

bash
sudo chown -R master:master /var/www/apps
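
Ownership can be verified with a long listing; the directory should now be owned by master:master.

bash
ls -ld /var/www/apps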

Navigating to the Workspace

All future projects will be located in their own subfolders under this directory. All development and deployment operations are carried out within this workspace.

bash
cd /var/www/apps

Git Configuration and Identity

This ensures that all code changes on the server are clearly attributed to the correct user. Git configuration is critical for clean and professional logs in GitHub/GitLab integrations.

Verifying Git Installation

Checks if Git is installed on the system. It usually comes preinstalled on Ubuntu 24.04, but verifying is best practice.

bash
git --version

Setting the Username

Ensures all commit operations performed on the server are signed with your name.

bash
git config --global user.name "Sezer Gec"

Setting the Email Address

Sets the official email address to appear in Git logs. This should match your GitHub/GitLab profile.

bash
git config --global user.email "example@gmail.com"
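
Both values can be confirmed at once by listing the global Git configuration.

bash
git config --global --list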

Node.js (NVM) and PM2 Installation

NVM is installed to manage Node.js versions professionally. Then, Node.js v24 is installed along with PM2 to ensure applications run stably in the background.

Installing NVM (Node Version Manager)

The cleanest way to manage Node.js versions. The command uses the install script published by the official NVM project.

bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash

Activating NVM

Makes NVM commands available in the current session without needing to restart the terminal.

bash
. "$HOME/.nvm/nvm.sh"

Installing Node.js v24

The current major version, Node.js v24.x, is installed along with the bundled npm v11.x.

bash
nvm install 24
node -v
npm -v
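
If additional Node.js versions are installed later, v24 can be pinned as the default for new shells; NVM usually sets this automatically for the first version installed.

bash
nvm alias default 24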

Installing PM2

A process manager that handles Node.js apps in the background with auto-restart and logging features. Sudo is not required since NVM is used.

bash
npm install -g pm2

PM2 Startup Configuration

Generates the command needed for PM2 processes to start automatically when the server reboots. After running it, execute the line starting with sudo env PATH=... that PM2 prints in the terminal.

bash
pm2 startup

Running Node.js Apps with PM2

This command runs your application under PM2 via npm start, names it example-project, and manages its background execution. pm2 save records current processes to ensure they restart after a reboot.

bash
pm2 start npm --name "example-project" -- start
pm2 save
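
To check that the process is running and inspect its output, PM2 provides status and log commands; example-project is the name chosen above.

bash
pm2 status
pm2 logs example-project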

Nginx Reverse Proxy Host Creation

A dedicated host (server block) is defined in Nginx for the application. The goal is to route external HTTP requests through Nginx to the backend port securely. This is a fundamental requirement for production environments.

Security: Catch-All Host

In the sites-available directory, a file is created to silently drop unrecognized domain requests. This increases security by catching requests from unknown or malicious domains.

bash
sudo nano /etc/nginx/sites-available/000-catch-all

Reverse Proxy and Default Behavior

With this configuration, Nginx acts as the default server and silently closes HTTP (Port 80) requests made directly to the IP address or to unrecognized domains. Your application will only be accessible through its own domain.

nginx
server {
    listen 80 default_server;
    server_name _;
    return 444;
}

Activating the Host via Symbolic Link

The configuration file is linked to the sites-enabled directory. Nginx only runs files in this directory. This approach allows for quick enabling/disabling without deleting configurations.

bash
sudo ln -s /etc/nginx/sites-available/000-catch-all /etc/nginx/sites-enabled/

Creating a New Nginx Host Config File

A project-specific file is created in sites-available. This is the primary center for defining all behaviors for the respective domain or service.

bash
sudo nano /etc/nginx/sites-available/example-site

Configuring Reverse Proxy Settings

This configuration directs external HTTP (Port 80) requests to the Node.js application (Port 3000) running in the background.

nginx
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://localhost:3000;
    }
}

Activating the Project Host

Links the configuration to sites-enabled to make the site live.

bash
sudo ln -s /etc/nginx/sites-available/example-site /etc/nginx/sites-enabled/

Creating a Global Config Structure

This setup includes global proxy and security settings used across all apps. A central config file is prepared and can be included in all Nginx hosts, ensuring proxy headers and security headers are applied consistently.

bash
sudo mkdir -p /etc/nginx/config
sudo nano /etc/nginx/config/proxy_settings.conf

Defining Global Proxy Settings

The proxy_settings.conf file contains all global proxy and security headers. This keeps forwarding behavior consistent across apps and applies the necessary security headers from a single place.

nginx
proxy_http_version 1.1;

# Header settings
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Origin $http_origin;
proxy_set_header Content-Type $content_type;

# Cache and security
proxy_cache_bypass $http_upgrade;
proxy_redirect off;

# Additional security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;

Testing Configuration and Restarting Nginx

The nginx -t command checks for syntax errors. If successful, Nginx is restarted to apply new settings.

bash
sudo nginx -t
sudo systemctl restart nginx
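
With the catch-all host from earlier now active, you can optionally verify from your local machine that requests sent directly to the server's IP are dropped; curl should report an empty reply rather than a page.

bash
# Run on your local machine; Nginx closes the connection without a response (444)
curl -I http://111.111.111.111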

Free SSL (HTTPS) Setup with Certbot

API services are secured over HTTPS using free SSL certificates from Let’s Encrypt. Certbot automatically detects Nginx configurations and manages background renewal. HTTPS is considered mandatory for mobile apps and production environments.

Installing Certbot

Installs the Certbot tool and Nginx integration package. This eliminates manual certificate and configuration tasks.

bash
sudo apt update
sudo apt install certbot python3-certbot-nginx -y

Obtaining an SSL Certificate

This command generates an SSL certificate via Let’s Encrypt using the Nginx plugin installed above and saves it under /etc/letsencrypt/live/example.com/. It only obtains the certificate; the Nginx host configuration is edited manually in the next step.

bash
sudo certbot certonly --nginx -d example.com
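
To confirm the certificate was issued and check its expiry date, Certbot can list the certificates it manages.

bash
sudo certbot certificates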

Editing the Nginx Host File

This configuration runs your app securely over HTTPS, redirects HTTP requests to HTTPS, and manages SSL certificates alongside proxy settings.

nginx
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:3000;
        include /etc/nginx/config/proxy_settings.conf;
    }
}
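
After saving the host file, re-test the configuration and reload Nginx so the HTTPS server block takes effect.

bash
sudo nginx -t
sudo systemctl reload nginx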
    

SSL Renewal Test

Tests the automatic renewal mechanism with a dry run. It doesn't perform a real renewal but ensures everything is set up correctly for when the certificate expires.

bash
sudo certbot renew --dry-run

Verifying Domain Configuration

Used to locate where the domain is referenced within Nginx or Let’s Encrypt directories, helpful for troubleshooting or identifying old certificates.

bash
sudo grep -R "example.com" /etc/nginx /etc/letsencrypt -n

Deleting a Certificate (Optional)

Completely removes a certificate and reloads Nginx to clean up the configuration.

bash
sudo certbot delete --cert-name example.com --non-interactive
sudo nginx -t
sudo systemctl reload nginx