When accessing a server for the first time, login is usually performed via the root user. However, for security and operational sustainability, using the root user directly for daily operations is not recommended. Therefore, as a first step, a separate authorized user is created, and administrative privileges are delegated to this user.
The package lists are refreshed and installed packages are upgraded. This ensures the server knows the most up-to-date version and source of each piece of software to be installed.
sudo apt update
sudo apt upgrade -y

After logging into the server as root, a new user is created with the following command. The goal is to isolate the root user and create a safer, more controlled workspace for system operations.
sudo adduser master

The created user must be granted sudo privileges to perform administrative tasks on the system. This allows the user to run commands requiring root privileges in a controlled manner.
sudo usermod -aG sudo master

These operations are performed on your local computer's terminal. The goal is to ensure secure access to the server using SSH keys instead of passwords.
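Before switching to your local machine, it is worth confirming, while still logged in on the server, that master really is in the sudo group. A minimal sketch; the in_group helper is a name made up for this example, not a system command:

```shell
# Hypothetical helper: check whether a user belongs to a group.
in_group() {
  # id -nG lists all group names for the user; match the group name exactly.
  id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

# On the server, this should confirm the delegation performed above.
if in_group master sudo; then
  echo "master is in the sudo group"
fi
```

Alternatively, `groups master` prints the same membership list directly.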
This command generates an SSH key to be used for connecting to the server. The generated key acts as a unique digital identity for the user and eliminates password usage.
The -f parameter specifies the file path where the key pair will be saved.

ssh-keygen -t ed25519 -C "example@gmail.com" -f ~/.ssh/special_vps_key

The generated public key is transferred to the server with this command. After this process, the server recognizes the key and allows SSH connections made with it.
The key is appended to the authorized_keys file on the server.

ssh-copy-id -i ~/.ssh/special_vps_key.pub master@111.111.111.111

This file is the configuration file used on the client side for SSH connections. Each Host entry contains connection settings for a specific server.
nano ~/.ssh/config

With this configuration, a short alias is assigned to the server. Now, you can connect simply by using the ssh myserver command in the terminal.
Host myserver: the short alias for the server
HostName: specifies the IP address of the server
User: defines the user to connect with
IdentityFile: points to the path of the SSH key to be used
AddKeysToAgent yes: ensures the key is added to the SSH agent
UseKeychain yes: allows the passphrase to be stored in the macOS Keychain

Host myserver
    HostName 111.111.111.111
    User master
    IdentityFile ~/.ssh/special_vps_key
    AddKeysToAgent yes
    UseKeychain yes

This stage is the most critical step for server security. These actions are performed inside the server after logging in with the SSH key. Steps must be followed carefully, as errors can completely block server access.
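Before editing sshd_config, it is prudent to verify that key-based login actually works: once password authentication is disabled, a broken key setup means lockout. A sketch of the check, keeping the current root session open the whole time:

```shell
# From your local machine, in a second terminal:
ssh myserver        # should log in as master without a password prompt
exit

# On the server, after editing sshd_config and before restarting the service:
sudo sshd -t        # validates the configuration; prints nothing on success
```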
This file is the main configuration for the SSH service, which is the server's gateway to the outside world. Changes here determine who can connect and by which methods. Modify the following settings to neutralize brute-force attacks.
sudo nano /etc/ssh/sshd_config

PermitRootLogin no: completely prevents the root user from logging in directly via SSH.
PasswordAuthentication no: disables password-based SSH login. Only devices with authorized SSH keys can now access the server.

The SSH service is restarted to apply the changes. After this step, the server will only accept connections via SSH keys.
sudo systemctl restart ssh

In this stage, the server's network ports are brought under control. UFW (Uncomplicated Firewall) ensures only permitted services can access the server and blocks all unauthorized connections.
The SSH port must be open to maintain remote access. This rule must be added before enabling the firewall; otherwise, access to the server may be lost entirely.
sudo ufw allow ssh

These rules allow websites to be served via ports 80 (HTTP) and 443 (HTTPS). These ports must be open for visitors to access the site.
sudo ufw allow http
sudo ufw allow https

All defined rules take effect, and the firewall is activated. From this point on, the server will only respond to connections from allowed ports.
sudo ufw enable

This command checks if the firewall is active and which ports are open. An Active status indicates successful configuration.
sudo ufw status

In this stage, Nginx, the web server software, is installed. The package list is updated before installation, Nginx is installed, and the system is rebooted if necessary.
Installs Nginx, one of the world's fastest web servers. The -y flag automatically answers "yes" to confirmation prompts.
sudo apt install nginx -y

A reboot is necessary for the server to start with the latest kernel, especially after critical updates flagged as a pending kernel upgrade. Your connection will drop; wait about 1 minute and reconnect using ssh myserver.
sudo reboot

Confirms Nginx is active (running) without errors. If successful, entering the server's IP in a browser will display the default Nginx landing page.
sudo systemctl status nginx

In this stage, default configurations and sample files provided with Nginx are removed. The goal is to ensure only our defined rules and projects run on the server.
Removes the default site settings from both the sites-available and sites-enabled directories. This prevents conflicts between the server and old or unnecessary rules.
sudo rm /etc/nginx/sites-available/default
sudo rm /etc/nginx/sites-enabled/default

Completely deletes the web directory containing the Welcome to nginx page. No default content will be automatically published on the server anymore.
sudo rm -rf /var/www/html

Restarts the service to apply the cleanup. Nginx will now wait for new site configurations defined by us.
sudo systemctl restart nginx

A centralized, organized, and secure workspace is created for all Frontend and Backend projects. The goal is to separate projects from system files, prevent permission confusion, and build a sustainable server structure.
The main folder for all projects is created hierarchically under the secure /var/www directory. This will be the central workspace for all apps.
sudo mkdir -p /var/www/apps

Ownership of the project directory is transferred to the master user. This eliminates the need for sudo during npm install, git clone, or file editing.
sudo chown -R master:master /var/www/apps

All future projects will be located in their own subfolders under this directory. All development and deployment operations are carried out within this workspace.
cd /var/www/apps

This ensures that all code changes on the server are clearly attributed to the correct user. Git configuration is critical for clean and professional logs in GitHub/GitLab integrations.
Checks if Git is installed on the system. It usually comes preinstalled on Ubuntu 24.04, but verifying is best practice.
git --version

Ensures all commit operations performed on the server are signed with your name.
git config --global user.name "Sezer Gec"

Sets the official email address to appear in Git logs. This should match your GitHub/GitLab profile.
git config --global user.email "example@gmail.com"

NVM is installed to manage Node.js versions professionally. Then, Node.js v24 is installed along with PM2 to ensure applications run stably in the background.
NVM is the most reliable way to manage Node.js versions. The command uses the install script recommended by the official NVM project.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash

Makes NVM commands available in the current session without needing to restart the terminal.
. "$HOME/.nvm/nvm.sh"

This installs the current major version of Node.js (v24.x) along with the bundled npm (v11.x); the version commands confirm the installation.
nvm install 24
node -v
npm -v

PM2 is a process manager that handles Node.js apps in the background with auto-restart and logging features. Sudo is not required since NVM is used.
npm install -g pm2

Defines necessary permissions for PM2 processes to start automatically upon server reboot. After the command, run the line starting with sudo env PATH=... provided by the terminal.
pm2 startup

This command runs your application under PM2 via npm start, names it example-project, and manages its background execution. pm2 save records the current process list so the apps restart after a reboot.
pm2 start npm --name "example-project" -- start
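Once the app is started, PM2 can inspect it before you save the process list. A few common commands; the process name matches the example-project used above:

```shell
pm2 status                    # list managed processes and their state
pm2 logs example-project      # tail stdout/stderr for a single app
pm2 restart example-project   # restart after deploying new code
```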
pm2 save

A dedicated host (server block) is defined in Nginx for the application. The goal is to route external HTTP requests through Nginx to the backend port securely. This is a fundamental requirement for production environments.
In the sites-available directory, a file is created to silently drop unrecognized domain requests. This increases security by catching requests from unknown or malicious domains.
sudo nano /etc/nginx/sites-available/000-catch-all

With this configuration, Nginx acts as the default server and silently closes HTTP (Port 80) requests from IPs or unrecognized domains. Your application will only be accessible through its own domain.
server {
    listen 80 default_server;
    server_name _;
    return 444;
}

The configuration file is linked to the sites-enabled directory. Nginx only runs files in this directory. This approach allows for quick enabling/disabling without deleting configurations.
sudo ln -s /etc/nginx/sites-available/000-catch-all /etc/nginx/sites-enabled/

A project-specific file is created in sites-available. This is the primary center for defining all behaviors for the respective domain or service.
sudo nano /etc/nginx/sites-available/example-site

This configuration directs external HTTP (Port 80) requests to the Node.js application (Port 3000) running in the background.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:3000;
    }
}

Links the configuration to sites-enabled to make the site live.
sudo ln -s /etc/nginx/sites-available/example-site /etc/nginx/sites-enabled/

This setup includes global proxy and security settings used across all apps. A central config file is prepared and can be included in all Nginx hosts, ensuring proxy behavior and security headers are automatically applied.
sudo mkdir -p /etc/nginx/config
sudo nano /etc/nginx/config/proxy_settings.conf

The proxy_settings.conf file contains all global proxy and security headers. This ensures your app runs securely, handles proxied traffic consistently, and applies the necessary security headers.
proxy_http_version 1.1;
# Header settings
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Origin $http_origin;
proxy_set_header Content-Type $content_type;
# Cache and security
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
# Additional security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;

The nginx -t command checks for syntax errors. If successful, Nginx is restarted to apply new settings.
sudo nginx -t
sudo systemctl restart nginx

API services are secured over HTTPS using free SSL certificates from Let’s Encrypt. Certbot automatically detects Nginx configurations and manages background renewal. HTTPS is considered mandatory for mobile apps and production environments.
Installs the Certbot tool and Nginx integration package. This eliminates manual certificate and configuration tasks.
sudo apt update
sudo apt install certbot python3-certbot-nginx -y

This command generates an SSL certificate via Let’s Encrypt and saves it under /etc/letsencrypt/live/example.com/. It does not modify Nginx or deploy automatically.
sudo certbot certonly --nginx -d example.com

The --nginx plugin handles the domain-validation challenge automatically, while certonly leaves the Nginx configuration untouched. The following configuration runs your app securely over HTTPS, redirects HTTP requests to HTTPS, and manages SSL certificates alongside proxy settings.
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:3000;
        include /etc/nginx/config/proxy_settings.conf;
    }
}
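Once this configuration is live, the redirect and the security headers can be smoke-tested from your local machine. A sketch; example.com stands in for your real domain:

```shell
curl -I http://example.com    # should return a 301 redirect pointing to https://
curl -I https://example.com   # should return the app's response plus the security headers
```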
Tests the automatic renewal mechanism with a dry run. It doesn't perform a real renewal but ensures everything is set up correctly for when the certificate expires.
sudo certbot renew --dry-run

Used to locate where the domain is referenced within Nginx or Let’s Encrypt directories, helpful for troubleshooting or identifying old certificates.
sudo grep -R "example.com" /etc/nginx /etc/letsencrypt -n

Completely removes a certificate and reloads Nginx to clean up the configuration.
sudo certbot delete --cert-name example.com --non-interactive
sudo nginx -t
sudo systemctl reload nginx