Building a Server on Digital Ocean With Ubuntu and Nginx

Here are various notes I've made for creating a server. These rules are for Ubuntu running Nginx.


  1. Create Droplet (sudo isn't necessary for steps 1-6, as you're root, but it's a good habit to get into.)

  2. SSH into server - ssh root@{ip-address}

  3. sudo apt-get update; sudo apt-get upgrade -y

  4. sudo apt-get install fail2ban -y

  5. sudo adduser {username}

    • fill in user info.
  6. Add user to sudo group (two ways to do this, but the second one doesn't scale as well)

    • first way: sudo usermod -aG sudo {username}
    • second way: sudo visudo
      • under the line root ALL=(ALL:ALL) ALL, add: {username} ALL=(ALL:ALL) ALL
  7. exit server

  8. create new ssh key
    ssh-keygen -t rsa -b 4096

    • FWIW, RSA can be cracked by a quantum computer, if you worry about that ish.
  9. Copy key to server.

    • ssh-copy-id -i ~/.ssh/{name of key}.pub {username}@{ip-address}
  10. SSH into your server (not root)

  11. sudo (nano | vim | vi) /etc/ssh/sshd_config

    • change ssh port from 22 (this will just keep the logs a little cleaner)
    • find PermitRootLogin and change it to PermitRootLogin no
    • find PasswordAuthentication and change it to PasswordAuthentication no
    • exit/save file and run: sudo service ssh restart
  12. Add firewall rules - example rules below

    • sudo ufw allow 80 - http
    • sudo ufw allow 443 - https
    • sudo ufw limit ssh - rate limit SSH
    • sudo ufw allow {your new ssh port}/tcp - SSH (use 22 if you kept the default port)
    • sudo ufw allow from {subnet} to any port 5432 - Postgres from a remote server; change {subnet} to that server's network
    • sudo ufw enable
  13. Set up the before rules (this will ghost the server - ping packets will be dropped)

    • sudo (nano | vim | vi) /etc/ufw/before.rules
    • DROP everything related to all ICMP/Pinging - I believe there are 8-10 of these in total

      • these are usually the fourth and fifth blocks.

      • these two blocks have ICMP mentioned in the comments above them.

    • exit/save file and run: sudo ufw reload
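For reference, on a stock install the input ICMP rules in before.rules look roughly like this (rule names can vary slightly by ufw version); changing ACCEPT to DROP on each is what ghosts the server:

```
# ok icmp codes (in /etc/ufw/before.rules) - change ACCEPT to DROP on each
-A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT
-A ufw-before-input -p icmp --icmp-type time-exceeded -j ACCEPT
-A ufw-before-input -p icmp --icmp-type parameter-problem -j ACCEPT
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT
```

There's a similar ufw-before-forward block just below it.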

Add SSL to site - Let's Encrypt

  1. sudo apt-get install letsencrypt

  2. sudo letsencrypt certonly --standalone --rsa-key-size 4096 --force-renew -d {yourdomain.com} -d {www.yourdomain.com}

    • nginx needs to be stopped first: sudo service nginx stop

  3. sudo crontab -e

    • may need to choose editor
  4. at the bottom of the file add: 30 2 * * 1 service nginx stop && /usr/bin/letsencrypt renew --rsa-key-size 4096 >> /home/{username}/le-renew.log && service nginx start

  5. create DH key:

    • sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 3072
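For reference, a crontab entry is the five schedule fields first, then the command, so the weekly renewal entry breaks down like this ({username} is your user):

```
# ┌ minute (30)
# │  ┌ hour (2, i.e. 2:30 AM)
# │  │ ┌ day of month (any)
# │  │ │ ┌ month (any)
# │  │ │ │ ┌ day of week (1 = Monday)
30 2 * * 1 service nginx stop && /usr/bin/letsencrypt renew --rsa-key-size 4096 >> /home/{username}/le-renew.log && service nginx start
```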


I may upload the actual nginx file later, but for now, I'll just add the necessary parts.

 add_header Vary Upgrade-Insecure-Requests;
 add_header X-Content-Type-Options nosniff;
 add_header X-XSS-Protection "1; mode=block";
 add_header Content-Security-Policy "default-src https: data: 'unsafe-inline' 'unsafe-eval'";
 add_header X-Frame-Options "SAMEORIGIN";
 server_tokens off;

 ssl on;
 ssl_certificate /etc/letsencrypt/live/{your site}/fullchain.pem;
 ssl_certificate_key /etc/letsencrypt/live/{your site}/privkey.pem;
 ssl_dhparam /etc/ssl/certs/dhparam.pem;

 # As of writing this, TLSv1.3 hadn't been released for Nginx; this has most likely changed.
 #ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
 ssl_protocols TLSv1.1 TLSv1.2;


 ssl_prefer_server_ciphers on;
 ssl_ecdh_curve secp384r1;
 ssl_session_cache shared:SSL:10m;
 ssl_session_tickets off;
 add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
 ssl_session_timeout 2h;
 ssl_stapling on;
 ssl_stapling_verify on;

exit, test the config, and restart nginx - sudo nginx -t && sudo service nginx restart

Double check everything with an online SSL test.

This is honestly a bit of a lengthy process. However, Justin Ellingwood has written a great piece over at DO on it.

Configure Postfix

To change the domain Postfix sends from (by default it uses the droplet's local hostname), I found the best way is to change the hostname for Ubuntu:

  1. hostname - prints the current hostname

  2. sudo (nano | vim | vi) /etc/hostname and change it to {your new hostname}

  3. You will have to exit your droplet and ssh back into it.

Your hostname will have changed
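Once you're back in, you can confirm the change; the hostnamectl line is an alternative I haven't covered above (assumes a systemd-based Ubuntu):

```shell
# Prints the current hostname; should show the new value after re-connecting
hostname

# Alternative to editing /etc/hostname on systemd-based Ubuntu ({your new hostname} is a placeholder):
# sudo hostnamectl set-hostname {your new hostname}
```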

* Note With the above you will receive an email with your super user account. This isn't ideal. However, you can set up postfix to use an email relay to send stuff from Google instead, see the Linode link at the bottom for instructions on how to do so.

Get email notifications when a user/root logs in

  1. This assumes a mail client has been installed - in the case above, that's Postfix.

  2. Let's start with root

    • sudo su

    • vi /root/.bashrc

    • At the bottom of the file add: echo 'ALERT - Root Shell Access (ServerName) on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d'(' -f2 | cut -d')' -f1`" {your email}

  3. Your user

    • vi /home/${username}/.bashrc

    • At the bottom of the file add: echo 'ALERT - Root Shell Access (ServerName) on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d'(' -f2 | cut -d')' -f1`" {your email}
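To see what ends up in that subject line, here's the same cut pipeline run on a made-up who line (the name and IP are fake):

```shell
# A sample of the line `who` prints for an SSH session: user, tty, time, then the client IP in parens
sample="jack     pts/0        2024-01-01 12:00 (203.0.113.7)"

# cut -d'(' -f2 keeps everything after the open paren; cut -d')' -f1 trims the close paren
ip=$(echo "$sample" | cut -d'(' -f2 | cut -d')' -f1)
echo "$ip"    # prints 203.0.113.7
```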


Enable Automatic Updates

  1. sudo apt-get install unattended-upgrades
  2. sudo vi /etc/apt/apt.conf.d/10periodic
  3. Update the following lines to resemble below:
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Download-Upgradeable-Packages "1";
    APT::Periodic::AutocleanInterval "7";
    APT::Periodic::Unattended-Upgrade "1";
  4. sudo vi /etc/apt/apt.conf.d/50unattended-upgrades
  5. Update the file to resemble below - currently it's just set to do security updates, but uncommenting the second line will include package updates.
    Unattended-Upgrade::Allowed-Origins {
        "Ubuntu lucid-security";
     // "Ubuntu lucid-updates";
    };

Create an SSH config file - OSX

touch ~/.ssh/config

(nano | vim | vi) ~/.ssh/config - no sudo needed, it's your own file

Host {name} - ex. Personal

HostName {your ip address}

Port {your ssh port} - ex. 22

User {your username} - ex. root (don't use root!)

IdentityFile ~/.ssh/example.key - the private key for the server


Host Personal  
HostName {ip address}  
Port 2222  
User Jack  
IdentityFile ~/.ssh/personal  

You can now connect with: ssh Personal
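You can check how ssh will resolve an alias without actually connecting by using ssh -G; here it's pointed at a throwaway config file (the host details are made up):

```shell
# Write a demo config file (normally this would be ~/.ssh/config)
cat > ./demo_ssh_config <<'EOF'
Host Personal
    HostName 203.0.113.7
    Port 2222
    User Jack
EOF

# -G prints the resolved configuration for a host without connecting; -F picks the config file
ssh -G -F ./demo_ssh_config Personal | grep -E '^(hostname|port|user) '
```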


Create a swap file

  • Note Swap files can be harmful to older SSDs...

  • sudo fallocate -l 4G /swapfile

  • ls -lh /swapfile
  • sudo chmod 600 /swapfile
  • ls -lh /swapfile
  • sudo mkswap /swapfile
  • sudo swapon /swapfile
  • sudo nano /etc/fstab
  • Add to the bottom of the file:
    - /swapfile none swap sw 0 0
  • sudo sysctl vm.swappiness=10
  • sudo sysctl vm.vfs_cache_pressure=50
  • sudo nano /etc/sysctl.conf
  • Add to the bottom:
    • vm.swappiness=10
    • vm.vfs_cache_pressure = 50

Increase PHP and Nginx memory sizes:

php5 location = /etc/php5/fpm/php.ini

  1. sudo (nano | vim | vi) /etc/php/7.0/fpm/php.ini
  2. Update the following lines to resemble below:
    upload_max_filesize = 50M
    post_max_size = 50M
    max_execution_time = 120
    max_input_time = 120
    memory_limit = 64M
  3. sudo (nano | vim | vi) /etc/nginx/nginx.conf
  4. Update client_max_body_size to: client_max_body_size 100M;

Remove Nginx headers

  1. add server_tokens off; to /etc/nginx/sites-available/default
    • this will only remove the version

*I haven't tested this, but if you want to change the Server header in the response:

  1. sudo apt-get install nginx-extras
  2. add to your config: more_set_headers 'Server: some server name';

Test it: curl -I ${website}

Remove PHP headers

  1. Open php.ini

    • sudo (nano | vim | vi) /etc/php/7.0/fpm/php.ini
  2. uncomment cgi.fix_pathinfo=1 - remove the semi-colon in front of it, and change the value to 0

    • cgi.fix_pathinfo=0
  3. Remove X-Powered-by

    • expose_php = 0
  4. Restart php: sudo systemctl restart php7.0-fpm

Remove Express headers

  1. In your Express app setup, add: app.disable('x-powered-by')

Serving and Securing assets with S3 over CloudFront


Hosting multiple domains on a single droplet


  1. create a file for each domain to house its server blocks in /etc/nginx/sites-enabled/
    1. the file name should match the domain
    2. you can put the file in /etc/nginx/sites-available/, but then you will have to create a symbolic link (ln -s) to /etc/nginx/sites-enabled/
    3. it should follow the same format as the default file in /etc/nginx/sites-enabled/
  2. sudo vi /etc/nginx/nginx.conf
    1. Uncomment: server_names_hash_bucket_size 64;
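A minimal server block for one of the extra domains might look like this ({domain} is a placeholder, and the root path is just an example):

```nginx
server {
    listen 80;
    listen [::]:80;

    server_name {domain} www.{domain};

    root /var/www/{domain}/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

If you keep the file in sites-available, link it in with sudo ln -s /etc/nginx/sites-available/{domain} /etc/nginx/sites-enabled/ and run sudo nginx -t before reloading.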

Mitigate DoS on Nginx

  1. add the limits outside the server block
  2. add the client timeouts to close long connections
  3. add the limits to the location
  4. deny areas where people may try to access during a DDoS
limit_req_zone $binary_remote_addr zone=one:10m rate=3r/m;  
limit_conn_zone $binary_remote_addr zone=addr:10m;  
server {  
    # other stuff

    client_body_timeout 5s;
    client_header_timeout 5s;

    location / {
        limit_req zone=one burst=5;
        limit_conn addr 10;
        # other stuff
    }

    ## can change this to /login or wherever. You may just want to set it to your ip
    location /wp-login.php {
        # allow {your ip};
        deny all;
    }

    # other stuff
}

Enable gzip compression

Later I may try and update this to use brotli.

  1. sudo vi /etc/nginx/nginx.conf
  2. Make yours look similar to:
        # Gzip Settings

        gzip on;
        gzip_disable "msie6";

        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 4 42k;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/x-font-ttf application/x-font-opentype font/eot font/opentype image/svg+xml font/otf application/xml+rss text/javascript;
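As a rough local illustration of what gzip_comp_level trades off (demo.txt is a scratch file, nothing to do with the server):

```shell
# Build ~1200 bytes of very compressible text
yes hello | head -200 > demo.txt

orig=$(wc -c < demo.txt)
lvl1=$(gzip -1 -c demo.txt | wc -c)   # fastest, least compression
lvl9=$(gzip -9 -c demo.txt | wc -c)   # slowest, most compression

echo "original: $orig, level 1: $lvl1, level 9: $lvl9"
```

Both levels crush repetitive text; on real mixed content the gap between levels (and the CPU cost) is bigger, which is why 6 is a common middle ground.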

Double check this with GTmetrix and Web Page Test

Enable caching

  1. sudo vi /etc/nginx/sites-available/default
  2. Add this somewhere in your server block:
    • location ~* \.(ico|svg|css|js|gif|jpe?g|png|woff|woff2)$ { expires 30d; add_header Pragma public; add_header Cache-Control "public"; }

Double check this with GTmetrix and Web Page Test

Enable http2

If you are using ssl, which you should be, only add this to your ssl server block.

  1. sudo vi /etc/nginx/sites-available/default
  2. update ssl server block:
    server { listen 443 ssl http2 default_server; listen [::]:443 ssl http2 default_server; # other stuff

Double check this with Lighthouse

Closing notes:

There are other practices that need to be followed to ensure security - always using sftp, using different keys for different servers, always running sudo apt-get update; sudo apt-get upgrade -y when logging into the server, not reusing the same passwords. But in the end, if someone finds a zero day in the hypervisor, none of this really matters.

Also, in the above, having the email sent showing the super user's account name isn't ideal. I currently haven't figured out a way to 'spoof' the name to just show "mail" or something. However, relaying can be done to use something like gmail - you could then use an email alias through gmail.


Here's a great video that goes into some of the stuff listed above while setting up a DO server:

Reddit discussion -


These notes are released under MIT.