Tools: How I Hardened an Ubuntu Server After Security Alerts Started Showing Up

A while ago, I was asked to check one of our Ubuntu servers because the cloud dashboard had started showing a lot of security alerts. Nothing was visibly broken.

The apps were still running. Docker containers were still up.

Nginx was still serving traffic. But the security panel was reporting things like:

- intrusion attempts
- multiple vulnerabilities
- outdated packages
- suspicious SSH activity

That kind of dashboard can make anyone nervous. The first thought is usually: “Did someone already get into the server?”

Fortunately, in this case, no. But it was still a warning sign. The server was clearly being probed, and it had enough outdated packages that leaving it alone would be a bad idea. So I went through a careful hardening process, one step at a time, without breaking the running apps or locking myself out of SSH. This article is a breakdown of that process.

The environment was pretty typical:

- Ubuntu Server
- SSH access using public key
- a cloud firewall/security group in front of the machine

Nothing exotic. Just a real production-style server that needed cleanup.

First, I did not panic

When security alerts start piling up, it is tempting to immediately run a bunch of commands and “fix everything.” That is usually how people create a second problem. So before changing anything, I checked the current state of the server:

- what ports were open
- whether Docker containers were healthy
- whether brute-force protection was already installed
- whether SSH was still allowing password login
- how many packages were waiting for updates

That gave me a baseline before touching anything important. A lot of server work gets easier when you resist the urge to rush.

The first real problem: SSH was being targeted

One of the alerts pointed to repeated login attempts on port 22. This is not unusual. Public servers get scanned constantly. Bots will try common usernames all day, every day. So the presence of SSH intrusion attempts did not automatically mean the server had been compromised. But it did mean one thing very clearly: the server was already getting attention from the internet. That alone is enough reason to harden SSH before doing anything else.

I installed Fail2Ban first

The safest immediate improvement was adding brute-force protection. So I installed Fail2Ban:

sudo apt update && sudo apt install fail2ban -y

Then I checked whether the SSH jail was working:

sudo fail2ban-client status sshd

Once it was active, I could already see failed attempts being counted, and banned IPs starting to appear. It meant the server could now automatically react when bots kept hammering SSH. This is one of those changes that gives quick value with relatively low risk. After that, I checked the active values for the SSH jail.
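Seeing how hard SSH is actually being hit also helps when judging those jail values. A rough per-IP count of failed logins can be pulled from the auth log; here is a runnable sketch, with sample lines standing in for /var/log/auth.log:

```shell
#!/bin/sh
# Count failed SSH password attempts per source IP.
# Sample log lines are used so the sketch runs anywhere;
# on a real server you would read /var/log/auth.log instead.
log_excerpt='Jan 10 03:12:44 host sshd[912]: Failed password for root from 203.0.113.7 port 41122 ssh2
Jan 10 03:12:51 host sshd[914]: Failed password for invalid user admin from 203.0.113.7 port 41130 ssh2
Jan 10 03:13:02 host sshd[916]: Failed password for ubuntu from 198.51.100.23 port 50412 ssh2'

# Pull the IP after the word "from", then count occurrences per IP.
top_attackers=$(printf '%s\n' "$log_excerpt" \
  | grep 'Failed password' \
  | awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1)}' \
  | sort | uniq -c | sort -rn)
printf '%s\n' "$top_attackers"
```

On a live box, the same pipeline over the real log shows exactly which ranges Fail2Ban will end up banning first.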
What I found was pretty standard:

- maxretry = 5
- bantime = 10 minutes

The retry count was fine, but the ban time felt too short for a public server. Ten minutes is not much. Bots can just come back later. So I created a small local override and made the ban time longer:

sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled = true
bantime = 1h
findtime = 10m
maxretry = 5
EOF

Then I restarted Fail2Ban:

sudo systemctl restart fail2ban

That gave the server a firmer stance without becoming overly aggressive. I like this kind of change because it is practical. It does not try to be clever. It just makes repeated bad behavior more expensive.

Then I checked SSH itself

The next question was simple: was password authentication still enabled? If you are already connecting with SSH keys, leaving password login enabled just creates an unnecessary attack surface. So I checked the active SSH behavior and confirmed that:

- public key authentication was enabled
- password authentication was still enabled
- root login was still too permissive

That combination is common on older or quickly provisioned servers. It works, but it is not where you want to stay.

I hardened SSH carefully, not aggressively

At this point, I could have gone straight to PermitRootLogin no and started locking everything down hard. But that is not always the smartest first move. When you are working on a live server, especially remotely, the biggest mistake is locking yourself out in the name of security. So I took the safer path:

- disable password-based SSH login
- keep public key authentication enabled
- restrict root so it can only use keys, not passwords

I added this SSH override:

sudo tee /etc/ssh/sshd_config.d/99-hardening.conf > /dev/null <<'EOF'
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
EOF

That setup gave me a better balance:

- no password login
- root access still possible through SSH key
- no sudden lockout risk if I still needed root temporarily

Before reloading SSH, I validated the config:

sudo sshd -t

No output meant the syntax was valid. Only then did I reload SSH:

sudo systemctl reload ssh

And most importantly, I did not close my current session. I opened a second terminal and tested a fresh SSH login first. That single habit probably prevents more disasters than any security tool ever will.

I did not rush to enable another firewall

I also checked whether the OS-level firewall was active. In my case, it was not. Now, a lot of hardening guides immediately jump to “enable UFW.” That can be fine, but only if you fully understand your network setup. This server already had cloud-level firewall rules controlling inbound traffic, so I chose not to add another layer just for the sake of it. At that moment, my priorities were:

- stop brute force attempts
- disable password authentication
- patch the system
- verify application health

Adding UFW without a clear need would have added complexity, not clarity.
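For completeness: if a host firewall had been warranted here, a minimal UFW baseline for this stack would look something like the sketch below. This is illustrative, not something I ran; it assumes only SSH, HTTP, and HTTPS should be reachable, and it is wrapped in a function so nothing executes by accident:

```shell
#!/bin/sh
# Sketch: minimal UFW baseline for a web + SSH host.
# Deliberately wrapped in a function; review it, then call it yourself.
enable_host_firewall() {
  sudo ufw default deny incoming
  sudo ufw default allow outgoing
  sudo ufw allow OpenSSH      # allow SSH FIRST, or enabling will lock you out
  sudo ufw allow 80/tcp
  sudo ufw allow 443/tcp
  sudo ufw --force enable
  sudo ufw status verbose
}
```

The ordering matters: the SSH rule goes in before `ufw --force enable`, for exactly the lockout reasons discussed above.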
I am not against host firewalls. I just think they should be enabled intentionally, not reflexively.

I reviewed exposed ports next

Then I checked what the server was listening on. The expected public ports were there:

- 80 for HTTP
- 443 for HTTPS

There was also another port opened by a container. At first glance, that looked suspicious. But after tracing it back, it turned out the service was not actually exposed through the cloud firewall, so it was not reachable from the internet. That was a good reminder that “listening on 0.0.0.0” and “publicly accessible” are not always the same thing. Still, it is worth checking. Hidden assumptions are where surprises usually come from.

The vulnerability count mostly came from outdated packages

Once SSH was in a better place, I shifted focus to the vulnerability alerts. The easiest explanation turned out to be the right one: the server had a lot of pending package updates. I checked what was upgradable:

sudo apt list --upgradable

Then I simulated the upgrade first instead of running it blindly:

sudo apt upgrade -s

That simulation showed a large number of standard updates, including system libraries, OpenSSH-related packages, SSL-related packages, and kernel updates. That matched the security dashboard pretty well. In other words, the alerts were not pointing to some mysterious hidden malware problem. A large part of the issue was simply that the OS had fallen behind on security patches. That is good news, relatively speaking, because patching is much easier than incident response.

Then I ran the real upgrade

Once the simulation looked normal, I ran:

sudo apt upgrade -y

At one point, I got a prompt about an OpenSSH config file. The package manager asked whether I wanted to keep my local configuration or replace it with the package maintainer’s version. I kept the local version. That was the right move because I had just hardened the SSH config and already confirmed that I could still log in successfully. There was no reason to let the package overwrite a working, tested SSH setup in the middle of maintenance.
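Those conffile prompts can also be decided ahead of time. dpkg's `--force-confold` / `--force-confdef` options are the standard non-interactive way to keep locally modified configs, which matches the choice I made here. A sketch, again wrapped in a function so it does not run on its own:

```shell
#!/bin/sh
# Upgrade non-interactively while keeping locally modified config files.
# --force-confdef: take the maintainer default where the file was never modified.
# --force-confold: keep the local version where it was modified (e.g. sshd config).
upgrade_keep_local_configs() {
  sudo DEBIAN_FRONTEND=noninteractive apt-get \
    -o Dpkg::Options::="--force-confdef" \
    -o Dpkg::Options::="--force-confold" \
    upgrade -y
}
```

For an attended one-off upgrade like this one, answering the prompt by hand is fine; the options above are more useful for scripted maintenance.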
The server still needed a reboot

After the upgrade finished, the output made something clear: a new kernel had been installed, but the server was still running the old one. This is an easy detail to miss. A package upgrade does not automatically mean the server is fully patched if the running kernel has not changed yet. So before rebooting, I checked that the app stack was still healthy:

- Docker was active
- containers were running
- health checks were passing
- Nginx was still alive

Only after that did I reboot the machine. Once it came back up, I verified the kernel version and checked the containers again. Everything returned cleanly. At that point, the most important patches were actually active, not just installed.

What changed after all of this

By the end of the process, the server was in a much better state.

Before:

- password-based SSH login was still enabled
- root SSH behavior was too loose
- Fail2Ban was missing
- bots were already trying to brute-force SSH
- many security updates were pending
- the running kernel was outdated

After:

- password-based SSH login was disabled
- root access was restricted to SSH keys
- Fail2Ban was installed and banning bad actors
- repeated SSH attacks were getting blocked automatically
- system packages were upgraded
- the server was rebooted into the new kernel
- Docker services were checked and confirmed healthy afterward

None of this was flashy. But that is the point. A lot of real server hardening is not about fancy tools or dramatic incidents. It is about making calm decisions in the right order.

Commands I actually used

Here is the condensed version of the flow.

Install and configure Fail2Ban:

sudo apt update && sudo apt install fail2ban -y
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled = true
bantime = 1h
findtime = 10m
maxretry = 5
EOF
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd

Harden SSH:

sudo tee /etc/ssh/sshd_config.d/99-hardening.conf > /dev/null <<'EOF'
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
EOF
sudo sshd -t
sudo systemctl reload ssh

Review updates before applying them:

sudo apt list --upgradable
sudo apt upgrade -s

Apply upgrades:

sudo apt upgrade -y

Verify Docker before reboot:

sudo docker ps
sudo systemctl is-enabled docker
sudo systemctl is-active docker

Reboot and verify afterward:

sudo reboot
uname -r
sudo docker ps
sudo fail2ban-client status sshd

A few lessons I took from this

1. Security alerts are not the same as compromise. A noisy dashboard is not proof that someone already got in. Sometimes it just means your server is overdue for maintenance.

2. SSH deserves attention early. If a server is public, SSH is one of the first things I want to review. Disabling password login and adding Fail2Ban gives immediate improvement.

3. Hardening should not be reckless. There is a difference between improving security and making yourself lose access. Validate configs. Test a second session. Move in a safe order.

4. Updates matter more than people like to admit. A lot of vulnerability noise comes from servers that are simply not patched regularly enough.

5. A reboot is sometimes part of the patch. If the kernel changed, the job is not finished until the server is actually running the new kernel.

Final thoughts

This was not some elaborate security overhaul. It was just a practical cleanup of a real Ubuntu server that had started showing signs of neglect:

- SSH was too open
- brute-force attempts were already happening
- package updates had piled up
- the server needed attention before it became an incident

That is what I like about this kind of work. It reminds you that reliability and security are often built through small, sensible decisions, not dramatic one-time fixes. If you manage Ubuntu servers in the cloud, it is worth doing this kind of baseline hardening before you actually need it.
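That last lesson is easy to automate. Comparing the running kernel against the newest installed kernel image is a small script; this sketch assumes Ubuntu's standard /boot/vmlinuz-&lt;version&gt; naming, with the directory made overridable so the logic itself can be exercised:

```shell
#!/bin/sh
# Report whether the running kernel matches the newest installed kernel.
# BOOT_DIR is a parameter for testing; on a real server it is /boot.
BOOT_DIR="${BOOT_DIR:-/boot}"

kernel_status() {
  running="$(uname -r)"
  newest="$(ls "$BOOT_DIR"/vmlinuz-* 2>/dev/null | sed 's|.*/vmlinuz-||' | sort -V | tail -n 1)"
  if [ -n "$newest" ] && [ "$running" != "$newest" ]; then
    echo "reboot required: running $running, newest installed $newest"
  else
    echo "running kernel is current: $running"
  fi
}

kernel_status
```

On Ubuntu, the presence of /var/run/reboot-required is another standard signal worth checking after upgrades.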