May 14 03:15:12 vps-host kernel: [12345.67890] Out of memory: Kill process 9876 (myapp) score 1234, ...
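When the OOM Killer fires, the victim's PID and name are embedded in this log line. As a sketch, they can be pulled out with sed (using the sample line above, minus its truncated tail) to feed an alerting script:

```shell
# Extract the killed process's PID and name from an OOM log line
line='May 14 03:15:12 vps-host kernel: [12345.67890] Out of memory: Kill process 9876 (myapp) score 1234'
pid=$(echo "$line" | sed -n 's/.*Kill process \([0-9]*\).*/\1/p')
name=$(echo "$line" | sed -n 's/.*Kill process [0-9]* (\([^)]*\)).*/\1/p')
echo "OOM killed: $name (pid $pid)"   # OOM killed: myapp (pid 9876)
```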
              total        used        free      shared  buff/cache   available
Mem:           31Gi        29Gi       1.2Gi       100Mi       800Mi         1Gi
Swap:         2.0Gi       500Mi       1.5Gi
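Reading the snapshot above: "available" (not "free") estimates how much memory new applications can claim without swapping. The used percentage can be computed with awk over the same sample figures (the Gi/Mi suffixes are dropped by awk's numeric coercion):

```shell
# Percent of RAM used in the sample snapshot above (29Gi of 31Gi)
awk '/^Mem:/ { printf "%.0f%% of RAM used\n", $3/$2*100 }' <<'EOF'
              total        used        free      shared  buff/cache   available
Mem:           31Gi        29Gi       1.2Gi       100Mi       800Mi         1Gi
Swap:         2.0Gi       500Mi       1.5Gi
EOF
```

On a live system, piping `free -b` into the same awk program gives the current figure.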
total used free shared buff/cache available
Mem: 31Gi 29Gi 1.2Gi 100Mi 800Mi 1Gi
Swap: 2.0Gi 500Mi 1.5Gi
total used free shared buff/cache available
Mem: 31Gi 29Gi 1.2Gi 100Mi 800Mi 1Gi
Swap: 2.0Gi 500Mi 1.5Gi
[Service]
LimitAS=2G       # Address space limit (virtual memory, RLIMIT_AS)
LimitRSS=1G      # Resident Set Size limit (physical memory); note: not enforced by modern Linux kernels
MemoryHigh=1.5G  # Soft limit (cgroup v2): above this, the service is throttled and its memory aggressively reclaimed
MemoryMax=2G     # Hard limit: above this, the OOM Killer terminates processes inside the service's cgroup
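One way to apply these directives without editing the packaged unit file (a sketch; `myapp.service` is a placeholder name) is a systemd drop-in:

```shell
# Create a drop-in for the (hypothetical) myapp.service unit
sudo mkdir -p /etc/systemd/system/myapp.service.d
sudo tee /etc/systemd/system/myapp.service.d/memory.conf <<'EOF'
[Service]
MemoryHigh=1.5G
MemoryMax=2G
EOF
sudo systemctl daemon-reload
sudo systemctl restart myapp.service
systemctl show myapp.service -p MemoryMax   # confirm the limit is active
```

Drop-ins survive package upgrades, unlike direct edits to the unit file shipped by the distribution.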
# Check current value
cat /proc/sys/vm/swappiness

# Change temporarily (lost after reboot)
sudo sysctl vm.swappiness=10

# Change permanently (by editing the /etc/sysctl.conf file)
# Add the following line to the file:
# vm.swappiness = 10
sudo sysctl -p
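After `sysctl -p`, it's worth confirming the kernel actually picked up the new value; a quick sanity check that needs no root:

```shell
# The live value the kernel is using right now
val=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness=$val"
```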
# Find the PID of the ERP backend process (e.g., 12345)
ps aux | grep my-erp-backend

# Set oom_score_adj to -500 for PID 12345
echo -500 | sudo tee /proc/12345/oom_score_adj
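The writable range is -1000 to 1000, and -1000 exempts a process from the OOM Killer entirely. Note that a value written to /proc is lost when the process restarts; for a systemd-managed service, `OOMScoreAdjust=-500` in the unit's `[Service]` section is the persistent equivalent. Any user can read a process's current adjustment:

```shell
# Read the current shell's own adjustment (typically 0 unless changed)
cat /proc/$$/oom_score_adj
```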
# Read oom_score for PID 12345
cat /proc/12345/oom_score
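These scores can be inspected system-wide without root. A small loop over procfs (a sketch) ranks the processes the OOM Killer would pick first:

```shell
# Rank processes by current oom_score (highest = first to be killed)
for p in /proc/[0-9]*; do
  score=$(cat "$p/oom_score" 2>/dev/null) || continue  # process may have exited
  name=$(cat "$p/comm" 2>/dev/null)
  printf '%s\t%s\t%s\n' "$score" "${p#/proc/}" "$name"
done | sort -rn | head -n 5
```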
- Application Errors: Applications with memory leaks gradually consume more and more memory over time.
- Unexpected Load Increase: A sudden surge in traffic or processing load can exceed available resources.
- Misconfiguration: Incorrectly set memory limits for applications or system services.
- Insufficient Resources: The server not having enough memory initially.
- Too Many Services: Running more services than necessary on a single VPS.

- top Command: Shows real-time CPU and memory usage of running processes. The %MEM column indicates the percentage of total memory a process occupies. You can press Shift + M to sort processes by memory usage.
- htop Command: A more user-friendly and interactive version of the top command. It adds colored output, a process tree view, and mouse support, and presents memory and CPU usage graphically.
- free Command: Summarizes the system's overall memory usage. It shows information such as used memory, free memory, buffers, and cache. The free -h command displays the output in a human-readable format (like MB, GB).

- Valgrind: A powerful tool for detecting memory errors and leaks. However, it can significantly slow down application execution, so it's typically used in development environments.
- Application Performance Monitoring (APM) Tools: APM tools like Datadog, New Relic help you monitor application memory usage and detect anomalies.
- Log Analysis: Looking for clues about memory usage in application logs.

- Negative Values: Reduce the likelihood of the process being terminated by the OOM Killer. Used for critical services.
- Positive Values: Increase the likelihood of the process being terminated by the OOM Killer. Can be used for low-priority background processes.

- Understanding the Problem: Realizing that what triggers the OOM Killer is not the OOM Killer itself, but insufficient memory or a memory leak.
- Monitoring: Continuously monitoring system resources (especially memory) allows for intervention before problems escalate. htop, free, and log analysis play a critical role here.
- Optimization: Optimizing the memory usage of applications and databases is the most permanent solution.
- Configuration: If necessary, setting service-based memory limits or adjusting kernel parameters like oom_score_adj can be used as a protective measure for the system.
- Trade-offs: Every solution has a trade-off. Using swap degrades performance, and memory limits can prevent applications from reaching their full potential. Finding the best balance is important.
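As a non-interactive counterpart to top's Shift + M sorting mentioned earlier, a single ps invocation (GNU procps assumed) snapshots the biggest memory consumers, which is handy for cron-driven checks:

```shell
# Five most memory-hungry processes by %MEM (plus the header row)
ps aux --sort=-%mem | head -n 6
```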