Monitoring web traffic is an essential part of keeping your server environment secure and efficient. Whether you run a high-volume website or a single application, understanding access logs helps you identify abusive IP addresses and attack vectors and improve your web server’s efficiency. Popular web servers such as Nginx and Apache log information about every request they receive.

This article walks through practical ways of inspecting, analyzing, and summarizing IP connections directly from access logs using simple command line tools.

Understanding Web Server Access Logs

Access logs contain records of all incoming requests to your server, including the client IP address, request method, timestamp, and user agent. Access log analysis can help you identify patterns like:

  • Unusual traffic spikes or repetitive hits from the same IP
  • Unauthorized scans or brute-force attempts
  • Performance bottlenecks or slow endpoints 

For Linux-based servers, access logs are typically stored in:

  • /var/log/nginx/access.log for Nginx
  • /var/log/httpd/access.log for Apache
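As a concrete illustration of those fields, here is what a single line in the default "combined" log format looks like and how its whitespace-separated fields map to the attributes above. The log line is hypothetical sample data, not output from a real server:

```shell
# One hypothetical line in the combined log format used by Nginx and Apache
line='203.0.113.5 - - [01/Jan/2025:10:00:01 +0000] "GET /index.html HTTP/1.1" 200 512 "-" "curl/8.0"'

# Field 1 = client IP, field 4 = timestamp (with leading '['), field 6 = method (with leading '"')
echo "$line" | awk '{print "ip:", $1, "| time:", $4, "| method:", $6}'
```

The later methods in this article all rely on the client IP being the first field of each line.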

Method 1: Quick Inspection with tail

A simple yet powerful way to inspect server activity is by using the tail command. This method allows you to quickly view the most recent requests to your server.

tail -n 100 /var/log/nginx/access.log

For Apache:

tail -n 100 /var/log/httpd/access.log

Purpose:
This method provides an immediate view of the last 100 entries, helping you identify suspicious IPs, frequent requests, or anomalies at a glance. It’s particularly useful for quick manual inspections during a live incident.
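For live incidents, `tail -f` streams new entries as they arrive instead of printing a fixed slice. The sketch below shows the same idea in a self-contained form, using a throwaway sample file (hypothetical data) rather than the real log path so it can be run anywhere:

```shell
# Build a small sample log so the example is runnable anywhere (hypothetical data)
printf '%s\n' \
  '203.0.113.5 - - [01/Jan/2025:10:00:01 +0000] "GET / HTTP/1.1" 200 512' \
  '198.51.100.7 - - [01/Jan/2025:10:00:02 +0000] "GET /login HTTP/1.1" 401 128' \
  '203.0.113.5 - - [01/Jan/2025:10:00:03 +0000] "POST /login HTTP/1.1" 200 256' \
  > /tmp/sample_access.log

# Show only the most recent 2 entries (swap -n 2 for -f to follow the file live)
tail -n 2 /tmp/sample_access.log
```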

Method 2: Count Unique IPs in Logs

Understanding how many requests come from specific IPs can help detect patterns of abuse, especially during DDoS attacks or scraping attempts.

cd /var/log/nginx   # or /var/log/httpd

grep -o "[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}" access.log \
  | sort -n \
  | uniq -c \
  | sort -n

Explanation:
This command extracts all IP addresses, sorts them, counts how many times each IP appears, and sorts the results numerically.
It helps you determine which clients are making the most connections, an essential insight for diagnosing load spikes or blocking malicious IPs.
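One caveat: the grep pattern matches any dotted quad anywhere in the line, so it could also pick up an IP embedded in a referrer URL. Since the client IP is the first field in the default log format, an awk-based variant is a simpler alternative. The sketch below runs on inline sample data (hypothetical) rather than a real log:

```shell
# Hypothetical sample log; on a real server, point at /var/log/nginx/access.log
log=/tmp/ip_count_sample.log
printf '%s\n' \
  '203.0.113.5 - - [01/Jan/2025:10:00:01 +0000] "GET / HTTP/1.1" 200 512' \
  '198.51.100.7 - - [01/Jan/2025:10:00:02 +0000] "GET /login HTTP/1.1" 401 128' \
  '203.0.113.5 - - [01/Jan/2025:10:00:03 +0000] "POST /login HTTP/1.1" 200 256' \
  > "$log"

# Field 1 is the client IP: count occurrences, busiest first
awk '{print $1}' "$log" | sort | uniq -c | sort -rn
```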

Method 3: Top IPs in the Last Hour

Monitoring traffic patterns within a specific timeframe, like the last hour, gives you better visibility into recent trends.

cat /var/log/httpd/domains/*.com.log \
  | grep "$(date -d '1 hour ago' '+%d/%b/%Y:%H')" \
  | awk '{print $1}' \
  | sort -n \
  | uniq -c \
  | sort -nr \
  | head -10

For Nginx, replace httpd with nginx.

Purpose:
This command identifies the top 10 IPs that made the most requests in the past hour.
It’s especially useful during high traffic events or sudden server slowdowns, helping administrators isolate heavy hitters.
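Once a heavy hitter is identified, the next step is often to block it. Below is a hedged sketch of extracting the single busiest IP and acting on it; the firewall line is commented out and iptables is just one option (nftables and firewalld have equivalents), and sample data stands in for the real log:

```shell
# Hypothetical sample data standing in for the hourly log slice
log=/tmp/hour_sample.log
printf '%s\n' \
  '203.0.113.5 - - [01/Jan/2025:10:00:01 +0000] "GET / HTTP/1.1" 200 100' \
  '203.0.113.5 - - [01/Jan/2025:10:00:02 +0000] "GET / HTTP/1.1" 200 100' \
  '198.51.100.7 - - [01/Jan/2025:10:00:03 +0000] "GET / HTTP/1.1" 200 100' \
  > "$log"

# Pull just the busiest client IP out of the counting pipeline
top_ip=$(awk '{print $1}' "$log" | sort | uniq -c | sort -rn | head -1 | awk '{print $2}')
echo "Top talker: $top_ip"

# On a real server you might then block it (requires root; adjust for your firewall):
# iptables -I INPUT -s "$top_ip" -j DROP
```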

Method 4: IPs in the Last 10 Minutes

When monitoring short-term surges, focusing on recent activity (like the past 10 minutes) offers deeper insights into live server conditions.

cat /var/log/nginx/domains/*.com \
  | grep -E "$(for i in {0..9}; do date -d "$i minute ago" '+%d/%b/%Y:%H:%M'; done | paste -sd'|')" \
  | awk '{print $1}' \
  | sort -n \
  | uniq -c \
  | sort -nr \
  | head -30

Purpose:
This command lists the top 30 IPs active in the last 10 minutes.
It’s ideal for detecting short-lived traffic spikes, potential brute force attempts, or sudden bursts of automated requests.
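Since this method and the next differ only in the window size, the pattern-building loop can be wrapped in a small function. This is a sketch: top_ips_last is a made-up helper name, and it assumes the default log format (client IP in field 1) and GNU date for the -d option:

```shell
# Hypothetical helper: print top client IPs seen in the last N minutes of a log
top_ips_last() {
  mins=$1
  logfile=$2
  # Build an alternation like 01/Jan/2025:10:04|01/Jan/2025:10:03|... for grep -E
  pattern=$(for i in $(seq 0 $((mins - 1))); do
    date -d "$i minutes ago" '+%d/%b/%Y:%H:%M'
  done | paste -sd'|')
  grep -E "$pattern" "$logfile" | awk '{print $1}' | sort | uniq -c | sort -rn
}

# Usage on a real server:
#   top_ips_last 10 /var/log/nginx/access.log | head -30
```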

Method 5: IPs in the Last 5 Minutes

For immediate troubleshooting, narrowing the analysis to the last five minutes helps identify live issues as they occur.

cat /var/log/nginx/domains/*.com \
  | grep -E "$(for i in {0..4}; do date -d "$i minute ago" '+%d/%b/%Y:%H:%M'; done | paste -sd'|')" \
  | awk '{print $1}' \
  | sort -n \
  | uniq -c \
  | sort -nr \
  | head -30

Purpose:
By focusing on the most recent activity, administrators can quickly locate sources of immediate load, pinpoint misbehaving clients, or act against active abuse patterns.

Best Practices and Notes

  • For Apache logs, replace:
    • nginx with httpd
    • *.com with *.com.log
  • Adjust head -N to view more or fewer top IPs.
  • Always monitor logs during and after any high-traffic incident for patterns of recurrence.

Use these techniques for:

  • Detecting abusive traffic or brute-force attacks
  • Identifying Distributed Denial of Service (DDoS) attempts
  • Troubleshooting performance and load-related issues
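The same counting pattern also covers the performance angle above: swapping the awk field from $1 (client IP) to $7 (the request path in the combined format) surfaces the busiest endpoints instead of the busiest clients. A self-contained sketch on hypothetical sample data:

```shell
# Hypothetical sample log; replace with your real access log path
log=/tmp/endpoint_sample.log
printf '%s\n' \
  '203.0.113.5 - - [01/Jan/2025:10:00:01 +0000] "GET /api/search HTTP/1.1" 200 512' \
  '198.51.100.7 - - [01/Jan/2025:10:00:02 +0000] "GET /api/search HTTP/1.1" 200 512' \
  '203.0.113.5 - - [01/Jan/2025:10:00:03 +0000] "GET /index.html HTTP/1.1" 200 128' \
  > "$log"

# Field 7 is the request path: count hits per endpoint, busiest first
awk '{print $7}' "$log" | sort | uniq -c | sort -rn | head -10
```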

Monitoring web server access logs is a crucial element of proactive server management. Standard Linux command line tools such as tail, grep, awk, and uniq allow administrators to quickly identify bad IPs and implement the needed fixes. These straightforward methods not only provide clarity during busy times, but also strengthen your server’s security.

At ServerAdminz, we offer end-to-end server management and monitoring solutions designed to keep your infrastructure secure, stable, and optimized. We continuously monitor log data, detect anomalies in real time, and mitigate threats before they escalate. By combining automation with human expertise, ServerAdminz keeps your web operations secure and running without interruption.