Your Linux server is constantly talking to itself. Every service, every kernel event, every login attempt gets recorded somewhere. Logs are the system's memory—and when something breaks at 3 AM, they're the only witness to what happened.
Knowing where to look and how to read what you find is the difference between debugging blindly and understanding exactly what went wrong.
Where Logs Live
Most logs congregate in /var/log. This is the system's filing cabinet:
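```bash
# list the directory contents with human-readable sizes
ls -lh /var/log
```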
The -h flag shows human-readable file sizes—useful for spotting logs that have grown suspiciously large.
What you find depends on your distribution and installed software, but the structure is consistent: text files (or compressed archives of text files) organized by service or function.
The Main System Logs
The general system log—your first stop for most investigations—lives at:
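```
/var/log/syslog      # Debian and Ubuntu
/var/log/messages    # RHEL, CentOS, and Fedora
```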
These files contain messages from the kernel and various services. When you don't know where else to look, start here.
Kernel messages specifically—hardware detection, driver loading, kernel panics—appear in:
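```
/var/log/kern.log    # Debian and Ubuntu; RHEL-family systems fold kernel messages into /var/log/messages
```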
The dmesg command also shows the kernel ring buffer directly, which is useful because it captures messages from before the filesystem was available during boot.
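For example:

```bash
dmesg | less            # page through the kernel ring buffer
dmesg -T | tail -n 50   # last 50 messages with human-readable timestamps
```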
Boot sequence logs live in:
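```
/var/log/boot.log    # where present; systemd systems also keep boot messages in the journal (journalctl -b)
```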
When a service fails to start on boot, this file tells you why.
Authentication Logs: The Crime Scene Record
Every login attempt—successful or failed—gets recorded:
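```
/var/log/auth.log    # Debian and Ubuntu
/var/log/secure      # RHEL, CentOS, and Fedora
```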
This is where you find evidence. SSH logins, sudo usage, failed password attempts. A line like:
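```
Jan 15 03:22:01 server sshd[14211]: Failed password for invalid user admin from 203.0.113.45 port 52144 ssh2
```

(The hostname, user, and address here are invented, but the shape is exactly what sshd writes.)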
That's someone trying your door handle. If you see hundreds of these from the same IP, that's a brute force attack in progress. If you see successful logins from IPs you don't recognize, you have a problem.
To find failed login attempts:
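```bash
grep "Failed password" /var/log/auth.log

# count attempts per source IP (the field position assumes the standard sshd message above)
grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn
```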
Multiple failed attempts from one IP suggest a targeted attack. Failed attempts from many different IPs suggest a botnet.
Web Server Logs
Apache logs typically live at:
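```
/var/log/apache2/    # Debian and Ubuntu
/var/log/httpd/      # RHEL, CentOS, and Fedora
```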
Nginx uses:
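```
/var/log/nginx/
```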
Both maintain two primary logs: access.log records every HTTP request, and error.log records what went wrong.
An access log entry looks like:
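```
203.0.113.7 - - [15/Jan/2025:10:30:45 +0000] "GET /index.html HTTP/1.1" 200 4523 "https://example.com/" "Mozilla/5.0 (X11; Linux x86_64)"
```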
Left to right: client IP, timestamp, what they requested, the response code (200 means success), bytes sent, referrer, and browser identification.
When your site is slow, the access log shows you what's being hit. When it's broken, the error log shows you why.
Mail Server Logs
Mail servers log obsessively—they have to, because email delivery involves so many handoffs that could fail:
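```
/var/log/mail.log    # Debian and Ubuntu
/var/log/maillog     # RHEL, CentOS, and Fedora
```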
To trace a specific message:
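```bash
# find the message by address and note the queue ID the mail server assigned it...
grep "recipient@example.com" /var/log/mail.log

# ...then follow that queue ID through every handoff (the ID and address are placeholders, Postfix-style)
grep "4C9XkD1r2FzQ" /var/log/mail.log
```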
You'll see the message arrive, get processed, and either deliver or fail—with explanations at each step.
Database Logs
MySQL and MariaDB:
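```
/var/log/mysql/error.log    # Debian and Ubuntu default
/var/log/mysqld.log         # RHEL, CentOS, and Fedora default; exact paths depend on configuration
```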
PostgreSQL:
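```
/var/log/postgresql/        # Debian and Ubuntu
/var/lib/pgsql/data/log/    # RHEL-family default data directory (often with a version-specific path)
```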
Database logs record startup failures, connection problems, and—if enabled—slow queries that might be hurting performance.
The Systemd Journal
Modern Linux distributions using systemd maintain a binary journal alongside (or instead of) traditional text logs. Access it with journalctl:
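```bash
journalctl                        # everything, oldest first
journalctl -u nginx.service       # one unit (substitute your own service name)
journalctl -b                     # messages from the current boot
journalctl --since "1 hour ago"   # recent entries only
journalctl -f                     # follow new messages as they arrive
```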
The journal is searchable, filterable, and includes metadata that text logs don't. For modern systems, it's often the better starting point.
Log Rotation: Where Old Logs Go
Logs grow forever unless something stops them. The logrotate utility periodically archives old logs and starts fresh files:
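```
# a typical rotation family (naming depends on logrotate configuration)
/var/log/syslog        # current
/var/log/syslog.1      # previous rotation
/var/log/syslog.2.gz   # older rotations, compressed
/var/log/syslog.3.gz
```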
When investigating something that happened days ago, remember to check the rotated files:
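```bash
zgrep "Failed password" /var/log/auth.log.2.gz

# or search every rotated archive at once
zgrep "Failed password" /var/log/auth.log*.gz
```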
The zgrep command searches compressed files without decompressing them first.
Reading Log Entries
Every log entry tells the same story: something happened, and here's proof it happened. The timestamp is when. The service name is who. Everything after is what and why.
Syslog format:
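```
Jan 15 10:30:45 myserver sshd[2211]: Accepted publickey for user from 203.0.113.7 port 51234 ssh2
```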
Timestamp, hostname, service (with process ID), then the message. This entry says: at 10:30:45, someone successfully logged in as "user" from that IP address using a public key.
Once you can read the format, you can use grep, awk, or cut to extract patterns.
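For instance, a rough way to pull the source address out of every successful key login (the field number assumes the entry shown above):

```bash
grep "Accepted publickey" /var/log/auth.log | awk '{print $11}' | sort | uniq -c
```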
Permissions
Most logs require root to read:
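```bash
sudo tail -f /var/log/syslog
```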
The tail -f command follows a file as new lines are added—useful for watching logs in real time.
Common Investigations
Service won't start:
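```bash
journalctl -u myservice.service -n 100 --no-pager   # substitute the failing unit's name

# or, without systemd:
tail -n 100 /var/log/syslog
```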
The last 100 lines usually contain the failure reason.
Network problems:
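```bash
dmesg | grep -iE "eth|link|carrier"   # the patterns are examples; match your interface names
journalctl -k --since "1 hour ago"    # kernel messages via the journal
```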
Kernel messages about interfaces, routing, or connectivity.
Security concerns:
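```bash
grep "Failed password" /var/log/auth.log | tail -n 20   # recent failed attempts
grep "Accepted" /var/log/auth.log | tail -n 20          # recent successful logins
last | head -n 20                                       # recent sessions, from the wtmp login records
```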
Who tried to get in? Who succeeded?
Disk filling up:
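```bash
du -sh /var/log/* | sort -rh | head -n 10
```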
Find which logs are consuming space, then investigate why they're growing so fast.
Building a Timeline
When something goes wrong, timestamps let you correlate events across different logs. An attack might appear in auth.log, syslog, and application logs at the same moment. A service crash might show warnings in one log seconds before errors appear in another.
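One blunt but effective trick is grepping several logs for the same minute (the timestamp here is a placeholder); grep prefixes each match with its filename, which builds the timeline for you:

```bash
grep "Jan 15 03:22" /var/log/auth.log /var/log/syslog
```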
The logs are witnesses. Cross-examine them.