
# log-analyzer

Analyze server and application logs — find errors, patterns, anomalies. Supports journalctl, Docker, nginx, PM2.

Curated by Pixeyo (https://openclawcheatsheet.com/skills). Requires: `ssh`.

Parse and summarize logs from servers and applications. Finds errors, warnings, patterns, and anomalies without the user needing to SSH in and grep manually.

## When to Use

Use when the user asks about errors, logs, crashes, "what went wrong", "why is X down", or wants a log summary.

## Log Sources

### System logs (journalctl)

```bash
# Errors in last hour
ssh TARGET 'journalctl -p err --since "1 hour ago" --no-pager -n 50'

# Specific service logs
ssh TARGET 'journalctl -u nginx --since "1 hour ago" --no-pager -n 50'
ssh TARGET 'journalctl -u openclaw --since "1 hour ago" --no-pager -n 50'

# Boot errors
ssh TARGET 'journalctl -b -p err --no-pager -n 30'
```
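When the error volume is high, a useful first pass is to count repeated messages rather than read them one by one. A minimal sketch, assuming a systemd host reachable as `TARGET`: `journalctl -o cat` prints the message text only (no timestamp or hostname), so identical errors collapse under `uniq -c`.

```shell
# Top repeated error messages in the last hour.
# -o cat strips timestamps/hostnames so duplicate messages count together.
ssh TARGET 'journalctl -p err --since "1 hour ago" --no-pager -o cat' \
  | sort | uniq -c | sort -rn | head -10
```

The count column immediately shows whether you have one noisy failure or many distinct ones.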

### Docker container logs

```bash
# Last 100 lines from a container
ssh TARGET 'docker logs --tail 100 CONTAINER_NAME 2>&1'

# Only errors (stderr)
ssh TARGET 'docker logs --tail 100 CONTAINER_NAME 2>&1 >/dev/null'

# Logs from the last hour
ssh TARGET 'docker logs --since 1h CONTAINER_NAME 2>&1'
```
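If the user doesn't know which container is misbehaving, a quick sweep across all running containers can narrow it down. A hedged sketch (the keyword list is an illustrative assumption, not a fixed spec):

```shell
# Scan every running container for error-looking lines from the last hour.
ssh TARGET 'for c in $(docker ps --format "{{.Names}}"); do
  echo "== $c ==";
  docker logs --since 1h "$c" 2>&1 | grep -iE "error|fatal|panic|exception" | tail -5;
done'
```

Each container gets a header and at most five recent error lines, keeping the output within the size limits in the Rules section.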

### Nginx access/error logs

```bash
# Recent errors
ssh TARGET 'tail -100 /var/log/nginx/error.log'

# 5xx errors in access log (escape $ so awk, not the local shell, expands it)
ssh TARGET "awk '\$9 >= 500' /var/log/nginx/access.log | tail -20"

# Top error URLs
ssh TARGET "awk '\$9 >= 400 {print \$9, \$7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10"
```
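To spot time clusters (step 3 of the Analysis Process), 5xx responses can be bucketed by minute. A sketch assuming the default combined log format, where field 4 is `[day/month/year:HH:MM:SS` and field 9 is the status code:

```shell
# Count 5xx responses per minute to reveal bursts.
# substr($4, 2, 17) keeps "02/Jan/2026:14:32" — the timestamp down to the minute.
ssh TARGET "awk '\$9 >= 500 {print substr(\$4, 2, 17)}' /var/log/nginx/access.log \
  | sort | uniq -c | sort -rn | head -10"
```

A single minute with dozens of hits suggests one upstream incident; an even spread suggests a persistent fault.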

### PM2 application logs

```bash
ssh TARGET 'pm2 logs --lines 50 --nostream 2>&1'
ssh TARGET 'pm2 logs app-name --err --lines 50 --nostream 2>&1'
```
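When `pm2 logs` is noisy, the error streams can be grepped directly. This assumes PM2's default log location (`~/.pm2/logs/<app>-error.log`); adjust the path if the host configures it differently.

```shell
# Pull error-looking lines straight from pm2's per-app error log files.
# -h suppresses filenames, -i ignores case; 2>/dev/null hides "no such file" noise.
ssh TARGET 'grep -ihE "error|exception" ~/.pm2/logs/*-error.log 2>/dev/null | tail -20'
```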

### Local logs (if running on the same machine)

```bash
# OpenClaw logs
openclaw logs --tail 50
# Or: tail -50 ~/.openclaw/logs/gateway.log
```

## Analysis Process

1. Fetch relevant logs based on the user's question
2. Extract and deduplicate error messages
3. Identify patterns (repeated errors, time clusters, cascading failures)
4. Summarize in plain language
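The deduplication step can be sketched as a small pipeline: mask the variable parts of each line (numbers, hex ids) so near-identical messages count as one pattern. The normalization rules below are an illustrative assumption, not a fixed spec, and any log source above can feed it:

```shell
# Collapse near-duplicate errors before counting:
# "timeout after 30s" and "timeout after 12s" become one "timeout after Ns" pattern.
ssh TARGET 'tail -200 /var/log/nginx/error.log' \
  | sed -E 's/0x[0-9a-f]+/HEX/g; s/[0-9]+/N/g' \
  | sort | uniq -c | sort -rn | head -10
```

Masking hex ids before plain numbers keeps `0x1f` from being half-rewritten.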

## Response Format

```
📋 Log Analysis — myserver (last 1 hour)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔴 3 critical errors found

1. [nginx] 502 Bad Gateway — 14 occurrences
   First: 14:32 UTC | Last: 14:45 UTC
   URLs: /api/users (8x), /api/auth (6x)
   Likely cause: upstream server (Node.js) unresponsive

2. [docker] OOMKilled — container "api"
   Time: 14:31 UTC
   Memory limit: 512MB | Usage at kill: 511MB
   ⚡ This likely caused the 502 errors above

3. [system] disk write error on /dev/sda1
   Time: 14:28 UTC
   Possible cause: disk nearly full

💡 Suggested actions:
   1. Increase memory limit for the "api" container
   2. Check disk space: df -h
   3. Restart the api container: docker restart api
```

## Rules

- Always ask which server/service to analyze if not specified.
- Limit log fetches to reasonable sizes (100-200 lines). Don't dump entire log files.
- Sanitize output — strip IP addresses and sensitive data from summaries.
- Focus on actionable insights, not raw log dumps.
- This skill is read-only. Suggest fixes but never execute them without permission.
- If logs contain credentials or tokens, do NOT include them in the summary.
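The sanitization rules can be applied mechanically before any log text reaches a summary. A minimal sketch; the IPv4 pattern is a loose illustration (it also matches malformed octets) and the token pattern only covers `Bearer` headers and `token=` parameters:

```shell
# Mask IPv4 addresses and token-like secrets before summarizing.
sanitize() {
  sed -E 's/([0-9]{1,3}\.){3}[0-9]{1,3}/[IP]/g; s/(Bearer +|token=)[A-Za-z0-9._-]+/\1[REDACTED]/g'
}

ssh TARGET 'tail -100 /var/log/nginx/error.log' | sanitize
```

Running every fetched log through a filter like this makes the "do NOT include credentials" rule a property of the pipeline rather than a judgment call per line.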