Why Logging Matters
Logs are your eyes when you can't watch directly:
- Debug production issues
- Understand system behavior
- Track errors and exceptions
- Audit important actions
Log Levels
Standard Levels
DEBUG # Detailed information for debugging
INFO # Normal operation events
WARNING # Something unexpected but handled
ERROR # Something failed
CRITICAL # System is in trouble
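Levels are ordered, and a logger only emits records at or above its configured threshold. A quick sketch with the standard library:
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)

logger.info("Suppressed: below the WARNING threshold")
logger.warning("Emitted: at or above the threshold")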
When to Use Each
DEBUG:
logger.debug(f"Processing item {item.id} with params {params}")For development and troubleshooting. Not for production usually.
INFO:
logger.info(f"User {user_id} logged in successfully")
logger.info("Server started on port 8080")Normal events worth recording.
WARNING:
logger.warning(f"Rate limit approaching: {current}/{limit}")
logger.warning("Deprecated function called, will be removed in v2")Handled problems that might need attention.
ERROR:
logger.error(f"Failed to connect to database: {e}")
logger.error("Payment processing failed", exc_info=True)Failures that need investigation.
CRITICAL:
logger.critical("Database connection pool exhausted")
logger.critical("Out of disk space, cannot continue")System-level failures requiring immediate attention.
What to Log
Do Log
# Application lifecycle
logger.info("Application starting")
logger.info("Application shutting down")
# Important operations
logger.info(f"Order {order_id} placed by user {user_id}")
logger.info(f"Email sent to {recipient}")
# Errors with context
logger.error(f"Failed to process order {order_id}: {error}")
# Performance metrics
logger.info(f"Request processed in {duration_ms}ms")
Don't Log
# Sensitive data
logger.info(f"User password: {password}") # NEVER!
logger.info(f"API key: {api_key}") # NEVER!
logger.info(f"Credit card: {card_number}") # NEVER!
# Every loop iteration (in production)
for item in items:
    logger.debug(f"Processing {item}")  # Too noisy
# Duplicate information
logger.info("Starting function")
logger.info("Function started") # Redundant
Structured Logging
Instead of String Formatting
# Hard to parse
logger.info(f"User {user_id} bought {item_count} items for ${total}")
# Structured (easier to analyze)
logger.info("Purchase completed", extra={
"user_id": user_id,
"item_count": item_count,
"total": total,
"currency": "USD"
})
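With the standard library, keys passed via extra become attributes on the LogRecord, so a plain format string can reference them directly (note that the format string then requires the field on every record):
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s user=%(user_id)s")
logging.getLogger(__name__).info("Purchase completed", extra={"user_id": "123"})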
JSON Logging
import json
import logging

# Attributes present on every LogRecord; anything beyond these
# was injected via the `extra` argument
_STANDARD_ATTRS = set(logging.LogRecord("", 0, "", 0, "", (), None).__dict__) | {"message", "asctime"}

class JsonFormatter(logging.Formatter):
    def format(self, record):
        log_data = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
            "logger": record.name,
        }
        # Keys passed via `extra=` become attributes on the record,
        # not a single `record.extra` dict, so collect them here
        for key, value in record.__dict__.items():
            if key not in _STANDARD_ATTRS:
                log_data[key] = value
        return json.dumps(log_data)
Output:
{"timestamp": "2025-02-01 10:00:00", "level": "INFO", "message": "Purchase completed", "user_id": "123", "item_count": 3}
Logging Patterns
Request Context
Include identifiers that connect related logs:
# At request start
request_id = generate_request_id()
logger.info(f"[{request_id}] Request started")
# Throughout processing
logger.info(f"[{request_id}] Fetching user data")
logger.info(f"[{request_id}] Processing complete")
Error Logging
Include stack traces:
try:
    risky_operation()
except Exception as e:
    logger.error(f"Operation failed: {e}", exc_info=True)
Entry/Exit Logging
For important functions:
def process_order(order_id):
    logger.info(f"Processing order {order_id}")
    try:
        result = do_processing()
        logger.info(f"Order {order_id} processed successfully")
        return result
    except Exception as e:
        logger.error(f"Order {order_id} processing failed: {e}")
        raise
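If many functions need the same entry/exit logging, a small decorator keeps the boilerplate out of the function bodies. A sketch (log_calls is a name invented here):
import functools
import logging

logger = logging.getLogger(__name__)

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.info(f"{func.__name__} started")
        try:
            result = func(*args, **kwargs)
            logger.info(f"{func.__name__} completed")
            return result
        except Exception:
            logger.exception(f"{func.__name__} failed")
            raise
    return wrapper

@log_calls
def process_order(order_id):
    ...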
Log Management
Rotation
Don't let logs fill the disk:
from logging.handlers import RotatingFileHandler
handler = RotatingFileHandler(
    'app.log',
    maxBytes=10*1024*1024,  # 10 MB
    backupCount=5
)
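For schedule-based rotation instead of size, the standard library also offers TimedRotatingFileHandler; for example, a new file each midnight, keeping a week of history:
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    'app.log',
    when='midnight',
    backupCount=7
)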
Retention
Keep logs long enough for debugging:
- Development: days
- Production: weeks to months
- Compliance: as required
Aggregation
For multiple services:
- Centralized logging (ELK, CloudWatch, etc.)
- Consistent format across services
- Searchable and filterable
Common Mistakes
Too Verbose
# Too much noise
logger.debug(f"Entering function")
logger.debug(f"Variable x = {x}")
logger.debug(f"Variable y = {y}")
logger.debug(f"Calculating...")
logger.debug(f"Result = {result}")
logger.debug(f"Exiting function")
Not Verbose Enough
# Not enough context
logger.error("Error occurred") # What error? Where?
Wrong Level
# Should be ERROR, not INFO
logger.info(f"Database connection failed: {e}")
# Should be DEBUG, not INFO
logger.info(f"Loop iteration {i}")
Logging Sensitive Data
# WRONG
logger.info(f"User auth: {username}:{password}")
logger.debug(f"API response: {response}") # Might contain secrets
Reading Logs
Finding Issues
# Find errors
grep "ERROR" app.log
# Find specific request
grep "request_id=abc123" app.log
# Watch for new errors in real time
tail -f app.log | grep ERROR
# Count errors and warnings by level
grep -oE "ERROR|WARNING" app.log | sort | uniq -c
Following a Request
# Get all logs for a request
grep "request_id=abc123" app.log
# See the flow
grep "request_id=abc123" app.log | head -20
Logging Setup
Python
import logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
Node.js
const winston = require('winston');
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [
    new winston.transports.File({ filename: 'app.log' })
  ]
});
Conclusion
Good logging:
- Uses appropriate levels
- Includes useful context
- Avoids sensitive data
- Is structured for analysis
- Helps you debug production issues
Log like you'll need to debug at 3am—because someday you will.
Next: Environment Variables - Managing configuration