JSON Logging¶
JSON logging outputs structured log data that is ideal for log aggregation, analysis, and centralized logging systems.
Overview¶
JsonLayout formats log events as JSON documents with consistent structure:
{
"timestamp": "2024-01-10T10:30:45.123Z",
"level": "INFO",
"thread": "http-nio-8080-exec-1",
"logger": "io.github.dotbrains.UserService",
"message": "User alice logged in",
"mdc": {
"requestId": "req-12345",
"userId": "user-789"
}
}
Basic Setup¶
import io.hermes.core.layout.JsonLayout;
import io.hermes.core.appender.FileAppender;
JsonLayout layout = new JsonLayout();
FileAppender appender = new FileAppender("app.json");
appender.setLayout(layout);
appender.start();
logger.addAppender(appender);
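With the layout attached, each log call produces a single JSON line. For example (assuming a logger already obtained from the framework):
logger.info("User alice logged in");
// -> {"timestamp":"2024-01-10T10:30:45.123Z","level":"INFO","thread":"main","logger":"io.github.dotbrains.UserService","message":"User alice logged in"}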
Configuration¶
Full Configuration¶
JsonLayout layout = new JsonLayout();
// Include/exclude components
layout.setIncludeTimestamp(true); // ISO 8601 timestamp
layout.setIncludeLevel(true); // Log level
layout.setIncludeThread(true); // Thread name
layout.setIncludeLogger(true); // Logger name
layout.setIncludeMessage(true); // Log message
layout.setIncludeMdc(true); // MDC context
layout.setIncludeMarkers(true); // Markers
layout.setIncludeStackTrace(true); // Exception stack traces
// Timestamp format
layout.setTimestampFormat("yyyy-MM-dd'T'HH:mm:ss.SSSXXX");
// Pretty print (debugging only)
layout.setPrettyPrint(false); // Compact (production)
Minimal Configuration¶
For high-throughput scenarios:
JsonLayout layout = new JsonLayout();
layout.setIncludeThread(false);
layout.setIncludeLogger(false);
layout.setIncludeMdc(false);
// Only timestamp, level, message
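With those fields disabled, each event reduces to:
{
  "timestamp": "2024-01-10T10:30:45.123Z",
  "level": "INFO",
  "message": "Processing user registration"
}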
Output Structure¶
Standard Log Event¶
{
"timestamp": "2024-01-10T10:30:45.123Z",
"level": "INFO",
"thread": "main",
"logger": "io.github.dotbrains.UserService",
"message": "Processing user registration"
}
With MDC Context¶
{
"timestamp": "2024-01-10T10:30:45.123Z",
"level": "INFO",
"thread": "http-nio-8080-exec-1",
"logger": "io.github.dotbrains.OrderService",
"message": "Order created successfully",
"mdc": {
"requestId": "req-12345",
"userId": "user-789",
"orderId": "order-456"
}
}
With Markers¶
{
"timestamp": "2024-01-10T10:30:45.123Z",
"level": "WARN",
"thread": "http-nio-8080-exec-2",
"logger": "io.github.dotbrains.AuthService",
"message": "Failed login attempt",
"marker": "SECURITY",
"mdc": {
"requestId": "req-12346",
"username": "attacker",
"ip": "192.168.1.100"
}
}
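Markers are attached at the call site. A minimal sketch, assuming an SLF4J-style MarkerFactory API (an assumption about this framework's marker support):
Marker security = MarkerFactory.getMarker("SECURITY"); // assumed marker API
logger.warn(security, "Failed login attempt");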
With Exception¶
{
"timestamp": "2024-01-10T10:30:45.123Z",
"level": "ERROR",
"thread": "http-nio-8080-exec-3",
"logger": "io.github.dotbrains.PaymentService",
"message": "Payment processing failed",
"exception": {
"class": "java.lang.IllegalStateException",
"message": "Payment gateway timeout",
"stackTrace": [
"io.github.dotbrains.PaymentService.processPayment(PaymentService.java:45)",
"io.github.dotbrains.OrderController.checkout(OrderController.java:78)",
"jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)"
],
"cause": {
"class": "java.net.SocketTimeoutException",
"message": "Read timed out",
"stackTrace": [
"java.net.SocketInputStream.socketRead0(Native Method)",
"java.net.SocketInputStream.read(SocketInputStream.java:162)"
]
}
},
"mdc": {
"requestId": "req-12347",
"orderId": "order-789"
}
}
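This structure is produced by passing the throwable to the log call, following the usual throwable-as-last-argument convention (assumed here; gateway and order are hypothetical):
try {
    gateway.charge(order); // hypothetical payment call
} catch (IllegalStateException e) {
    // The exception and its cause chain are serialized into the "exception" field
    logger.error("Payment processing failed", e);
}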
Integration with Log Aggregation Systems¶
Elasticsearch (ELK Stack)¶
// Output to file for Filebeat
JsonLayout layout = new JsonLayout();
layout.setTimestampFormat("yyyy-MM-dd'T'HH:mm:ss.SSSXXX");
FileAppender appender = new FileAppender("app.json");
appender.setLayout(layout);
appender.start();
Filebeat configuration:
filebeat.inputs:
- type: log
paths:
- /var/log/app.json
json.keys_under_root: true
json.add_error_key: true
output.elasticsearch:
hosts: ["localhost:9200"]
Logstash¶
// Send directly to Logstash
LogstashAppender appender = new LogstashAppender("localhost", 5000);
appender.setApplicationName("my-service");
appender.setEnvironment("production");
appender.start();
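A matching Logstash pipeline listens on the same port with a JSON codec. A minimal sketch (adjust hosts and ports for your environment):
input {
  tcp {
    port => 5000
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}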
Splunk¶
// HTTP Event Collector (HEC)
JsonLayout layout = new JsonLayout();
// Custom HTTP appender for Splunk HEC
HttpAppender appender = new HttpAppender(
"https://splunk.example.com:8088/services/collector",
"Splunk YOUR-HEC-TOKEN"
);
appender.setLayout(layout);
appender.start();
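To verify the HEC endpoint independently of the appender, post a test event with curl:
# -k skips TLS verification for self-signed certs (test environments only)
curl -k https://splunk.example.com:8088/services/collector \
  -H "Authorization: Splunk YOUR-HEC-TOKEN" \
  -d '{"event": {"level": "INFO", "message": "HEC connectivity test"}}'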
CloudWatch Logs¶
// CloudWatch Logs appender with JSON
JsonLayout layout = new JsonLayout();
CloudWatchAppender appender = new CloudWatchAppender(
"my-log-group",
"my-log-stream"
);
appender.setLayout(layout);
appender.start();
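Because the events are JSON, CloudWatch Logs Insights can filter on individual fields directly:
fields @timestamp, level, logger, message
| filter level = "ERROR"
| sort @timestamp desc
| limit 50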
Querying JSON Logs¶
Using jq¶
# Extract all ERROR logs
cat app.json | jq 'select(.level == "ERROR")'
# Get unique loggers
cat app.json | jq -r '.logger' | sort -u
# Filter by MDC field
cat app.json | jq 'select(.mdc.requestId == "req-12345")'
# Extract messages and timestamps
cat app.json | jq -r '"\(.timestamp) \(.message)"'
# Count by log level
cat app.json | jq -r '.level' | sort | uniq -c
Elasticsearch Queries¶
{
"query": {
"bool": {
"must": [
{ "match": { "level": "ERROR" } },
{ "range": { "timestamp": { "gte": "now-1h" } } }
]
}
}
}
Splunk Queries¶
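With fields auto-extracted from the JSON, typical searches look like the following (assuming events land in an index named app_logs, a hypothetical name):
# All ERROR events in the last hour
index=app_logs level=ERROR earliest=-1h
# Count events by logger
index=app_logs | stats count by logger
# Follow a single request via MDC
index=app_logs mdc.requestId="req-12345"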
Custom Fields¶
Add application-specific fields by putting them into the MDC:
MDC.put("environment", "production");
MDC.put("version", "1.2.3");
MDC.put("region", "us-east-1");
log.info("Application started");
Output:
{
"timestamp": "2024-01-10T10:30:45.123Z",
"level": "INFO",
"message": "Application started",
"mdc": {
"environment": "production",
"version": "1.2.3",
"region": "us-east-1"
}
}
Performance Considerations¶
Compact Format¶
Use compact (non-pretty) format for production:
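JsonLayout layout = new JsonLayout();
layout.setPrettyPrint(false); // one event per line, no extra whitespace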
Selective Fields¶
Disable unused fields to reduce overhead:
JsonLayout layout = new JsonLayout();
layout.setIncludeThread(false); // Skip if not needed
layout.setIncludeLogger(false); // Skip if not needed
Async Logging¶
Combine with AsyncAppender for high throughput:
JsonLayout layout = new JsonLayout();
FileAppender fileAppender = new FileAppender("app.json");
fileAppender.setLayout(layout);
AsyncAppender asyncAppender = new AsyncAppender(fileAppender);
asyncAppender.setQueueSize(8192);
asyncAppender.start();
Best Practices¶
- Use structured MDC - Add context as MDC fields, not in messages
- Consistent field names - Standardize across services
- Include trace IDs - For distributed tracing correlation
- Use markers for categories - Security, audit, business events
- Validate JSON output - Test with jq or validators
- Compact format in production - One line per event
- Index important fields - Configure Elasticsearch/Splunk indexes
- Retention policies - Archive or delete old logs
Example: Microservices Setup¶
// Configure structured logging for microservices
JsonLayout layout = new JsonLayout();
layout.setTimestampFormat("yyyy-MM-dd'T'HH:mm:ss.SSSXXX");
// Add service metadata to MDC
MDC.put("service", "order-service");
MDC.put("version", "1.2.3");
MDC.put("environment", "production");
MDC.put("region", "us-east-1");
// File appender with async
FileAppender fileAppender = new FileAppender("/var/log/order-service.json");
fileAppender.setLayout(layout);
AsyncAppender asyncAppender = new AsyncAppender(fileAppender);
asyncAppender.setQueueSize(8192);
asyncAppender.start();
logger.addAppender(asyncAppender);
Request handling:
@WebFilter("/*")
public class TracingFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            // Tag every log line emitted during this request with tracing context
            MDC.put("traceId", generateTraceId());
            MDC.put("spanId", generateSpanId());
            MDC.put("requestPath", ((HttpServletRequest) req).getRequestURI());
            chain.doFilter(req, res);
        } finally {
            // Clear request-scoped keys so pooled threads do not leak context
            MDC.remove("traceId");
            MDC.remove("spanId");
            MDC.remove("requestPath");
        }
    }
}
Output:
{
"timestamp": "2024-01-10T10:30:45.123Z",
"level": "INFO",
"logger": "io.github.dotbrains.OrderController",
"message": "Order created",
"mdc": {
"service": "order-service",
"version": "1.2.3",
"environment": "production",
"region": "us-east-1",
"traceId": "abc123",
"spanId": "xyz789",
"requestPath": "/api/orders"
}
}