Identify bottlenecks and optimize code for speed and efficiency.
- Version: 2.0
- Date: 2025-01
- Difficulty: advanced
- Category: Development & Coding
You are a performance optimization expert with deep knowledge of algorithms, caching strategies, database optimization, and system architecture. You identify bottlenecks and provide actionable optimization strategies with measurable improvements.
Analyze the provided code or system for performance issues and create a comprehensive optimization plan.
## System/Code to Optimize
```[LANGUAGE]
[CODE_OR_SYSTEM_DESCRIPTION]
```
## Performance Context
- Current Issues: [DESCRIBE_PERFORMANCE_PROBLEMS]
- Target Metrics: [RESPONSE_TIME/THROUGHPUT]
- User Load: [CONCURRENT_USERS]
## Performance Optimization Plan
### Current Performance Baseline
- Response Time (p95): 850ms
- Throughput: 1000 req/s
- CPU Usage: 75%
- Memory: 2.5GB
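These baseline numbers should come from a repeatable load test rather than ad-hoc observation; below is a minimal k6 sketch (the endpoint URL, load shape, and threshold are assumptions to adapt to your system):
```javascript
// k6 baseline test: 100 virtual users for 30s; fails the run if p95 exceeds 850ms
import http from 'k6/http';

export const options = {
  vus: 100,
  duration: '30s',
  thresholds: {
    http_req_duration: ['p(95)<850'], // matches the current baseline above
  },
};

export default function () {
  http.get('https://api.example.com/users'); // hypothetical hot endpoint
}
```
Run it with `k6 run baseline.js` and record the same four metrics before changing any code, so every optimization below is compared against the same numbers.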
### Bottleneck Analysis
#### 1. Database Performance
**Issue**: N+1 queries causing 60% of response time
**Current Problem**:
```javascript
// N+1 query problem
const users = await User.findAll();
for (const user of users) {
  user.orders = await Order.findByUserId(user.id);
}
```
**Optimized Solution**:
```javascript
// Single query with an eager-loaded join (Sequelize)
const users = await User.findAll({
  include: [{
    model: Order,
    as: 'orders',
  }],
});

// Or batch per-user lookups with DataLoader
const DataLoader = require('dataloader');

const orderLoader = new DataLoader(async (userIds) => {
  // One query for all requested users; Sequelize treats the array as an IN clause
  const orders = await Order.findAll({
    where: { userId: userIds },
  });
  // DataLoader requires results in the same order as the input keys
  return userIds.map((id) => orders.filter((order) => order.userId === id));
});
```
**Impact**: 65% reduction in query time
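Both the eager-loaded join and the DataLoader batch rely on an index over `Orders.userId`; a minimal Sequelize migration sketch (table and index names are assumptions):
```javascript
// Adds an index on Orders.userId so the join/IN queries avoid full table scans
module.exports = {
  async up(queryInterface) {
    await queryInterface.addIndex('Orders', ['userId'], { name: 'orders_user_id_idx' });
  },
  async down(queryInterface) {
    await queryInterface.removeIndex('Orders', 'orders_user_id_idx');
  },
};
```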
#### 2. Memory Optimization
**Issue**: Memory leaks and inefficient data structures
**Solutions**:
```javascript
// Before: load the entire dataset into memory (~500MB), then filter
const allData = await loadAllData();
const filtered = allData.filter((item) => item.active);

// After: stream the file and keep only the records you need
const { createReadStream } = require('fs');
const JSONStream = require('JSONStream');

const active = [];
createReadStream('data.json')
  .pipe(JSONStream.parse('*'))
  .on('data', (item) => {
    if (item.active) {
      active.push(item);
    }
  })
  .on('end', () => {
    // active now holds only the records that passed the filter
  });
```
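To verify the streaming version actually lowers memory, sample the heap before and after each approach with Node's built-in `process.memoryUsage()` (a rough sketch, not a profiler):
```javascript
// Log current heap usage in MB; compare a run of the load-everything code vs. the stream
function logHeap(label) {
  const { heapUsed } = process.memoryUsage();
  console.log(`${label}: ${(heapUsed / 1024 / 1024).toFixed(1)} MB heap used`);
}

logHeap('before processing');
// ... run one of the two approaches here ...
logHeap('after processing');
```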
#### 3. Caching Strategy
```javascript
// Multi-layer cache: in-process LRU in front of Redis
const LRU = require('lru-cache'); // v6-style export; v7+ uses: const { LRUCache } = require('lru-cache')
const redisClient = require('./redisClient'); // assumes an ioredis-style client

const cache = {
  memory: new LRU({ max: 500 }), // L1: in-process memory cache
  redis: redisClient,            // L2: shared Redis cache

  async get(key) {
    // Check L1 first
    if (this.memory.has(key)) {
      return this.memory.get(key);
    }
    // Fall back to L2 and promote the parsed value into L1
    const redisValue = await this.redis.get(key);
    if (redisValue) {
      const parsed = JSON.parse(redisValue);
      this.memory.set(key, parsed);
      return parsed;
    }
    return null;
  },

  async set(key, value, ttl = 3600) {
    this.memory.set(key, value);
    await this.redis.setex(key, ttl, JSON.stringify(value));
  },
};
```
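Typical usage follows the cache-aside pattern; a sketch wrapping a hypothetical user lookup with the `cache` object above:
```javascript
// Cache-aside: check the cache first, fall back to the database, then populate the cache
async function getUser(id) {
  const key = `user:${id}`;
  const cached = await cache.get(key);
  if (cached) return cached;

  const user = await User.findByPk(id); // assumes the Sequelize model used earlier
  if (user) {
    await cache.set(key, user.toJSON(), 600); // 10-minute TTL for user records
  }
  return user;
}
```
Keep TTLs short for data that changes often, and invalidate both layers on writes so the in-process cache never serves stale records longer than Redis does.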
#### 4. Algorithm Optimization
```javascript
// Before: O(n²) — compare every pair
function findDuplicatesQuadratic(arr) {
  const duplicates = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) {
        duplicates.push(arr[i]);
      }
    }
  }
  return duplicates;
}

// After: O(n) — track seen values in a Set
function findDuplicates(arr) {
  const seen = new Set();
  const duplicates = new Set();
  for (const item of arr) {
    if (seen.has(item)) {
      duplicates.add(item);
    }
    seen.add(item);
  }
  return Array.from(duplicates);
}
```
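A rough `console.time` check makes the difference visible on realistic input sizes (illustrative only; use a dedicated benchmark harness for numbers you intend to report):
```javascript
// Build an array with many collisions, then time the O(n) implementation
const data = Array.from({ length: 100_000 }, () => Math.floor(Math.random() * 1000));

console.time('findDuplicates');
findDuplicates(data);
console.timeEnd('findDuplicates');
```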
### Performance Improvements
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Response Time (p95) | 850ms | 320ms | -62% |
| Throughput | 1000 req/s | 2500 req/s | +150% |
| CPU Usage | 75% | 45% | -40% |
| Memory | 2.5GB | 1.8GB | -28% |
### Implementation Plan
1. **Week 1**: Database optimization (N+1 queries, indexing)
2. **Week 2**: Caching implementation (Redis, CDN; see the header sketch after this list)
3. **Week 3**: Algorithm optimization and code refactoring
4. **Week 4**: Load testing and fine-tuning
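The CDN half of Week 2 is largely a matter of response headers; a hedged Express sketch (the route, TTL values, and `loadProducts` helper are assumptions):
```javascript
// Express route that marks a read-heavy response as cacheable by a shared cache/CDN
const express = require('express');
const app = express();

app.get('/api/products', async (req, res) => {
  // public + s-maxage lets the CDN hold the response; stale-while-revalidate smooths expiry
  res.set('Cache-Control', 'public, s-maxage=300, stale-while-revalidate=60');
  res.json(await loadProducts()); // hypothetical data loader
});

app.listen(3000);
```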
## Variables
- `CODE_OR_SYSTEM_DESCRIPTION` (required): Code or system to optimize. Example: API endpoint, database queries, frontend application
- `DESCRIBE_PERFORMANCE_PROBLEMS` (required): Current performance issues. Example: Slow response times, high memory usage