DynamoDB Best Practices: Top 10 Tips for Performance & Cost in 2025
Amazon DynamoDB provides unmatched scalability and performance for applications of all sizes, but only when used correctly. The difference between a well-optimized DynamoDB implementation and a poorly designed one can be thousands of dollars in monthly costs and orders of magnitude in performance.
This comprehensive guide covers the 10 most important best practices for DynamoDB in 2025, helping you build efficient, cost-effective applications that scale seamlessly.
1. Design for Your Access Patterns
The single most important practice in DynamoDB is designing your data model based on how your application will access the data, not how the data is structured conceptually.
Why it matters: Unlike relational databases, where you can add indexes later to optimize various queries, DynamoDB requires upfront planning of access patterns. Your table structure and key design should directly reflect how you’ll query and retrieve data.
Familiar with these DynamoDB challenges?
- Writing one‑off scripts for simple DynamoDB operations
- Constantly switching between AWS profiles and regions
- Sharing and managing database operations with your team
You should try Dynomate, a GUI client for DynamoDB
- Create collections of operations that work together like scripts
- Seamless integration with AWS SSO and profile switching
- Local‑first design with Git‑friendly sharing for team collaboration
How to implement:
- Identify all required access patterns before creating tables
- List every query your application needs to perform
- Design primary keys and indexes to efficiently support those queries
- Consider single-table design for related entities with shared access patterns
Example: For an e-commerce application, if you need to:
- Find orders by customer ID
- Find orders by status and date
- Find order details by order ID
You might design a table with:
- Primary key: PK (OrderID or CustomerID with prefix) and SK (varies by entity type)
- GSI1: GSI1PK (Status) and GSI1SK (Date)
# Base table query: Get order by ID
PK = "ORDER#12345"
SK = "METADATA"
# Base table query: Get customer's orders
PK = "CUSTOMER#67890"
SK begins_with "ORDER#"
# GSI query: Get pending orders from last week
GSI1PK = "STATUS#PENDING"
GSI1SK between "2025-03-23" and "2025-03-30"
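For example, the "get customer's orders" pattern above translates directly into a Query with a key condition on PK and a begins_with condition on SK. Here is a minimal sketch using the AWS SDK v3 DocumentClient (the table name AppTable is illustrative, not from the original design):
// Query all orders for customer 67890 in the single-table design above
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, QueryCommand } = require('@aws-sdk/lib-dynamodb');
const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

const { Items } = await docClient.send(new QueryCommand({
  TableName: 'AppTable', // hypothetical single-table name
  KeyConditionExpression: 'PK = :pk AND begins_with(SK, :skPrefix)',
  ExpressionAttributeValues: {
    ':pk': 'CUSTOMER#67890',
    ':skPrefix': 'ORDER#'
  }
}));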
Further reading: For detailed strategies on data modeling, check our DynamoDB Table Schema Design Guide.
2. Use Queries Efficiently, Avoid Full Table Scans
DynamoDB queries are optimized operations that use indexes to retrieve data, while scans read every item in a table, consuming more throughput and delivering slower performance.
Why it matters: A scan of a large table can consume all your provisioned throughput, throttle other operations, and incur significant costs.
How to implement:
- Always use Query operations instead of Scan when possible
- Design keys and indexes to support all query patterns
- When Scan is unavoidable:
- Use parallel scanning for larger tables
- Configure a smaller page size
- Implement exponential backoff for retries
Example of inefficient vs efficient pattern:
// AVOID: Scanning entire table to find items by status
const scanParams = {
TableName: 'Orders',
FilterExpression: 'OrderStatus = :status',
ExpressionAttributeValues: { ':status': 'SHIPPED' }
};
// BETTER: Query a GSI designed for this access pattern
const queryParams = {
TableName: 'Orders',
IndexName: 'StatusIndex',
KeyConditionExpression: 'OrderStatus = :status',
ExpressionAttributeValues: { ':status': 'SHIPPED' }
};
Remember: FilterExpressions are applied after data is read, so they don’t reduce read capacity consumption.
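When a Scan genuinely cannot be avoided, the mitigations listed above (parallel segments, smaller pages) look roughly like the following sketch. It reuses the docClient from the earlier examples and fetches only the first page of each segment; pagination with LastEvaluatedKey is omitted for brevity:
// Split a scan into 4 parallel segments and merge the first page of each
const { ScanCommand } = require('@aws-sdk/lib-dynamodb');

const totalSegments = 4;
const segmentScans = [];
for (let segment = 0; segment < totalSegments; segment++) {
  segmentScans.push(docClient.send(new ScanCommand({
    TableName: 'Orders',
    Segment: segment,
    TotalSegments: totalSegments,
    Limit: 100 // smaller page size keeps capacity consumption smoother
  })));
}
const pages = await Promise.all(segmentScans);
const items = pages.flatMap(page => page.Items);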
Further reading: For more details on the differences and performance implications, see our DynamoDB Scan vs Query article.
3. Choose High-Cardinality Partition Keys
The partition key determines how DynamoDB distributes your data across multiple storage partitions. Choosing a key with many unique values helps distribute traffic evenly.
Why it matters: Low-cardinality partition keys (like boolean flags or status codes) concentrate traffic on a few partitions, creating “hot” partitions that can throttle your application even when you have enough total capacity.
How to implement:
- Select attributes with many unique values for partition keys (user IDs, order IDs, etc.)
- Avoid attributes with few unique values (status codes, country codes, boolean flags)
- Use composite keys when needed to increase cardinality
- Consider adding a random suffix to distribute items with the same logical key
Poor choice examples:
- Status code as partition key (only a few distinct values)
- Date as partition key (traffic concentrated on recent dates)
- Boolean flags (only two possible values)
Good choice examples:
- User ID (many unique users)
- Order ID (unique per order)
- Session ID (unique per session)
Handling hot keys: If you must query by a low-cardinality attribute, consider adding a random suffix:
// Instead of using just "STATUS#PENDING" as the key for all pending orders
// Use "STATUS#PENDING#1", "STATUS#PENDING#2", etc.
// When writing:
item.GSI1PK = `STATUS#PENDING#${Math.floor(Math.random() * 10)}`;
// When reading (must query every shard and merge the results):
let allResults = [];
for (let i = 0; i < 10; i++) {
  // queryPartition is a small helper wrapping a QueryCommand for one shard's key
  const results = await queryPartition(`STATUS#PENDING#${i}`);
  allResults = allResults.concat(results);
}
4. Leverage Secondary Indexes (But Don’t Over-Index)
Secondary indexes allow querying data using alternative key attributes, but each index increases costs and write latency.
Why it matters: Well-designed indexes enable efficient queries on non-primary key attributes, but too many indexes increase storage costs and write capacity consumption.
How to implement:
- Create indexes only for specific access patterns
- Choose between Global and Local Secondary Indexes wisely:
- GSIs for completely different key schemas (can be added at any time)
- LSIs for an alternative sort key on the same partition key, when you need strongly consistent reads (must be defined at table creation)
- Be selective with attribute projections
- Consider sparse indexes to reduce size and cost
Index example:
// Table definition with a GSI for querying orders by customer (input for CreateTableCommand)
const params = {
TableName: 'Orders',
AttributeDefinitions: [
{ AttributeName: 'OrderId', AttributeType: 'S' },
{ AttributeName: 'CustomerId', AttributeType: 'S' },
{ AttributeName: 'OrderDate', AttributeType: 'S' }
],
KeySchema: [
{ AttributeName: 'OrderId', KeyType: 'HASH' }
],
GlobalSecondaryIndexes: [
{
IndexName: 'CustomerIndex',
KeySchema: [
{ AttributeName: 'CustomerId', KeyType: 'HASH' },
{ AttributeName: 'OrderDate', KeyType: 'RANGE' }
],
Projection: {
ProjectionType: 'INCLUDE',
NonKeyAttributes: ['OrderStatus', 'TotalAmount']
},
ProvisionedThroughput: {
ReadCapacityUnits: 5,
WriteCapacityUnits: 5
}
}
],
ProvisionedThroughput: {
ReadCapacityUnits: 5,
WriteCapacityUnits: 5
}
};
Sparse index example: A GSI is sparse when its key attribute appears on only some items; items without the attribute are simply left out of the index, keeping it small and cheap. Querying the index therefore touches only the items that carry the attribute:
// Query parameters for a sparse index: only items that have a 'PromotionId'
// attribute exist in this index, so the query reads just the promoted products
const params = {
  TableName: 'Products',
  IndexName: 'PromotionIndex',
  KeyConditionExpression: 'PromotionId = :promotionId',
  ExpressionAttributeValues: {
    ':promotionId': 'SUMMER2025'
  }
};
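Creating the sparse index itself is just an ordinary GSI definition; the sparseness comes entirely from the fact that only some items carry the key attribute. A minimal sketch using the SDK v3 low-level client to add the hypothetical PromotionIndex to an existing table:
// Add a sparse GSI to an existing table; items without PromotionId are simply not indexed
const { DynamoDBClient, UpdateTableCommand } = require('@aws-sdk/client-dynamodb');
const client = new DynamoDBClient({});

await client.send(new UpdateTableCommand({
  TableName: 'Products',
  AttributeDefinitions: [
    { AttributeName: 'PromotionId', AttributeType: 'S' }
  ],
  GlobalSecondaryIndexUpdates: [
    {
      Create: {
        IndexName: 'PromotionIndex',
        KeySchema: [{ AttributeName: 'PromotionId', KeyType: 'HASH' }],
        Projection: { ProjectionType: 'KEYS_ONLY' },
        ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
      }
    }
  ]
}));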
Further reading: For more details on secondary indexes, see our DynamoDB Indexes Explained guide.
5. Use Provisioned Capacity Wisely (Or On-Demand)
DynamoDB offers two capacity modes:
- Provisioned capacity: You specify read and write throughput
- On-demand capacity: You pay per request with automatic scaling
Why it matters: Choosing the right capacity mode and settings can save thousands of dollars and prevent unnecessary throttling.
How to implement:
- Choose capacity mode based on traffic patterns:
- Predictable, steady traffic → Provisioned with auto-scaling
- Unpredictable, spiky traffic → On-demand
- Low baseline traffic with occasional spikes → Provisioned with extra headroom, or on-demand if the spikes are hard to predict
- For provisioned capacity:
- Enable auto-scaling with appropriate minimums and maximums
- Set target utilization around 70% to allow headroom for spikes
- Monitor and adjust capacity settings regularly
Auto-scaling example:
// AWS SDK v3 example for setting up auto-scaling
const {
  ApplicationAutoScalingClient,
  RegisterScalableTargetCommand,
  PutScalingPolicyCommand
} = require('@aws-sdk/client-application-auto-scaling');
const applicationAutoScalingClient = new ApplicationAutoScalingClient({});
await applicationAutoScalingClient.send(new RegisterScalableTargetCommand({
ServiceNamespace: 'dynamodb',
ResourceId: 'table/Orders',
ScalableDimension: 'dynamodb:table:WriteCapacityUnits',
MinCapacity: 5,
MaxCapacity: 100
}));
await applicationAutoScalingClient.send(new PutScalingPolicyCommand({
ServiceNamespace: 'dynamodb',
ResourceId: 'table/Orders',
ScalableDimension: 'dynamodb:table:WriteCapacityUnits',
PolicyName: 'WriteAutoScalingPolicy',
PolicyType: 'TargetTrackingScaling',
TargetTrackingScalingPolicyConfiguration: {
TargetValue: 70.0,
PredefinedMetricSpecification: {
PredefinedMetricType: 'DynamoDBWriteCapacityUtilization'
}
}
}));
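If you opt for on-demand instead, an existing table can be switched with a single UpdateTable call. A minimal sketch using the SDK v3 low-level client:
// Switch an existing table from provisioned to on-demand billing
const { DynamoDBClient, UpdateTableCommand } = require('@aws-sdk/client-dynamodb');
const dynamoDBClient = new DynamoDBClient({});

await dynamoDBClient.send(new UpdateTableCommand({
  TableName: 'Orders',
  BillingMode: 'PAY_PER_REQUEST'
}));
Keep in mind that DynamoDB limits how often you can switch billing modes (roughly once per 24 hours), so treat this as a deliberate change rather than a tuning knob.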
Further reading: Explore DynamoDB On-Demand vs Provisioned Scaling for an in-depth comparison.
6. Implement Exponential Backoff on Throttling
When DynamoDB throttles requests due to exceeding provisioned capacity, implementing an exponential backoff retry strategy helps your application recover gracefully.
Why it matters: Without proper retry handling, throttled requests can cause application failures, poor user experience, and increased latency.
How to implement:
- Catch throttling exceptions (ProvisionedThroughputExceededException)
- Implement exponential backoff:
- Start with a base delay (e.g., 50ms)
- Double delay time on each retry
- Add jitter (random variation) to prevent retry storms
- Set a maximum number of retries
- Monitor throttling events to identify capacity issues
Implementation example (Node.js):
// Assumes docClient is a DynamoDBDocumentClient and QueryCommand comes from @aws-sdk/lib-dynamodb
async function queryWithRetry(params, maxRetries = 8) {
  let retries = 0;
  let delay = 50; // Start with 50ms delay
  while (true) {
    try {
      return await docClient.send(new QueryCommand(params));
    } catch (err) {
      if (err.name !== 'ProvisionedThroughputExceededException' || retries >= maxRetries) {
        throw err; // Rethrow if not throttling or max retries exceeded
      }
      // Calculate delay with jitter
      const jitter = Math.random() * 100;
      const waitTime = delay + jitter;
      console.log(`Request throttled, retrying in ${waitTime}ms (retry ${retries + 1}/${maxRetries})`);
      // Wait before retry
      await new Promise(resolve => setTimeout(resolve, waitTime));
      // Exponential backoff
      delay *= 2;
      retries++;
    }
  }
}
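Note that the AWS SDK v3 already retries throttled requests on its own; the hand-rolled loop above is mainly useful when you want custom logging or backoff behavior. The built-in behavior can also be tuned directly on the client, as in this small sketch:
// Let the SDK handle retries with adaptive client-side rate limiting
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const client = new DynamoDBClient({
  maxAttempts: 10,       // total attempts, including the initial request
  retryMode: 'adaptive'  // adds client-side rate limiting on top of exponential backoff
});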
Further reading: Learn more about handling throttling in our DynamoDB Throttling article.
7. Use Conditional Writes for Consistency
Conditional writes allow you to specify conditions that must be true for a write operation to succeed, enabling optimistic locking and preventing data consistency issues.
Why it matters: Without conditional writes, concurrent updates can lead to data corruption, lost updates, and race conditions.
How to implement:
- Use condition expressions with write operations
- Implement version numbers for optimistic locking
- Use attribute existence checks to prevent unintended overwrites
- Use transactions for operations that must succeed or fail as a unit
Conditional write example (optimistic locking):
// First, read the item
const { Item } = await docClient.send(new GetCommand({
TableName: 'Products',
Key: { ProductId: 'P123' }
}));
const currentVersion = Item.Version || 0;
// Then update with condition
try {
await docClient.send(new UpdateCommand({
TableName: 'Products',
Key: { ProductId: 'P123' },
UpdateExpression: 'SET Price = :price, Version = :newVersion',
ConditionExpression: 'Version = :currentVersion',
ExpressionAttributeValues: {
':price': 29.99,
':currentVersion': currentVersion,
':newVersion': currentVersion + 1
}
}));
console.log('Update successful');
} catch (err) {
if (err.name === 'ConditionalCheckFailedException') {
console.log('Item was modified by another process!');
// Handle conflict - perhaps retry or merge changes
} else {
throw err;
}
}
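The existence-check pattern from the list above is even simpler: refuse to create an item that already exists. A minimal sketch using the same DocumentClient (item attributes are illustrative):
// Create the product only if no item with this ProductId exists yet
const { PutCommand } = require('@aws-sdk/lib-dynamodb');

try {
  await docClient.send(new PutCommand({
    TableName: 'Products',
    Item: { ProductId: 'P123', Name: 'Widget', Price: 19.99 },
    ConditionExpression: 'attribute_not_exists(ProductId)'
  }));
} catch (err) {
  if (err.name === 'ConditionalCheckFailedException') {
    console.log('Product already exists, not overwriting');
  } else {
    throw err;
  }
}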
Further reading: Explore our DynamoDB Locking guide for more advanced concurrency patterns.
8. Exploit DynamoDB TTL for Data Expiry
Time to Live (TTL) automatically removes items from your table when they expire, helping manage data lifecycle and reduce storage costs.
Why it matters: TTL provides a maintenance-free way to purge old data, which is crucial for logs, sessions, temporary data, and compliance with data retention policies.
How to implement:
- Add a TTL attribute (a Number attribute holding a Unix epoch timestamp in seconds)
- Enable TTL on your table, specifying the TTL attribute
- Set expiration times when writing items
TTL example:
// Calculate expiration time (24 hours from now)
const expiryTime = Math.floor(Date.now() / 1000) + (24 * 60 * 60);
// Add an item with TTL
await docClient.send(new PutCommand({
TableName: 'Sessions',
Item: {
SessionId: 'abc123',
UserId: 'user456',
LastActivity: new Date().toISOString(),
ExpiresAt: expiryTime // TTL attribute
}
}));
// Enable TTL on the table (one-time setup)
await dynamoDBClient.send(new UpdateTimeToLiveCommand({
TableName: 'Sessions',
TimeToLiveSpecification: {
Enabled: true,
AttributeName: 'ExpiresAt'
}
}));
Common TTL use cases:
- Session data
- Temporary tokens and one-time codes
- Cache entries
- Event logs with retention policies
- Trial accounts or features
Further reading: For a complete guide to expiring data, see our DynamoDB TTL article.
9. Consider Item Size and Attribute Storage
DynamoDB has a 400KB maximum item size limit, and each attribute name and value contributes to this limit.
Why it matters: Exceeding item size limits causes errors, and even approaching these limits can lead to increased costs and reduced performance.
How to implement:
- Keep items small by avoiding unnecessary attributes
- Use attribute names efficiently (short attribute names save space)
- Store large objects in S3 with references in DynamoDB
- Project only necessary attributes to secondary indexes
- Consider compressing large attribute values
S3 integration example:
// Store large content in S3
const s3Client = new S3Client({});
await s3Client.send(new PutObjectCommand({
Bucket: 'my-app-content',
Key: `product-descriptions/${productId}.json`,
Body: JSON.stringify(longDescription),
ContentType: 'application/json'
}));
// Store reference in DynamoDB
await docClient.send(new PutCommand({
TableName: 'Products',
Item: {
ProductId: productId,
Name: productName,
// Instead of storing large description directly
DescriptionS3Key: `product-descriptions/${productId}.json`
}
}));
Considerations for large items:
- Items approaching 400KB incur higher RCU/WCU costs
- Network transfer time increases with item size
- Large items can lead to hot partitions
- Consider denormalizing only frequently accessed attributes
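For the compression tactic mentioned earlier in this section, Node's built-in zlib is usually enough; the compressed value is stored as a binary attribute. A minimal sketch, reusing the docClient, PutCommand, and GetCommand from the earlier examples (attribute names are chosen for illustration):
const { gzipSync, gunzipSync } = require('zlib');

// Compress a large text attribute before writing
const compressed = gzipSync(JSON.stringify(longDescription));

await docClient.send(new PutCommand({
  TableName: 'Products',
  Item: {
    ProductId: productId,
    Name: productName,
    DescriptionGz: compressed // stored as a DynamoDB Binary attribute
  }
}));

// Decompress after reading
const { Item } = await docClient.send(new GetCommand({
  TableName: 'Products',
  Key: { ProductId: productId }
}));
const description = JSON.parse(gunzipSync(Item.DescriptionGz).toString());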
10. Utilize DynamoDB Integration Features
DynamoDB offers several advanced features and integrations that can enhance your application:
Why it matters: These features extend DynamoDB’s capabilities and integrate it with the broader AWS ecosystem.
Key features to leverage:
DynamoDB Accelerator (DAX)
DAX provides microsecond latency for read operations by caching results in memory.
// Connect through DAX instead of directly to DynamoDB
// (uses the amazon-dax-client package; the endpoint is your cluster's discovery endpoint)
const AmazonDaxClient = require('amazon-dax-client');
const dax = new AmazonDaxClient({
  endpoints: ['dax-endpoint.region.amazonaws.com:8111'],
  region: 'us-east-1' // adjust to your cluster's region
});
When to use DAX:
- High read-to-write ratio workloads
- Applications with repeated reads of the same items
- When you need sub-millisecond latency
DynamoDB Streams & Lambda
Streams capture item-level changes in real-time, enabling event-driven architectures.
// Enable Streams on table creation
const params = {
TableName: 'Orders',
KeySchema: [/* ... */],
AttributeDefinitions: [/* ... */],
ProvisionedThroughput: {/* ... */},
StreamSpecification: {
StreamEnabled: true,
StreamViewType: 'NEW_AND_OLD_IMAGES'
}
};
Common Stream use cases:
- Real-time dashboards and analytics
- Notifications
- Materialized views
- Cross-region replication
- Data archiving
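On the consuming side, a Lambda function subscribed to the stream receives batches of change records. A minimal handler sketch (the downstream notification logic is left as a placeholder):
// Lambda handler for DynamoDB Streams events
exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName === 'INSERT') {
      // NewImage is in DynamoDB JSON format (attribute-type wrappers like { S: '...' })
      const newOrder = record.dynamodb.NewImage;
      console.log('New order created:', newOrder.OrderId.S);
      // e.g., push a notification or update a materialized view here
    }
  }
};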
Transactions
Transactions provide all-or-nothing operations across multiple items and tables.
// Execute a transaction across multiple items
await docClient.send(new TransactWriteCommand({
TransactItems: [
{
Put: {
TableName: 'Orders',
Item: { /* order data */ }
}
},
{
Update: {
TableName: 'Inventory',
Key: { ProductId: 'P123' },
UpdateExpression: 'SET Stock = Stock - :qty',
ConditionExpression: 'Stock >= :qty',
ExpressionAttributeValues: { ':qty': 1 }
}
},
{
Update: {
TableName: 'CustomerProfiles',
Key: { CustomerId: 'C456' },
UpdateExpression: 'SET OrderCount = OrderCount + :val',
ExpressionAttributeValues: { ':val': 1 }
}
}
]
}));
When to use transactions:
- Financial applications
- Inventory management
- Any scenario requiring ACID guarantees
Further reading: Explore these features in more detail in our dedicated articles on DynamoDB Streams vs Kinesis and DynamoDB Transactions.
Bonus: Monitoring and Performance Analysis
Implementing these best practices is only the beginning. For optimal DynamoDB performance, implement comprehensive monitoring and analytics:
- Set up CloudWatch Alarms for:
- ProvisionedThroughputExceededExceptions
- ConsumedReadCapacityUnits approaching provisioned capacity
- ConsumedWriteCapacityUnits approaching provisioned capacity
- Enable CloudWatch Contributor Insights to identify:
- Most accessed items (hot keys)
- Most throttled keys
- Analyze DynamoDB usage regularly:
- Review slow operations using X-Ray
- Identify unused indexes
- Evaluate capacity utilization trends
- Implement cost allocation tags to track costs by feature or component
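As one concrete starting point, an alarm on read throttling for a single table might be created like this (a sketch using the SDK v3 CloudWatch client; the threshold and SNS topic ARN are placeholders to adapt):
// Alarm when the Orders table sees any read throttling in a 5-minute window
const { CloudWatchClient, PutMetricAlarmCommand } = require('@aws-sdk/client-cloudwatch');
const cloudWatchClient = new CloudWatchClient({});

await cloudWatchClient.send(new PutMetricAlarmCommand({
  AlarmName: 'Orders-ReadThrottleEvents',
  Namespace: 'AWS/DynamoDB',
  MetricName: 'ReadThrottleEvents',
  Dimensions: [{ Name: 'TableName', Value: 'Orders' }],
  Statistic: 'Sum',
  Period: 300,
  EvaluationPeriods: 1,
  Threshold: 1,
  ComparisonOperator: 'GreaterThanOrEqualToThreshold',
  AlarmActions: ['arn:aws:sns:us-east-1:123456789012:dynamodb-alerts'] // placeholder topic
}));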
Switching from Dynobase? Try Dynomate
Developers are switching to Dynomate for these key advantages:
Better Multi-Profile Support
- Native AWS SSO integration
- Seamless profile switching
- Multiple accounts in a single view
Developer-Focused Workflow
- Script-like operation collections
- Chain data between operations
- Full AWS API logging for debugging
Team Collaboration
- Git-friendly collection sharing
- No account required for installation
- Local-first data storage for privacy
Privacy & Security
- No account creation required
- 100% local data storage
- No telemetry or usage tracking
Conclusion
DynamoDB offers incredible scalability and performance, but only when used according to these best practices. By designing for your access patterns, choosing the right keys and indexes, managing capacity effectively, and leveraging advanced features, you can build applications that are fast, reliable, and cost-effective.
Remember that DynamoDB optimization is an ongoing process. Continuously monitor your tables, evaluate your access patterns, and refine your data model as your application evolves.
Need help managing and optimizing your DynamoDB implementation? Dynomate provides intuitive tools for visualizing your data, analyzing access patterns, and identifying optimization opportunities without writing complex code. With Dynomate, you can implement all these best practices more easily and ensure your DynamoDB tables are running at peak efficiency.
For more in-depth information on specific DynamoDB topics, check out our related articles: