# Save Money on AWS S3: Automate Storage Tiering with Terraform

2026-01-30 · admin
If you're storing data in AWS S3, you've probably noticed that storage costs can add up quickly. The good news? You can significantly reduce these costs by automatically moving objects that haven't been accessed in a while to cheaper storage tiers. In this article, I'll show you how to implement S3 lifecycle policies using Terraform to do exactly that.

## Understanding S3 Storage Classes

Before diving into the code, let's quickly review the S3 storage classes:

- S3 Standard - Frequently accessed data (~$0.023/GB)
- S3 Standard-IA - Infrequent access (~$0.0125/GB), roughly 46% cheaper
- S3 Intelligent-Tiering - Auto-optimizes based on access patterns
- S3 Glacier Instant Retrieval - Archive with instant access (~$0.004/GB)
- S3 Glacier Deep Archive - Long-term archive (~$0.00099/GB), 95%+ cheaper

## The Problem: Forgotten Data Costing You Money

Many organizations have S3 buckets filled with objects that were uploaded once and rarely accessed again:

- Old application logs
- Backup files
- Historical reports
- Archived uploads
- Legacy projects

Without lifecycle policies, all of this data sits in S3 Standard storage, bleeding money every month.

## The Solution: Terraform Lifecycle Policies

Let's create Terraform configurations that automatically transition objects to cheaper storage tiers based on their age.

## Basic Example: Simple Tiering Strategy

Here's a straightforward lifecycle policy that moves objects to Standard-IA after 30 days, Glacier Instant Retrieval after 90, and Deep Archive after 180:
```hcl
resource "aws_s3_bucket" "cost_optimized_bucket" {
  bucket = "my-cost-optimized-bucket"
}

resource "aws_s3_bucket_lifecycle_configuration" "bucket_lifecycle" {
  bucket = aws_s3_bucket.cost_optimized_bucket.id

  rule {
    id     = "transition-old-objects"
    status = "Enabled"

    # Empty filter applies the rule to every object in the bucket
    # (AWS provider v4+ requires either filter or prefix in each rule)
    filter {}

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 90
      storage_class = "GLACIER_IR"
    }

    transition {
      days          = 180
      storage_class = "DEEP_ARCHIVE"
    }
  }
}
```

With this policy in place, every object follows the same timeline:

- Day 30 → Standard-IA
- Day 90 → Glacier Instant Retrieval
- Day 180 → Deep Archive
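To make the schedule concrete, here's a small Python sketch (illustrative only, not part of the Terraform setup; the per-GB prices are the approximate rates from the storage-class overview above):

```python
# Mirrors the 30/90/180-day schedule from the Terraform rule above.
# Prices are approximate $/GB-month rates and will vary by region.
SCHEDULE = [
    (180, "DEEP_ARCHIVE", 0.00099),
    (90, "GLACIER_IR", 0.004),
    (30, "STANDARD_IA", 0.0125),
    (0, "STANDARD", 0.023),
]

def storage_class_at(age_days: int) -> str:
    """Return the storage class an object of this age would be in."""
    for threshold, storage_class, _price in SCHEDULE:
        if age_days >= threshold:
            return storage_class
    return "STANDARD"

def monthly_cost_gb(age_days: int, size_gb: float) -> float:
    """Approximate monthly storage cost for an object of this age."""
    for threshold, _storage_class, price in SCHEDULE:
        if age_days >= threshold:
            return round(size_gb * price, 5)
    return round(size_gb * 0.023, 5)

print(storage_class_at(10))   # STANDARD
print(storage_class_at(45))   # STANDARD_IA
print(storage_class_at(400))  # DEEP_ARCHIVE
```

A 100 GB object thus drops from about $2.30/month fresh to under $0.10/month once it reaches Deep Archive.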
## Advanced Example: Multiple Rules for Different Prefixes

Different data types need different strategies. Here's how to handle logs, backups, user uploads, and temp files in a single configuration:
```hcl
resource "aws_s3_bucket" "application_data" {
  bucket = "my-application-data-bucket"
}

resource "aws_s3_bucket_lifecycle_configuration" "advanced_lifecycle" {
  bucket = aws_s3_bucket.application_data.id

  # Application logs - aggressive archiving
  rule {
    id     = "archive-logs"
    status = "Enabled"

    filter {
      prefix = "logs/"
    }

    # S3 requires objects to be at least 30 days old before
    # transitioning to Standard-IA
    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER_IR"
    }

    transition {
      days          = 90
      storage_class = "DEEP_ARCHIVE"
    }

    expiration {
      days = 365 # Delete after 1 year
    }
  }

  # Backups - near-immediate archival
  rule {
    id     = "archive-backups"
    status = "Enabled"

    filter {
      prefix = "backups/"
    }

    transition {
      days          = 1
      storage_class = "GLACIER_IR"
    }

    transition {
      days          = 30
      storage_class = "DEEP_ARCHIVE"
    }

    expiration {
      days = 2555 # ~7 years for compliance
    }
  }

  # User uploads - moderate archiving
  rule {
    id     = "transition-user-content"
    status = "Enabled"

    filter {
      prefix = "uploads/"
    }

    transition {
      days          = 60
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER_IR"
    }
  }

  # Temporary files - auto-cleanup
  rule {
    id     = "cleanup-temp-files"
    status = "Enabled"

    filter {
      prefix = "temp/"
    }

    expiration {
      days = 7
    }
  }
}
```
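You can think of these rules as a lookup from key prefix to schedule. This hypothetical Python simulation (the data structure and function names are my own, not anything Terraform or S3 exposes) shows what happens to an object under the rules above:

```python
# Hypothetical simulation of the per-prefix rules above.
# Each schedule is a list of (day, action); "EXPIRE" means deletion.
RULES = {
    "logs/":    [(30, "STANDARD_IA"), (60, "GLACIER_IR"), (90, "DEEP_ARCHIVE"), (365, "EXPIRE")],
    "backups/": [(1, "GLACIER_IR"), (30, "DEEP_ARCHIVE"), (2555, "EXPIRE")],
    "uploads/": [(60, "STANDARD_IA"), (180, "GLACIER_IR")],
    "temp/":    [(7, "EXPIRE")],
}

def state_at(key: str, age_days: int) -> str:
    """Storage class (or EXPIRE) for a key at a given age.

    Objects that match no rule, or haven't hit a threshold yet,
    stay in STANDARD.
    """
    for prefix, schedule in RULES.items():
        if key.startswith(prefix):
            state = "STANDARD"
            for day, action in schedule:
                if age_days >= day:
                    state = action
            return state
    return "STANDARD"

print(state_at("logs/app.log", 45))      # STANDARD_IA
print(state_at("temp/scratch.bin", 10))  # EXPIRE
```

Sketching rules this way before writing HCL is a cheap way to check that your schedules don't contradict each other.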
## Using Intelligent-Tiering for Unknown Access Patterns

Not sure about your access patterns? Let AWS handle it automatically:
```hcl
resource "aws_s3_bucket" "intelligent_bucket" {
  bucket = "my-intelligent-bucket"
}

resource "aws_s3_bucket_lifecycle_configuration" "intelligent_lifecycle" {
  bucket = aws_s3_bucket.intelligent_bucket.id

  rule {
    id     = "enable-intelligent-tiering"
    status = "Enabled"

    filter {}

    # days = 0 moves objects to Intelligent-Tiering immediately
    transition {
      days          = 0
      storage_class = "INTELLIGENT_TIERING"
    }
  }
}

resource "aws_s3_bucket_intelligent_tiering_configuration" "intelligent_config" {
  bucket = aws_s3_bucket.intelligent_bucket.id
  name   = "EntireBucket"

  tiering {
    access_tier = "ARCHIVE_ACCESS"
    days        = 90
  }

  tiering {
    access_tier = "DEEP_ARCHIVE_ACCESS"
    days        = 180
  }
}
```
## Filtering by Tags

Create lifecycle rules based on object tags for more granular control:
```hcl
resource "aws_s3_bucket_lifecycle_configuration" "tag_based_lifecycle" {
  bucket = aws_s3_bucket.cost_optimized_bucket.id

  rule {
    id     = "archive-by-tag"
    status = "Enabled"

    filter {
      tag {
        key   = "archive"
        value = "true"
      }
    }

    transition {
      days          = 1
      storage_class = "GLACIER_IR"
    }
  }

  rule {
    id     = "delete-temporary"
    status = "Enabled"

    # "and" combines a prefix filter with tag filters
    filter {
      and {
        prefix = "temp/"
        tags = {
          type = "temporary"
        }
      }
    }

    expiration {
      days = 30
    }
  }
}
```
## Complete Production-Ready Example

This example combines versioning, noncurrent version management, and cleanup rules:
```hcl
resource "aws_s3_bucket" "production_data" {
  bucket = "my-production-data-${var.environment}"

  tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    Purpose     = "Cost-optimized storage"
  }
}

resource "aws_s3_bucket_versioning" "production_versioning" {
  bucket = aws_s3_bucket.production_data.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "production_lifecycle" {
  bucket = aws_s3_bucket.production_data.id

  # Current version lifecycle
  rule {
    id     = "transition-current-versions"
    status = "Enabled"

    filter {}

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 90
      storage_class = "GLACIER_IR"
    }

    transition {
      days          = 180
      storage_class = "DEEP_ARCHIVE"
    }
  }

  # Noncurrent version lifecycle
  rule {
    id     = "transition-noncurrent-versions"
    status = "Enabled"

    filter {}

    # The 30-day Standard-IA minimum applies to noncurrent versions too
    noncurrent_version_transition {
      noncurrent_days = 30
      storage_class   = "STANDARD_IA"
    }

    noncurrent_version_transition {
      noncurrent_days = 60
      storage_class   = "GLACIER_IR"
    }

    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }

  # Clean up incomplete multipart uploads
  rule {
    id     = "cleanup-incomplete-uploads"
    status = "Enabled"

    filter {}

    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }

  # Clean up expired delete markers
  rule {
    id     = "cleanup-delete-markers"
    status = "Enabled"

    filter {}

    expiration {
      expired_object_delete_marker = true
    }
  }
}
```
## Calculating Your Savings

Example: 10 TB of data in S3 Standard that hasn't been accessed in 6 months.

- S3 Standard: 10,000 GB × $0.023/GB = $230.00/month
- Glacier Deep Archive: 10,000 GB × $0.00099/GB = $9.90/month

Savings: $220.10/month (a 95.7% reduction), or $2,641.20 per year.
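The arithmetic is easy to reproduce and adapt to your own bucket sizes. A quick sketch (using the approximate prices from the storage-class overview, and 10 TB taken as 10,000 GB to match the figures above):

```python
# Reproduce the savings estimate: 10 TB idle in S3 Standard vs. Deep Archive.
SIZE_GB = 10_000             # 10 TB, using 1 TB = 1,000 GB as in the article
STANDARD_PRICE = 0.023       # $/GB-month, approximate
DEEP_ARCHIVE_PRICE = 0.00099 # $/GB-month, approximate

standard_monthly = SIZE_GB * STANDARD_PRICE          # $230.00
deep_archive_monthly = SIZE_GB * DEEP_ARCHIVE_PRICE  # $9.90
monthly_savings = standard_monthly - deep_archive_monthly

print(f"Monthly savings: ${monthly_savings:.2f}")
print(f"Annual savings:  ${monthly_savings * 12:.2f}")
print(f"Reduction:       {monthly_savings / standard_monthly:.1%}")
```

Swap in your own bucket size and regional prices to estimate your savings before applying any policy.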
## Monitoring and Validation

Track the effectiveness of your lifecycle policies with the AWS CLI:

```bash
# Check the lifecycle rules applied to a bucket
aws s3api get-bucket-lifecycle-configuration --bucket your-bucket-name

# List each object's storage class
aws s3api list-objects-v2 --bucket your-bucket-name \
  --query 'Contents[*].[Key,StorageClass,LastModified]' --output table
```

Use AWS Cost Explorer to track storage cost reductions over time.
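For buckets with many objects, the CLI table gets unwieldy. If you'd rather script the check, here's a sketch using boto3 (the bucket name is a placeholder, and `summarize_by_storage_class` is my own helper, not a boto3 API):

```python
from collections import defaultdict

def summarize_by_storage_class(objects):
    """Aggregate total bytes per storage class from list_objects_v2 entries."""
    totals = defaultdict(int)
    for obj in objects:
        totals[obj.get("StorageClass", "STANDARD")] += obj.get("Size", 0)
    return dict(totals)

if __name__ == "__main__":
    import boto3  # assumes AWS credentials are configured

    s3 = boto3.client("s3")
    contents = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="your-bucket-name"):  # placeholder
        contents.extend(page.get("Contents", []))

    for storage_class, total in summarize_by_storage_class(contents).items():
        print(f"{storage_class}: {total / 1e9:.2f} GB")
```

Running this before and after your lifecycle policies kick in gives a concrete picture of how much data has actually moved.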
## Best Practices

- Start Conservative - Begin with longer transition periods and adjust based on observed access data
- Monitor Access Patterns - Use S3 Storage Lens or CloudWatch storage metrics
- Test First - Apply policies to a test bucket before production
- Consider Retrieval Costs - Glacier classes charge per-GB retrieval fees
- Use Intelligent-Tiering - When uncertain, let AWS optimize for you
- Clean Up Old Versions - Don't forget noncurrent versions in versioned buckets

## Common Pitfalls to Avoid

- Too Aggressive Transitions - Moving data to Glacier too quickly incurs retrieval costs when you need it back
- Minimum Storage Durations - Standard-IA has a 30-day minimum charge; Glacier classes have 90-180 day minimums
- Small Objects - Objects under 128 KB are billed as 128 KB in the IA tiers
- Incomplete Uploads - Clean up incomplete multipart uploads to avoid paying for invisible data
- Noncurrent Versions - Don't forget old versions in versioned buckets

## Conclusion

Implementing S3 lifecycle policies with Terraform is one of the easiest ways to reduce your AWS bill. By automatically transitioning infrequently accessed data to cheaper storage tiers, you can save thousands of dollars a year while still meeting compliance and data-retention requirements. Start with the examples in this article, adjust the transition periods to match your access patterns, and watch your storage costs drop!

Have you implemented S3 lifecycle policies? Share your strategies in the comments!

## Resources

- AWS S3 Storage Classes
- Terraform S3 Lifecycle Docs
- S3 Lifecycle Best Practices