
How Can You Identify Underutilized and Idle Resources in AWS?

Originally Published September 2025

By:

Juliana Costa Yereb

Senior FinOps Specialist

A Comprehensive FinOps Guide to Cloud Cost Optimization

When it comes to cloud cost optimization, identifying idle and underutilized resources is often the most efficient place to start. It’s widely acknowledged as a low-effort, high-impact opportunity, yet most guidance stops at the obvious. Everyone agrees it’s important, but few actually explain how to do it.

How do you surface these resources in a complex, distributed cloud environment? What signals should you look for? Which tools can help? And once you find them, how do you decide what to shut down, rightsize, or reassign?

In this article, we’ll break down exactly how to identify idle and underutilized cloud resources, where to find them, and what to watch for.

Common Sources of Cloud Waste: What To Look For

Before you can reduce unnecessary cloud spend, you need to surface where inefficiencies exist. The key is to develop a systematic approach that combines usage data, automation, and platform-specific tools to reveal low-value assets that are quietly inflating costs. 

Below are several actionable techniques your teams can use to locate and assess idle or underutilized resources across AWS environments. 

Track instances with consistently low CPU and memory usage

Cloud instances are often oversized compared to what workloads actually need, leading to consistently low CPU, memory, or disk utilization. These idle or underutilized instances inflate costs without delivering proportional value. The goal is to surface them systematically and decide whether to rightsize or retire them.

To monitor and identify these instances in AWS:

  • Use Amazon CloudWatch to track metrics like CPUUtilization and memory (via the CloudWatch Agent). Instances averaging under 10–15% utilization over a 30-day window are strong candidates for rightsizing.
  • Enable AWS Compute Optimizer, which automatically analyzes historical utilization and recommends smaller, more cost-efficient instance types.
  • Cross-check findings in AWS Cost Explorer to understand which low-utilization instances are driving the highest spend and should be prioritized for action.
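
As a minimal sketch of the threshold check described above, assuming you have already exported CPUUtilization datapoints (for example, from CloudWatch's get_metric_statistics), the filtering logic might look like:

```python
# Sketch: flag instances whose average CPU over the review window falls below
# a rightsizing threshold. The input shape mirrors CloudWatch's
# get_metric_statistics "Datapoints" (a list of {"Average": float} dicts);
# the 15% threshold matches the guideline above and is adjustable.

def is_rightsizing_candidate(datapoints, threshold_pct=15.0):
    """Return True if mean CPU utilization is below threshold_pct."""
    if not datapoints:
        return False  # no data in the window: don't flag blindly
    mean = sum(dp["Average"] for dp in datapoints) / len(datapoints)
    return mean < threshold_pct

# Hypothetical 30-day daily averages for two instances:
busy = [{"Average": 62.0}, {"Average": 55.3}]
idle = [{"Average": 4.1}, {"Average": 7.8}]
print(is_rightsizing_candidate(busy))  # False
print(is_rightsizing_candidate(idle))  # True
```

In practice you would run this per instance across the fleet and feed the flagged IDs into Compute Optimizer's recommendations for a second opinion before resizing.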

Filter for EC2 instances stopped for more than 7 days

Stopped instances can sometimes be misleading. While the compute itself isn't running, associated resources like attached storage volumes, IP addresses, and even monitoring configurations tied to that instance can keep accruing charges quietly in the background. By auditing older stopped instances, you can uncover and clean up these hidden expenses.

When reviewing stopped instances in AWS:

  • In the EC2 Console, filter for instances with state = stopped and older than 7 days.
  • For each, check associated EBS volumes, provisioned IOPS volumes, and snapshots that may no longer be needed.
  • Look for Elastic IPs that were tied to the instance; these incur charges when detached.
  • Review any linked load balancers or target groups still provisioned after the instance was shut down.
  • Don’t forget CloudWatch detailed monitoring or custom metrics, which can continue to bill even if the instance isn’t running.
  • If the instance was launched from a paid Marketplace AMI, confirm whether licensing costs still apply while stopped.

To streamline this process, use AWS Trusted Advisor or Compute Optimizer to flag idle resources automatically, and run AWS CLI scripts (describe-instances, describe-volumes, describe-addresses) to generate cleanup reports you can action on a regular cadence.
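
One practical wrinkle in the "stopped for more than 7 days" filter: the EC2 API does not expose a stop timestamp directly, but describe-instances returns a StateTransitionReason string that usually embeds one. A hedged sketch of parsing it (the format is not guaranteed by the API, so unparseable values are treated as unknown):

```python
import re
from datetime import datetime, timezone

# Sketch: estimate how long an instance has been stopped by parsing the
# timestamp EC2 typically embeds in StateTransitionReason, e.g.
# "User initiated (2025-09-01 10:23:45 GMT)". This format is a convention,
# not a contract, so treat unparseable values as unknown rather than failing.

STOP_TS = re.compile(r"\((\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) GMT\)")

def days_stopped(state_transition_reason, now=None):
    """Return whole days since the embedded stop timestamp, or None."""
    m = STOP_TS.search(state_transition_reason or "")
    if not m:
        return None
    stopped_at = datetime.strptime(
        m.group(1), "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (now - stopped_at).days

ref = datetime(2025, 9, 10, tzinfo=timezone.utc)
print(days_stopped("User initiated (2025-09-01 10:23:45 GMT)", now=ref))  # 8
```

Anything returning more than 7 days goes on the audit list, along with its attached EBS volumes, snapshots, and Elastic IPs.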

By treating stopped instances as cost “anchors,” you ensure that attached resources are archived, downsized, or deleted, rather than left behind to quietly drain the budget.

Identify load balancers with zero active connections

Load balancers often remain in place long after the workloads they supported have been decommissioned or migrated. While inactive, they continue to generate hourly charges and, in some cases, add costs for provisioned capacity or monitoring. Left unchecked, these unused resources can steadily inflate your bill.

In AWS, these metrics aren’t visible directly in the billing platform, so you need to rely on the Management Console and CloudWatch. 

Navigate to the EC2 dashboard → Load Balancers (under Load Balancing), select a load balancer, and review its status and traffic metrics. In CloudWatch, monitor the ActiveConnectionCount metric: If it remains at 0 over time, the load balancer is likely no longer in use.

To take this a step further:

  • Use the AWS CLI (describe-load-balancers and describe-target-health) to identify load balancers with no registered targets or no recent traffic history.
  • For Kubernetes users, check whether AWS Load Balancers were automatically provisioned and left behind after services were deleted.

Tools like AWS Trusted Advisor can also highlight idle load balancers. Building a recurring CloudWatch dashboard or automated report ensures these unused resources don’t linger unnoticed.
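
Complementing the traffic check, a load balancer with no registered targets at all is an even clearer candidate for removal. A sketch of that check, assuming you have already collected describe-target-health output per target group (the load balancer names and sample data below are hypothetical):

```python
# Sketch: given, for each load balancer, the TargetHealthDescriptions lists
# returned by `aws elbv2 describe-target-health` for each of its target
# groups, flag load balancers with zero registered targets anywhere.

def idle_load_balancers(lb_target_health):
    """lb_target_health: {lb_name: [target_health_descriptions, ...]},
    one inner list per target group behind that load balancer."""
    idle = []
    for name, groups in lb_target_health.items():
        if all(len(descriptions) == 0 for descriptions in groups):
            idle.append(name)
    return sorted(idle)

sample = {
    "web-prod": [[{"Target": {"Id": "i-0abc"},
                   "TargetHealth": {"State": "healthy"}}]],
    "legacy-app": [[], []],  # two target groups, both empty
}
print(idle_load_balancers(sample))  # ['legacy-app']
```

Cross-reference the result with a flat ActiveConnectionCount in CloudWatch before deleting, since some architectures register targets dynamically.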

Detect Lambda functions with zero or minimal invocations

It’s easy for Lambda functions to accumulate over time as applications evolve. Some may become obsolete after a migration, while others are left behind when event sources change. Even though Lambda is billed per request, inactive or forgotten functions can still add indirect costs through associated IAM roles, log groups, and provisioned concurrency.

To identify underutilized functions, start with Amazon CloudWatch Metrics. Check the Invocations metric for each function over the past 60–90 days: Functions with consistently zero activity are strong candidates for cleanup. 

In the AWS Management Console, go to Lambda → Monitor tab for each function to review request counts and error rates. For a broader view, use the AWS CLI (list-functions and get-metric-statistics) or AWS CloudTrail to export activity histories across all functions.

Don’t forget to check linked event sources such as API Gateway routes or S3 triggers. If these are no longer active, the associated Lambda functions can be safely deleted. Where functions are still needed but infrequently invoked, consider whether they should remain in production or be consolidated into broader workflows.
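
The invocation check above can be sketched as a simple aggregation, assuming you have exported each function's Invocations datapoints (Sum statistic) from CloudWatch; the function names below are illustrative:

```python
# Sketch: flag Lambda functions with zero total invocations over the review
# window. Input mirrors CloudWatch get_metric_statistics "Datapoints" for
# the Invocations metric with the Sum statistic.

def unused_functions(invocation_sums):
    """invocation_sums: {function_name: [{"Sum": float}, ...]}."""
    return sorted(
        name for name, points in invocation_sums.items()
        if sum(dp["Sum"] for dp in points) == 0
    )

sample = {
    "process-orders": [{"Sum": 1200.0}, {"Sum": 980.0}],
    "legacy-export":  [{"Sum": 0.0}, {"Sum": 0.0}],
    "never-called":   [],  # no datapoints at all in the window
}
print(unused_functions(sample))  # ['legacy-export', 'never-called']
```

Functions surfaced this way still warrant a manual check of their event sources, as noted above, before deletion.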

Use VPC Flow Logs to spot inactive network interfaces

Elastic Network Interfaces (ENIs) are often left behind after workloads are reconfigured or instances are terminated. While small individually, each unattached ENI can incur ongoing charges and add unnecessary clutter to your network environment. Over time, this buildup makes it harder to track which resources are active and which are not.

  • To identify idle ENIs, start by enabling VPC Flow Logs for the relevant VPCs and subnets. These logs capture all network traffic at the interface level. 
  • Query the data using Amazon Athena or CloudWatch Logs Insights to find ENIs that have had no inbound or outbound traffic over the last 30 days. 
  • In the EC2 Management Console, you can also filter ENIs by status to check which ones are unattached.

Before deleting, confirm that the interface isn’t required by a managed service such as RDS, Lambda, or Elastic Load Balancing, as these services can automatically create ENIs that are still needed even if they appear inactive. For systematic detection, use AWS Config rules or the AWS CLI (describe-network-interfaces) to generate reports of unattached or idle ENIs.

Regular cleanup not only saves on small but recurring costs, it also improves network hygiene and reduces the chance of misconfigured or forgotten interfaces creating operational risks.
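
For the systematic detection step, the filter over a describe-network-interfaces response can be sketched as follows. In the EC2 API an ENI with Status "available" is unattached, and interfaces created by managed services carry RequesterManaged: true; the sample response below is hypothetical:

```python
# Sketch: filter a describe-network-interfaces response for unattached ENIs,
# skipping interfaces owned by managed services (RequesterManaged) per the
# caution above about RDS, Lambda, and ELB.

def unattached_enis(response):
    return [
        eni["NetworkInterfaceId"]
        for eni in response.get("NetworkInterfaces", [])
        if eni.get("Status") == "available"
        and not eni.get("RequesterManaged", False)
    ]

sample = {"NetworkInterfaces": [
    {"NetworkInterfaceId": "eni-0aaa", "Status": "in-use"},
    {"NetworkInterfaceId": "eni-0bbb", "Status": "available"},
    {"NetworkInterfaceId": "eni-0ccc", "Status": "available",
     "RequesterManaged": True},  # service-owned: leave alone
]}
print(unattached_enis(sample))  # ['eni-0bbb']
```

Pairing this list with the VPC Flow Logs query for zero-traffic interfaces gives you two independent signals before deleting anything.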

Analyze storage buckets with no recent access patterns

S3 storage is easy to scale, but without oversight, buckets can accumulate old or unused data that still sits in high-cost storage classes. Paying for “hot” storage when objects are rarely or never accessed leads to significant waste, especially across large enterprise accounts.

  • To surface these opportunities, use S3 Storage Lens, which provides organization-wide visibility into storage usage and access trends. 
  • Look for buckets with little to no read activity over the last 30–90 days. Within the S3 Management Console, you can also review bucket metrics to check request counts, retrieval frequency, and size distribution. For custom analysis, export access logs to Athena and run queries on last-access timestamps.
  • Once identified, apply S3 Lifecycle Policies to automatically move infrequently accessed data to cheaper classes such as S3 Infrequent Access, Glacier, or Glacier Deep Archive. For buckets that are truly obsolete, archive or delete them altogether. 
  • Automating this with lifecycle rules ensures you won’t need to revisit the same buckets repeatedly, and it keeps your storage aligned with actual usage needs.
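
As a sketch of such a lifecycle rule, here is a tier-down configuration in the shape boto3's put_bucket_lifecycle_configuration expects (the same structure works as JSON for aws s3api put-bucket-lifecycle-configuration); the rule ID, bucket name, and day thresholds are illustrative:

```python
# Sketch: a lifecycle configuration that tiers objects into cheaper storage
# classes as they age, matching the progression described above
# (Standard-IA -> Glacier -> Glacier Deep Archive).

lifecycle_config = {
    "Rules": [{
        "ID": "tier-down-cold-data",
        "Status": "Enabled",
        "Filter": {},  # empty filter applies the rule to the whole bucket
        "Transitions": [
            {"Days": 30,  "StorageClass": "STANDARD_IA"},
            {"Days": 90,  "StorageClass": "GLACIER"},
            {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
        ],
    }]
}

# With boto3 this would be applied as (requires credentials, not run here):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-cold-data-bucket", LifecycleConfiguration=lifecycle_config)
```

Note that objects must generally age 30 days in one class before transitioning to the next infrequent-access tier, so stagger the day thresholds accordingly.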

Review Kubernetes nodes with low pod utilization

Kubernetes nodes can sometimes run with low pod densities, meaning you’re paying for additional CPU and RAM capacity you never use. You can apply “bin packing” to consolidate more pods onto fewer machines, lowering your overall spend.

  • Begin this process by using monitoring tools like Prometheus or Grafana to understand your node utilization trends. Address any underutilized resources by making sure your pods have accurate CPU and memory requests set in their manifests. This will ensure your scheduler places your pods efficiently.
  • Alternatively, you can also implement a Cluster Autoscaler tool. This will detect and remove any idle or underutilized nodes, making sure your cluster sizes always match usage demands.

Spread your focus to non-compute services too

Compute often gets the most attention in cost reviews, but non-compute services can represent a large share of cloud spend and are easier to miss. These include databases, caching services, streaming platforms, analytics tools, and managed storage layers. Because many of them scale independently of compute, unused or oversized resources can quietly persist and keep billing long after workloads have changed.

  • Start with databases like RDS or Aurora. Review CloudWatch metrics for CPU, storage, and connection counts to spot instances running at consistently low utilization. 
  • Move on to caching services such as ElastiCache, where clusters may remain provisioned for applications no longer in use. 
  • For streaming and analytics services like Kinesis or Redshift, check ingestion and query volumes. Low or no activity often indicates the service can be downsized or decommissioned. 
  • Even application-level services like API Gateway or Step Functions should be reviewed for traffic levels and invocation patterns to confirm they’re still actively used.

Because these categories don’t always surface in standard cost dashboards, combine AWS Cost Explorer, CloudWatch metrics, and CLI queries (describe-db-instances, describe-cache-clusters, list-streams, etc.) to build a fuller picture. By expanding the scope of your reviews, you capture waste across the full AWS service portfolio, not just compute.

Flag NAT gateways with minimal or no traffic

NAT gateways are one of those “set and forget” resources that can easily go unnoticed. Once provisioned, they accrue fixed hourly charges regardless of how much traffic passes through. If workloads that once depended on them have been retired or rerouted, an idle NAT gateway can keep costing hundreds of dollars per month without adding any value.

  • To identify waste, review NAT gateway metrics in Amazon CloudWatch, focusing on BytesProcessed and ActiveConnectionCount over the last 30 days. Gateways with little or no activity are strong candidates for removal. 
  • In the VPC console, check which route tables still point to the gateway; this ensures you won’t break connectivity for any dependent workloads when cleaning up. 
  • For automation, use the AWS CLI (describe-nat-gateways) to generate a list of gateways and filter by idle status.

Since NAT gateways are frequently overlooked in standard billing reviews, it’s worth adding them to a recurring audit checklist. 

Focus on commitment waste too

Not all cloud waste comes from idle resources. Reserved Instances (RIs) and Savings Plans (SPs) can deliver discounts of up to 66% compared to On-Demand rates, but only if they’re fully utilized and properly matched to your workloads. Underutilized commitments lock you into spend you don’t need, turning what should be savings into waste.

To stay on top of this: 

  • Use AWS Cost Explorer to review both utilization rates (how much of your purchased commitment is actually consumed) and coverage rates (how much of your total usage is covered by commitments). Low utilization signals overcommitment, while low coverage suggests missed opportunities for savings. Because workloads are dynamic, perfect 100% numbers aren’t the goal; the key is balancing utilization and coverage so your Effective Savings Rate (ESR) trends upward.
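
To make the ESR target concrete, here is the metric as it is commonly defined: the discount achieved versus what the same usage would have cost at On-Demand rates, with any unused commitment you still paid for counted against you. The dollar figures below are illustrative:

```python
# Sketch: Effective Savings Rate as commonly defined. actual_spend should
# include commitment payments (used or not) plus any On-Demand spend, so
# wasted commitments automatically drag the rate down.

def effective_savings_rate(on_demand_equivalent, actual_spend):
    """Both figures in dollars for the same period; returns ESR as a fraction."""
    if on_demand_equivalent <= 0:
        return 0.0
    return (on_demand_equivalent - actual_spend) / on_demand_equivalent

# $100k of usage valued at On-Demand rates, $70k actually paid -> 30% ESR.
print(effective_savings_rate(100_000, 70_000))  # 0.3
```

Tracking this one number period over period is a simpler health check than juggling utilization and coverage separately.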

Monitoring tools like AWS Trusted Advisor and Compute Optimizer provide some insights, but many organizations find this process too complex and time-consuming to manage manually. 

Automation platforms such as ProsperOps are valuable here. By dynamically adjusting commitment portfolios in real time, they ensure savings plans and RIs align with actual usage, maximizing discounts while minimizing the risk of waste.

For more usage optimization tactics, check out this AWS Cost Optimization list. 

Making Idle Resource Detection a Continuous Practice

Finding idle resources once is easy. The real challenge is preventing them from creeping back in as teams launch new workloads. By embedding detection into your day-to-day FinOps practices, you turn cost cleanup from a one-off project into a continuous safeguard.

Best practices to keep waste in check:

  • Set proactive alerts: Configure CloudWatch or your cloud billing console to trigger alerts when utilization drops below defined thresholds (e.g., CPU under 10–15% for seven days).
  • Run regular audits: Coordinate monthly or quarterly reviews across accounts to uncover forgotten resources before they pile up.
  • Leverage cost allocation tags: Consistent tagging makes it easy to trace spend to owners, projects, or business units and flag charges for inactive initiatives.
  • Automate with advisor tools: Use AWS Trusted Advisor, Azure Advisor, or GCP Recommender to get real-time idle resource suggestions, then feed them into cleanup workflows.
  • Enable anomaly detection: Cloud-native anomaly detection can surface unusual usage or spend patterns that manual reviews miss.
  • Enforce cleanup policies: Use tools like AWS Config to apply rules (e.g., stop resources missing owner tags after 48 hours) and prevent orphaned assets from lingering.

By turning these practices into a repeatable process, your teams can stay ahead of idle resource waste instead of cleaning up reactively.

Take Control of Your Cloud Costs With ProsperOps

Identifying and addressing underutilized resources is a critical part of cloud cost optimization, but it’s only half the equation. Workload optimization reduces waste by using fewer resources, but to maximize savings, it needs to work hand in hand with rate optimization, which ensures you pay the lowest possible price for what you do use. 

To ensure you get the optimal rates for your usage, leverage automation platforms like ProsperOps.

ProsperOps is a fully automated, multi-cloud cost optimization platform for AWS, Azure, and Google Cloud. It automates cloud cost optimization by adapting to your usage in real time, eliminating waste, maximizing savings, and ensuring every cloud dollar is spent effectively.

ProsperOps delivers cloud savings-as-a-service, automatically blending discount instruments to maximize your savings while lowering Commitment Lock-in Risk. Using our Autonomous Discount Management platform, we optimize the hyperscaler’s native discount instruments to reduce your cloud spend and help you achieve 45% ESR or more, placing you in the top 5% of FinOps teams.

In addition to autonomous rate optimization, ProsperOps now supports usage optimization through its resource scheduling product, ProsperOps Scheduler. Our customers using Autonomous Discount Management™ (ADM) can now automate resource state changes and integrate seamlessly with ProsperOps Scheduler to reduce waste and lower cloud spend.

Make the most of your cloud spend across AWS, Azure, and Google Cloud with ProsperOps. Schedule your free demo today!
