Documentation
-
Description: Ensure root account does not have access key.
Explanation:
The root account is the most privileged user in an AWS account.
AWS Access Keys provide programmatic access to an AWS account with the full privileges of the user they belong to; for the root account, that means unrestricted access.
It is recommended that all access keys associated with the root account be removed to protect the AWS account. -
Description: Ensure Multi-Factor Authentication (MFA) is enabled for root account
Explanation:
The root account is the most privileged user in an AWS account.
MFA adds an additional layer of protection on top of the root account's username/password combination.
With MFA enabled, the root user is prompted to enter an MFA authentication code from their MFA device as a second layer of authentication in addition to their login credentials (username/password).
Login Credentials = What you know.
MFA Code = What you have. -
Description: Avoid using root account
Explanation:
The root account is the most privileged user in an AWS account. It has unrestricted access to all services within an AWS account.
Always avoid using the root account unless absolutely needed (for tasks that are restricted to root account privileges). -
Description: Maintain current valid contact details in each AWS Account
Explanation:
AWS recommends that the contact email and telephone details for each AWS account are current and mapped to more than one individual in your organization.
An AWS account supports multiple contact details, and AWS will use this information to contact the account owner if the AWS Abuse team observes activity judged to be in breach of the Acceptable Use Policy or indicative of a likely security compromise.
More than one individual contact details are recommended, as circumstances may arise where a primary contact person is unavailable. Email contact details should point to a distribution list which forwards email to multiple individuals within the organization. -
Description: Ensure your ‘Alternate Contacts’ for security are correct in the AWS account settings page of your AWS account.
Explanation:
AWS uses the security contact information to inform you of critical service events such as security issues.
In addition to the primary contact information, you should enter the security contact:
Security: AWS uses this contact for notifications from the AWS Abuse team about potentially fraudulent activity on your AWS account, and for any other notification related to security.
As a best practice, avoid using contact information for individuals, and instead use group email addresses and shared company phone numbers.
Keeping your security contact information up to date ensures timely delivery of critical information to the relevant stakeholders. Incorrect security contact information may result in communication delays that could impact your organization's security. -
Description: Ensure hardware based Multi-Factor Authentication (MFA) is enabled for root account
Explanation:
The root user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS console, they will be prompted for their username and password as well as for an authentication code from their MFA device.
For Level 2, it is recommended that the root user account be protected with a hardware MFA device.
It is recommended that the device used for virtual MFA is NOT a personal device, but rather a dedicated device (phone or tablet) that can be kept charged and physically secured. This reduces the risk of losing access to the MFA code.
IAM root user account for us-gov cloud regions does not have console access. This control is not applicable for us-gov cloud regions.
A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the smartphone or tablet on which a virtual MFA resides.
Using hardware MFA for many AWS accounts can create a logistical device management issue. In such cases, consider only implementing this Level 2 recommendation selectively to the highest secured AWS accounts and the Level 1 recommendation applied to the remaining accounts. -
Description: Ensure that the AWS Backup Recovery Points are encrypted in order to protect from data exfiltration.
Explanation:
A backup, or recovery point, is the saved content of a resource. Each backup exists as a recovery point, which contains the content of a resource when it was backed up. These are created by backup plans or by on-demand backups.
This control checks if an AWS Backup recovery point is encrypted at rest. The control fails if the recovery point isn’t encrypted at rest.
Encrypting the backup recovery points adds an extra layer of protection against unauthorized access. Encryption is a best practice to protect the confidentiality, integrity, and security of backup data. -
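A minimal audit sketch (Python with boto3): list the recovery points in a vault and flag any that report IsEncrypted as false. The vault name is a placeholder, and pagination via NextToken is omitted for brevity.

```python
import boto3

backup = boto3.client("backup")

# List recovery points in one vault; "my-backup-vault" is a placeholder name.
resp = backup.list_recovery_points_by_backup_vault(BackupVaultName="my-backup-vault")

for rp in resp.get("RecoveryPoints", []):
    # IsEncrypted indicates whether the recovery point is encrypted at rest.
    if not rp.get("IsEncrypted", False):
        print("Unencrypted recovery point:", rp["RecoveryPointArn"])
```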
Description: Ensure that the AWS Backup Recovery Points are encrypted using Customer managed keys in order to protect from data exfiltration.
Explanation:
Ensure that your AWS Backup vaults are using AWS KMS Customer Managed Keys instead of the default AWS-managed keys (i.e. default encryption keys) for encrypting your backup data.
A backup, or recovery point, is the saved content of a resource. Each backup exists as a recovery point, which contains the content of a resource when it was backed up. These are created by backup plans or by on-demand backups.
This control checks if an AWS Backup recovery point is encrypted using Customer managed keys at rest. The control fails if the recovery point isn’t encrypted using CMK at rest.
When you use your own AWS KMS Customer Master Keys (CMKs) to protect the backups created with the AWS Backup service, you have full control over who can use the encryption keys to access your backups. AWS Key Management Service (KMS) allows you to easily create, rotate, disable and audit the Customer Master Keys used to encrypt AWS Backup data. -
Description: Ensure that an Amazon Backup vault access policy is configured to prevent the deletion (accidentally or intentionally) of AWS backups in the backup vault. A backup vault is a container used to organize AWS backups.
Explanation:
Ensure that an Amazon Backup vault access policy is configured to prevent the deletion (accidentally or intentionally) of AWS backups in the backup vault. A backup vault is a container used to organize AWS backups.
The ability to delete recovery points (i.e. backups) stored within your AWS Backup vaults is determined by the permissions that you grant to your users. You can enforce deletion protection and restrict deleting recovery points by configuring the resource-based access policies associated with your vaults. -
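One way to enforce this, sketched with boto3: attach a resource-based access policy to the vault that denies recovery point deletion for all principals. The vault name is a placeholder, and the exact set of denied actions should follow your organization's policy.

```python
import json
import boto3

backup = boto3.client("backup")

# Deny deletion (and lifecycle changes) of recovery points in this vault.
deny_delete_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRecoveryPointDeletion",
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "backup:DeleteRecoveryPoint",
                "backup:UpdateRecoveryPointLifecycle",
            ],
            "Resource": "*",
        }
    ],
}

backup.put_backup_vault_access_policy(
    BackupVaultName="my-backup-vault",  # placeholder vault name
    Policy=json.dumps(deny_delete_policy),
)
```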
Description: List of non-system tag keys that the evaluated resource must contain. Tag keys are case sensitive.
Explanation:
This control checks whether an AWS Backup recovery point has tags with the specific keys defined in the parameter requiredTagKeys. The control fails if the recovery point doesn’t have any tag keys or if it doesn’t have all the keys specified in the parameter requiredTagKeys. If the parameter requiredTagKeys isn’t provided, the control only checks for the existence of a tag key and fails if the recovery point isn’t tagged with any key. System tags, which are automatically applied and begin with aws:, are ignored.
A tag is a label that you assign to an AWS resource, and it consists of a key and an optional value. You can create tags to categorize resources by purpose, owner, environment, or other criteria. Tags can help you identify, organize, search for, and filter resources. Tagging also helps you track accountable resource owners for actions and notifications. When you use tagging, you can implement attribute-based access control (ABAC) as an authorization strategy, which defines permissions based on tags. You can attach tags to IAM entities (users or roles) and to AWS resources. You can create a single ABAC policy or a separate set of policies for your IAM principals. You can design these ABAC policies to allow operations when the principal’s tag matches the resource tag. -
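A short remediation sketch (boto3): tag a recovery point with the required keys so the control passes. The recovery point ARN and tag values are placeholders.

```python
import boto3

backup = boto3.client("backup")

# Apply the required tag keys to a recovery point (ARN and values are placeholders).
backup.tag_resource(
    ResourceArn="arn:aws:backup:us-east-1:111122223333:recovery-point:example-id",
    Tags={"Owner": "data-platform", "Environment": "prod"},
)
```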
Description: List of non-system tag keys that the evaluated resource must contain. Tag keys are case sensitive.
Explanation:
This control checks whether an AWS Backup Vault has tags with the specific keys defined in the parameter requiredTagKeys. The control fails if the AWS Backup Vault doesn’t have any tag keys or if it doesn’t have all the keys specified in the parameter requiredTagKeys. If the parameter requiredTagKeys isn’t provided, the control only checks for the existence of a tag key and fails if the AWS Backup Vault isn’t tagged with any key. System tags, which are automatically applied and begin with aws:, are ignored.
A tag is a label that you assign to an AWS resource, and it consists of a key and an optional value. You can create tags to categorize resources by purpose, owner, environment, or other criteria. Tags can help you identify, organize, search for, and filter resources. Tagging also helps you track accountable resource owners for actions and notifications. When you use tagging, you can implement attribute-based access control (ABAC) as an authorization strategy, which defines permissions based on tags. You can attach tags to IAM entities (users or roles) and to AWS resources. You can create a single ABAC policy or a separate set of policies for your IAM principals. You can design these ABAC policies to allow operations when the principal’s tag matches the resource tag. -
Description: List of non-system tag keys that the evaluated resource must contain. Tag keys are case sensitive.
Explanation:
This control checks whether an AWS Backup Report Plan has tags with the specific keys defined in the parameter requiredTagKeys. The control fails if the AWS Backup Report Plan doesn’t have any tag keys or if it doesn’t have all the keys specified in the parameter requiredTagKeys. If the parameter requiredTagKeys isn’t provided, the control only checks for the existence of a tag key and fails if the AWS Backup Report Plan isn’t tagged with any key. System tags, which are automatically applied and begin with aws:, are ignored.
A tag is a label that you assign to an AWS resource, and it consists of a key and an optional value. You can create tags to categorize resources by purpose, owner, environment, or other criteria. Tags can help you identify, organize, search for, and filter resources. Tagging also helps you track accountable resource owners for actions and notifications. When you use tagging, you can implement attribute-based access control (ABAC) as an authorization strategy, which defines permissions based on tags. You can attach tags to IAM entities (users or roles) and to AWS resources. You can create a single ABAC policy or a separate set of policies for your IAM principals. You can design these ABAC policies to allow operations when the principal’s tag matches the resource tag. -
Description: List of non-system tag keys that the evaluated resource must contain. Tag keys are case sensitive.
Explanation:
This control checks whether an AWS Backup backup plan has tags with the specific keys defined in the parameter requiredTagKeys. The control fails if the backup plan doesn’t have any tag keys or if it doesn’t have all the keys specified in the parameter requiredTagKeys. If the parameter requiredTagKeys isn’t provided, the control only checks for the existence of a tag key and fails if the backup plan isn’t tagged with any key. System tags, which are automatically applied and begin with aws:, are ignored.
A tag is a label that you assign to an AWS resource, and it consists of a key and an optional value. You can create tags to categorize resources by purpose, owner, environment, or other criteria. Tags can help you identify, organize, search for, and filter resources. Tagging also helps you track accountable resource owners for actions and notifications. When you use tagging, you can implement attribute-based access control (ABAC) as an authorization strategy, which defines permissions based on tags. You can attach tags to IAM entities (users or roles) and to AWS resources. You can create a single ABAC policy or a separate set of policies for your IAM principals. You can design these ABAC policies to allow operations when the principal’s tag matches the resource tag. -
Description: This control checks whether HTTP to HTTPS redirection is configured on all HTTP listeners of Application Load Balancers. The control fails if any of the HTTP listeners of Application Load Balancers do not have HTTP to HTTPS redirection configured.
Explanation:
A listener is a process that uses the configured protocol and port to check for connection requests. Listeners support both the HTTP and HTTPS protocols.
You can use an HTTPS listener to offload the work of encryption and decryption to your load balancer. To enforce encryption in transit, you should use redirect actions with Application Load Balancers to redirect client HTTP requests to an HTTPS request on port 443. -
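A remediation sketch (boto3): replace the default action of an existing HTTP listener with an HTTP 301 redirect to HTTPS on port 443. The listener ARN is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Redirect all HTTP traffic on this listener to HTTPS:443 with a permanent redirect.
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def",  # placeholder
    DefaultActions=[
        {
            "Type": "redirect",
            "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"},
        }
    ],
)
```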
Description: This control checks whether the Classic Load Balancer uses HTTPS/SSL certificates provided by AWS Certificate Manager (ACM). The control fails if the Classic Load Balancer configured with HTTPS/SSL listener does not use a certificate provided by ACM.
Explanation:
A listener is a process that uses the configured protocol and port to check for connection requests. Listeners support both the HTTP and HTTPS protocols.
You can use an HTTPS/SSL listener to offload the work of encryption and decryption to your load balancer. Using certificates provided by ACM simplifies certificate management, because ACM handles the renewal of the certificates it issues for you. -
AWS_ELB_103 – aws-classic-load-balancer-listeners-should-be-configured-with-https-or-tls-termination
Description: This control checks whether your Classic Load Balancer listeners are configured with HTTPS or TLS protocol for front-end (client to load balancer) connections. The control is applicable if a Classic Load Balancer has listeners. If your Classic Load Balancer does not have a listener configured, then the control does not report any findings.
Explanation:
The control passes if the Classic Load Balancer listeners are configured with TLS or HTTPS for front-end connections.
The control fails if the listener is not configured with TLS or HTTPS for front-end connections.
Before you start to use a load balancer, you must add one or more listeners. A listener is a process that uses the configured protocol and port to check for connection requests. Listeners can support both HTTP and HTTPS/TLS protocols. You should always use an HTTPS or TLS listener, so that the load balancer does the work of encryption and decryption in transit. -
Description: This control evaluates AWS Application Load Balancers to ensure they are configured to drop invalid HTTP headers. The control fails if the value of routing.http.drop_invalid_header_fields.enabled is set to false.
Explanation:
By default, Application Load Balancers are not configured to drop invalid HTTP header values. Removing these header values prevents HTTP desync attacks. -
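A remediation sketch (boto3): set the load balancer attribute that drops invalid HTTP header fields. The load balancer ARN is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable dropping of invalid HTTP header fields on an Application Load Balancer.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc",  # placeholder
    Attributes=[{"Key": "routing.http.drop_invalid_header_fields.enabled", "Value": "true"}],
)
```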
Description: This control checks whether the Application Load Balancer and the Classic Load Balancer have logging enabled. The control fails if access_logs.s3.enabled is false.
Explanation:
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues. -
Description: This control checks whether an Application, Gateway, or Network Load Balancer has deletion protection enabled. The control fails if deletion protection is disabled.
Explanation:
Enable deletion protection to protect your Application, Gateway, or Network Load Balancer from deletion. -
Description: This control checks whether Classic Load Balancers have connection draining enabled.
Explanation:
Enabling connection draining on Classic Load Balancers ensures that the load balancer stops sending new requests to instances that are de-registering or unhealthy, while keeping the existing connections open so that in-flight requests can complete. This is particularly useful for instances in Auto Scaling groups, to ensure that connections aren’t severed abruptly.
When you enable connection draining, you can specify a maximum time for the load balancer to keep connections alive before reporting the instance as de-registered. The maximum timeout value can be set between 1 and 3,600 seconds (the default is 300 seconds). When the maximum time limit is reached, the load balancer forcibly closes connections to the de-registering instance.
While in-flight requests are being served, the load balancer reports the state of a de-registering instance as InService: Instance deregistration currently in progress. When the de-registering instance is finished serving all in-flight requests, or when the maximum timeout limit is reached, the load balancer reports the instance state as OutOfService: Instance is not currently registered with the LoadBalancer.
If an instance becomes unhealthy, the load balancer reports the instance state as OutOfService. If there are in-flight requests made to the unhealthy instance, they are completed. The maximum timeout limit does not apply to connections to unhealthy instances.
If your instances are part of an Auto Scaling group and connection draining is enabled for your load balancer, Auto Scaling waits for the in-flight requests to complete, or for the maximum timeout to expire, before terminating instances due to a scaling event or health check replacement.
You can disable connection draining if you want your load balancer to immediately close connections to the instances that are de-registering or have become unhealthy. When connection draining is disabled, any in-flight requests made to instances that are de-registering or unhealthy are not completed. -
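A configuration sketch (boto3): enable connection draining on a Classic Load Balancer with the default 300-second timeout. The load balancer name is a placeholder.

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Enable connection draining with a 300-second maximum timeout.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",  # placeholder
    LoadBalancerAttributes={"ConnectionDraining": {"Enabled": True, "Timeout": 300}},
)
```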
Description: This control checks whether your Classic Load Balancer HTTPS/SSL listeners use the predefined policy ELBSecurityPolicy-TLS-1-2-2017-01. The control fails if the Classic Load Balancer HTTPS/SSL listeners do not use ELBSecurityPolicy-TLS-1-2-2017-01.
Explanation:
A security policy is a combination of SSL protocols, ciphers, and the Server Order Preference option. Predefined policies control the ciphers, protocols, and preference orders to support during SSL negotiations between a client and load balancer.
Using ELBSecurityPolicy-TLS-1-2-2017-01 can help you to meet compliance and security standards that require you to disable specific versions of SSL and TLS. -
Description: This control checks if cross-zone load balancing is enabled for the Classic Load Balancers (CLBs). The control fails if cross-zone load balancing is not enabled for a CLB.
Explanation:
When cross-zone load balancing is disabled, each load balancer node distributes traffic only across the registered targets in its own Availability Zone. If the number of registered targets is not the same across the Availability Zones, traffic won’t be distributed evenly and the instances in one zone may end up overutilized compared to the instances in another zone. With cross-zone load balancing enabled, each load balancer node for your Classic Load Balancer distributes requests evenly across the registered instances in all enabled Availability Zones. -
Description: This control checks whether a Classic Load Balancer has been configured to span at least the specified number of Availability Zones (AZs). The control fails if the Classic Load Balancer does not span at least the specified number of AZs. Unless you provide a custom parameter value for the minimum number of AZs, Security Hub uses a default value of two AZs.
Explanation:
A Classic Load Balancer can be set up to distribute incoming requests across Amazon EC2 instances in a single Availability Zone or multiple Availability Zones. A Classic Load Balancer that does not span multiple Availability Zones is unable to redirect traffic to targets in another Availability Zone if the sole configured Availability Zone becomes unavailable.
When you add a subnet to your load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. Load balancer nodes accept traffic from clients and forward requests to the healthy registered instances in one or more Availability Zones. For load balancers in a VPC, we recommend that you add one subnet per Availability Zone for at least two Availability Zones. This improves the availability of your load balancer. Note that you can modify the subnets for your load balancer at any time. -
Description: This control checks whether an Application Load Balancer is configured with defensive or strictest desync mitigation mode. The control fails if an Application Load Balancer is not configured with defensive or strictest desync mitigation mode.
Explanation:
HTTP Desync issues can lead to request smuggling and make applications vulnerable to request queue or cache poisoning. In turn, these vulnerabilities can lead to credential stuffing or execution of unauthorized commands. Application Load Balancers configured with defensive or strictest desync mitigation mode protect your application from security issues that may be caused by HTTP Desync.
Desync mitigation mode protects your application from issues due to HTTP desync. The load balancer classifies each request based on its threat level, allows safe requests, and then mitigates risk as specified by the mitigation mode that you specify. The desync mitigation modes are monitor, defensive, and strictest. The default is the defensive mode, which provides durable mitigation against HTTP desync while maintaining the availability of your application. You can switch to strictest mode to ensure that your application receives only requests that comply with RFC 7230.
The http_desync_guardian library analyzes HTTP requests to prevent HTTP desync attacks -
Description: Ensure actions ‘kms:Decrypt’ and ‘kms:ReEncryptFrom’ are not allowed for all keys in AWS IAM inline Policy attached to IAM group, role and user to maintain the principle of “Separation of Duties”
Explanation:
With AWS KMS, you control who can use your KMS keys and gain access to your encrypted data. IAM policies define which actions an identity (user, group, or role) can perform on which resources. Following security best practices, AWS recommends that you allow least privilege. In other words, you should grant to identities only the permissions they need and only for keys that are required to perform a task. Otherwise, the user might use keys that are not appropriate for your data.
Instead of granting permission for all keys, determine the minimum set of keys that users need to access encrypted data. Then design policies that allow the users to use only those keys. For example, do not allow kms:Decrypt permission on all KMS keys. Instead, allow the permission only on specific keys in a specific Region for your account. By adopting the principle of least privilege, you can reduce the risk of unintended disclosure of your data. -
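A sketch of the least-privilege alternative (boto3): an inline role policy that allows kms:Decrypt and kms:ReEncryptFrom only on one specific key ARN rather than on all keys. The role name, policy name, Region, account ID, and key ID are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Scope decryption permissions to a single KMS key instead of "Resource": "*".
scoped_kms_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kms:Decrypt", "kms:ReEncryptFrom"],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        }
    ],
}

iam.put_role_policy(
    RoleName="app-data-reader",        # placeholder role
    PolicyName="scoped-kms-decrypt",   # placeholder policy name
    PolicyDocument=json.dumps(scoped_kms_policy),
)
```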
Description: Ensure AWS Secrets Manager secret have automatic rotation enabled
Explanation:
Secrets Manager rotation is an automatic process that periodically changes your secret data to make it more difficult for an attacker to access the services and resources secured with those secrets. With AWS Secrets Manager you don’t have to manually change a secret and update it on all of your clients. Instead, the Secrets Manager service uses an AWS Lambda function to perform all of the steps required for rotation on a regular schedule (predefined or custom). -
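A configuration sketch (boto3): enable 30-day automatic rotation for an existing secret using a rotation Lambda function. The secret ID and Lambda ARN are placeholders; the rotation function itself must already exist.

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

# Rotate the secret every 30 days using an existing rotation Lambda function.
secretsmanager.rotate_secret(
    SecretId="prod/db/credentials",  # placeholder secret name
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-db-secret",  # placeholder
    RotationRules={"AutomaticallyAfterDays": 30},
)
```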
Description: Ensure AWS Secrets Manager unused Secrets are removed
Explanation:
Deleting unused secrets is as important as rotating secrets. Unused secrets can be abused by their former users, who no longer need access to these secrets. Also, as more users get access to a secret, someone might have mishandled and leaked it to an unauthorized entity, which increases the risk of abuse. Deleting unused secrets helps revoke secret access from users who no longer need it. It also helps to reduce the cost of using Secrets Manager. Therefore, it is essential to routinely delete unused secrets. -
Description: Ensure Amazon Secrets Manager secrets are encrypted with Amazon KMS Customer Master Keys (CMKs) instead of default encryption keys
Explanation:
Secrets Manager uses envelope encryption with AWS KMS keys and data keys to protect each secret value. Whenever the secret value secret changes, Secrets Manager generates a new data key to protect it. The data key is encrypted under a KMS key and stored in the metadata of the secret. To decrypt the secret, Secrets Manager first decrypts the encrypted data key using the KMS key in AWS KMS. -
Description: This control checks whether an Elastic Load Balancer V2 (Application, Network, or Gateway Load Balancer) has registered instances from at least the specified number of Availability Zones (AZs). The control fails if an Elastic Load Balancer V2 doesn’t have instances registered in at least the specified number of AZs. Unless you provide a custom parameter value for the minimum number of AZs, Security Hub uses a default value of two AZs.
Explanation:
Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It is recommended to configure at least two availability zones to ensure availability of services, as the Elastic Load Balancer will be able to direct traffic to another availability zone if one becomes unavailable. Having multiple availability zones configured will help eliminate having a single point of failure for the application. -
Description: This control checks whether a Classic Load Balancer is configured with defensive or strictest desync mitigation mode. The control fails if a Classic Load Balancer is not configured with defensive or strictest desync mitigation mode.
Explanation:
HTTP Desync issues can lead to request smuggling and make applications vulnerable to request queue or cache poisoning. In turn, these vulnerabilities can lead to credential stuffing or execution of unauthorized commands. Classic Load Balancers configured with defensive or strictest desync mitigation mode protect your application from security issues that may be caused by HTTP Desync.
Desync mitigation mode protects your application from issues due to HTTP desync. The load balancer classifies each request based on its threat level, allows safe requests, and then mitigates risk as specified by the mitigation mode that you specify. The desync mitigation modes are monitor, defensive, and strictest. The default is the defensive mode, which provides durable mitigation against HTTP desync while maintaining the availability of your application. You can switch to strictest mode to ensure that your application receives only requests that comply with RFC 7230.
The http_desync_guardian library analyzes HTTP requests to prevent HTTP desync attacks -
Description: This control checks whether an Application Load Balancer is associated with an AWS WAF Classic or AWS WAF web access control list (web ACL). The control fails if the Enabled field for the AWS WAF configuration is set to false.
Explanation:
AWS WAF is a web application firewall that helps protect web applications and APIs from attacks. With AWS WAF, you can configure a web ACL, which is a set of rules that allow, block, or count web requests based on customizable web security rules and conditions that you define. We recommend associating your Application Load Balancer with an AWS WAF web ACL to help protect it from malicious attacks. -
Description: Ensure actions ‘kms:Decrypt’ and ‘kms:ReEncryptFrom’ are not allowed for all keys in AWS IAM Policy to maintain the principle of “Separation of Duties”
Explanation:
With AWS KMS, you control who can use your KMS keys and gain access to your encrypted data. IAM policies define which actions an identity (user, group, or role) can perform on which resources. Following security best practices, AWS recommends that you allow least privilege. In other words, you should grant to identities only the kms:Decrypt or kms:ReEncryptFrom permissions and only for the keys that are required to perform a task. Otherwise, the user might use keys that are not appropriate for your data.
Using ‘kms:Decrypt’ and ‘kms:ReEncryptFrom’ in the action for all keys violates the principle of least privilege. This can allow misuse of KMS keys leading to unauthorized access and sensitive data exposure.
Instead of granting permissions for all keys, determine the minimum set of keys that users need to access encrypted data. Then design policies that allow users to use only those keys. For example, do not allow kms:Decrypt permission on all KMS keys. Instead, allow kms:Decrypt only on keys in a particular Region for your account. By adopting the principle of least privilege, you can reduce the risk of unintended disclosure of your data. -
Description: ECR repositories should have at least one lifecycle policy configured
Explanation:
This control checks whether an Amazon ECR repository has at least one lifecycle policy configured. This control fails if an ECR repository does not have any lifecycle policies configured.
Amazon ECR lifecycle policies enable you to specify the lifecycle management of images in a repository. By configuring lifecycle policies, you can automate the cleanup of unused images and the expiration of images based on age or count. Automating these tasks can help you avoid unintentionally using outdated images in your repository. -
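A configuration sketch (boto3): a lifecycle policy that expires all but the 10 most recent images in a repository. The repository name and retention count are placeholders.

```python
import json
import boto3

ecr = boto3.client("ecr")

# Keep only the 10 most recent images; older images expire automatically.
lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire all but the 10 most recent images",
            "selection": {
                "tagStatus": "any",
                "countType": "imageCountMoreThan",
                "countNumber": 10,
            },
            "action": {"type": "expire"},
        }
    ]
}

ecr.put_lifecycle_policy(
    repositoryName="my-app",  # placeholder repository
    lifecyclePolicyText=json.dumps(lifecycle_policy),
)
```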
Description: ECR private repositories should have tag immutability configured
Explanation:
This control checks whether a private ECR repository has tag immutability enabled. This control fails if a private ECR repository has tag immutability disabled. This rule passes if tag immutability is enabled and has the value IMMUTABLE.
Amazon ECR Tag Immutability enables customers to rely on the descriptive tags of an image as a reliable mechanism to track and uniquely identify images. An immutable tag is static, which means each tag refers to a unique image. This improves reliability and scalability as the use of a static tag will always result in the same image being deployed. When configured, tag immutability prevents the tags from being overridden, which reduces the attack surface. -
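A remediation sketch (boto3): switch an existing private repository to immutable tags. The repository name is a placeholder.

```python
import boto3

ecr = boto3.client("ecr")

# Prevent image tags from being overwritten in this repository.
ecr.put_image_tag_mutability(
    repositoryName="my-app",  # placeholder repository
    imageTagMutability="IMMUTABLE",
)
```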
Description: Check if AWS GuardDuty is enabled for AWS accounts.
Explanation:
AWS recommends that account owners enable AWS GuardDuty to ensure continuous monitoring of provisioned AWS workloads for malicious activity, such as API requests from harmful IP addresses and unauthorized access to S3 data. -
Description: Ensure that your AWS Elastic Block Store (EBS) volume snapshots are not public (i.e. publicly shared with other AWS accounts) in order to avoid exposing personal and sensitive data.
Explanation:
EBS is a block storage service provided by AWS, used to store persistent data. Amazon EBS provides block-level storage volumes for use with EC2 instances. AWS enables creating multiple snapshots of these volumes; a snapshot is basically an incremental backup of the data stored in an EBS volume.
We recommend that your EBS snapshots are not publicly accessible. A public EBS snapshot means that the data backed up in that snapshot is accessible to all other AWS accounts: others can not only access and copy your data but can also create a volume from it. -
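A remediation sketch (boto3): remove the public "all" group from a snapshot's createVolumePermission attribute, making the snapshot private again. The snapshot ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Remove public sharing from the snapshot's createVolumePermission attribute.
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",  # placeholder snapshot ID
    Attribute="createVolumePermission",
    OperationType="remove",
    GroupNames=["all"],
)
```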
Description: Verify that cloudtrail log file validation is enabled.
Explanation:
CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log.
The digitally signed digest file is built using industry standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete or forge CloudTrail log files without detection. You can use the AWS CLI to validate the files in the location where CloudTrail delivered them.
Validated log files are invaluable in security and forensic investigations. For example, a validated log file enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity. The CloudTrail log file integrity validation process also lets you know if a log file has been deleted or changed, or assert positively that no log files were delivered to your account during a given period of time. -
Description: Ensure that CloudTrail events are sent to Cloudwatch logs.
Explanation:
AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies.
AWS CloudTrail records the identity of the API caller, time of the API call, source IP address of the API caller, request parameters, and the response elements returned by the AWS service. CloudTrail uses Amazon S3 for log file storage and delivery. In addition to capturing CloudTrail logs within a specified S3 bucket for long-term analysis, real-time analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group.
Note: The intent of this recommendation is to ensure AWS account activity is being captured, monitored, and appropriately alerted on. CloudWatch Logs is a native way to accomplish this using AWS services. -
Description: To monitor real-time API calls, CloudTrail Logs can be directed to CloudWatch Logs with corresponding metric filters and alarms. It is advisable to set up a metric filter and alarm to detect any modifications made to CloudTrail’s configurations.
Explanation:
Amazon CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, Command Line Interface (CLI), AWS SDKs and APIs. This event history feature simplifies security auditing, resource change tracking and troubleshooting. You can identify who or what took which action, what resources were acted upon, when an event occurred and other details that can help you analyze and respond to any activity within your Amazon Web Services account.
As a security best practice, you need to be aware of all configuration changes performed at the CloudTrail level. The detected activity could be, for example, a user action initiated through the AWS Management Console or an AWS API request made programmatically using the AWS CLI or an SDK that triggers any of the CloudTrail operational events. By monitoring changes made to CloudTrail’s configuration, continuous visibility into activities carried out in the AWS account can be ensured.
Create CloudWatch metric filters and alarms to detect changes in the CloudTrail configuration. -
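A sketch of that setup (boto3): a CloudWatch Logs metric filter on the CloudTrail log group plus an alarm on the resulting metric. The log group name, metric namespace, and SNS topic ARN are placeholders; the filter pattern follows the commonly used CIS-style pattern for CloudTrail configuration changes.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Match CloudTrail configuration-change API calls in the CloudTrail log group.
FILTER_PATTERN = (
    "{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || "
    "($.eventName = DeleteTrail) || ($.eventName = StartLogging) || "
    "($.eventName = StopLogging) }"
)

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # placeholder log group
    filterName="CloudTrailConfigChanges",
    filterPattern=FILTER_PATTERN,
    metricTransformations=[
        {
            "metricName": "CloudTrailConfigChangeCount",
            "metricNamespace": "LogMetrics",  # placeholder namespace
            "metricValue": "1",
        }
    ],
)

# Alarm whenever at least one matching event occurs in a 5-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="cloudtrail-config-changes",
    MetricName="CloudTrailConfigChangeCount",
    Namespace="LogMetrics",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],  # placeholder topic
)
```

The same filter-plus-alarm pattern applies to the other CloudWatch monitoring recommendations below; only the filter pattern and metric name change.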
Description: Ensure ECR image scan on push is enabled
Explanation:
AWS recommends that account owners ensure that all Amazon ECR container images are automatically scanned for security vulnerabilities and exposures after being pushed to a repository. Scan on Push for Amazon ECR is an automated vulnerability assessment feature that helps you improve the security of your ECR container images by scanning them for a broad range of Operating System (OS) vulnerabilities after they are pushed to an ECR repository. -
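A remediation sketch (boto3): enable scan-on-push for an existing repository. The repository name is a placeholder.

```python
import boto3

ecr = boto3.client("ecr")

# Automatically scan images for vulnerabilities when they are pushed.
ecr.put_image_scanning_configuration(
    repositoryName="my-app",  # placeholder repository
    imageScanningConfiguration={"scanOnPush": True},
)
```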
Description: Ensure that IAM policy enforces user password expires in 90 days
Explanation:
IAM password policies can require passwords to be rotated or expired after a given number of days.
Cloudcatcher recommends that the password policy expire passwords after 90 days or less (a configuration sketch follows the list below).
Reducing the lifetime of a password by enforcing regular password changes increases account resilience towards:
Brute force attack;
Passwords being stolen or compromised, sometimes without your knowledge;
Web filters and proxy servers intercepting and recording traffic, including encrypted data;
Use of the same user password across work, email, and personal systems; and
End user workstations compromised by a keystroke logger. -
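A configuration sketch (boto3): an account password policy with a 90-day maximum password age. The other values shown are illustrative assumptions and should be aligned with your organization's policy.

```python
import boto3

iam = boto3.client("iam")

# Expire passwords after 90 days; remaining settings are illustrative.
iam.update_account_password_policy(
    MaxPasswordAge=90,
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    PasswordReusePrevention=24,
)
```

The PasswordReusePrevention setting shown here also satisfies the password-reuse recommendation that follows.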
Description: Ensure the password policy enforces users not to reuse their previous passwords
Explanation:
IAM password policies can prevent the reuse of a given password by the same user. You can specify a minimum number of 1 and a maximum number of 24 previous passwords that can’t be repeated.
Your IAM password policy must prevent reuse of passwords. Each password should be brand new to increase security, especially from a brute force attack. -
Description: This rule verifies that an IAM role with “AWSSupportAccess” privilege has been created.
Explanation:
AWS provides a support center that can be used for incident notification and response, as well as technical support and customer services. Create an IAM Role to allow authorized users to manage incidents with AWS Support.
By implementing least privilege for access control, an IAM Role will require an appropriate IAM Policy to allow Support Center Access in order to manage Incidents with AWS Support.
All AWS Support plans include an unlimited number of account and billing support cases, with no long-term contracts. Support billing calculations are performed on a per-account basis for all plans. Enterprise Support plan customers have the option to include multiple enabled accounts in an aggregated monthly billing calculation. Monthly charges for the Business and Enterprise support plans are based on each month’s AWS usage charges, subject to a monthly minimum, billed in advance.
IamPolicy where name=’AWSSupportAccess’ should not have users isEmpty() and roles isEmpty() and groups isEmpty() -
Description: Ensure that bucket access logging is enabled on the s3 bucket where cloudtrail logs are stored.
Explanation:
AWS S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket.
By enabling S3 bucket logging on target S3 buckets, it is possible to capture all events which may affect objects within any target buckets. Configuring logs to be placed in a separate bucket allows access to log information which can be useful in security and incident response workflows. -
Description: Ensure that metric filter and alarm is configured on CloudWatch logs to alert any unauthorized API calls
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for unauthorized API calls.
Monitoring unauthorized API calls will help reveal application errors and may reduce time to detect malicious activity. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any login to AWS management console without MFA authentication.
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for console logins that are not protected by multi-factor authentication (MFA).
Monitoring for single-factor console logins will increase visibility into accounts that are not protected by MFA. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any login attempts using “root” account
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for root login attempts.
Monitoring for root account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce the use of it. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert when any changes were made to IAM policies.
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes made to Identity and Access Management (IAM) policies.
Monitoring changes to IAM policies will help ensure authentication and authorization controls remain intact. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert when any changes were made to cloudtrail configuration
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to CloudTrail’s configurations.
Monitoring changes to CloudTrail’s configuration will help ensure sustained visibility to activities performed in the AWS account. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any failed management console authentication attempt.
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for failed console authentication attempts.
Monitoring failed console logins may decrease lead time to detect an attempt to brute force a credential, which may provide an indicator, such as source IP, that can be used in other event correlation. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any disabling and scheduled deletion of Customer Managed Keys (SSE-KMS CMKs).
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for customer created CMKs which have changed state to disabled or scheduled deletion.
Data encrypted with disabled or deleted keys will no longer be accessible. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any S3 bucket policy changes
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes to S3 bucket policies.
Monitoring changes to S3 bucket policies may reduce time to detect and correct permissive policies on sensitive S3 buckets. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any changes to AWS Config configuration settings
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to AWS Config’s configuration settings.
Monitoring changes to AWS Config configuration will help ensure sustained visibility of configuration items within the AWS account. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any changes to security groups within a VPC of that region.
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Security groups are stateful packet filters that control ingress and egress traffic within a VPC. It is recommended that a metric filter and alarm be established for detecting changes to security groups.
Monitoring changes to security groups will help ensure that resources and services are not unintentionally exposed. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any changes to network access control list (NACL) within a VPC
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. NACLs are used as a stateless packet filter to control ingress and egress traffic for subnets within a VPC. It is recommended that a metric filter and alarm be established for changes made to NACLs.
Monitoring changes to NACLs will help ensure that AWS resources and services are not unintentionally exposed. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any changes to network gateway within a VPC
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Network gateways are required to send/receive traffic to a destination outside of a VPC. It is recommended that a metric filter and alarm be established for changes to network gateways.
Monitoring changes to network gateways will help ensure that all ingress/egress traffic traverses the VPC border via a controlled path. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any changes to all route tables within a VPC
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Route tables are required to send/receive traffic to a destination outside of a VPC. It is recommended that a metric filter and alarm be established for changes to route tables.
Monitoring changes to route tables will help ensure that all ingress/egress traffic is routed properly without any route leaks. -
Description: Ensure that metric filter and alarm is configured in CloudWatch logs to alert any changes to VPC settings within a region.
Explanation:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is possible to have more than one VPC within an account; in addition, it is possible to create a peering connection between two VPCs, enabling network traffic to route between them, potentially across accounts that should not be connected.
Monitoring VPC configuration changes will help ensure the integrity of the network. -
Description: Ensure that your Amazon Relational Database Service (RDS) instances have Deletion Protection feature enabled in order to protect them from being accidentally deleted.
Explanation:
This control is intended for RDS DB instances. However, it can also generate findings for Aurora DB instances, Neptune DB instances, and Amazon DocumentDB clusters. If these findings are not useful, then you can suppress them.
With the Deletion Protection safety feature enabled, your Amazon RDS database instances cannot be accidentally deleted, helping make sure that your data remains safe. Deletion protection prevents any existing or new RDS database instance from being deleted by users via the AWS Management Console, the CLI, or API calls, unless the feature is explicitly disabled. -
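A remediation sketch (boto3): turn on deletion protection for an existing DB instance. The instance identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Enable deletion protection; the change is applied immediately.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-orders-db",  # placeholder identifier
    DeletionProtection=True,
    ApplyImmediately=True,
)
```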
Description: Ensure that the RDS clusters are not using default RDS ports such as MySQL/Aurora port 3306, SQL Server port 1433, and PostgreSQL port 5432.
Explanation:
This control checks whether the RDS cluster uses a port other than the default port of the database engine.
If you use a known port to deploy an RDS cluster, an attacker can guess information about the cluster or instance. The attacker can use this information in conjunction with other information to connect to an RDS cluster or instance or gain additional information about your application.
When you change the port, you must also update the existing connection strings that were used to connect to the old port. You should also check the security group of the DB instance to ensure that it includes an ingress rule that allows connectivity on the new port. -
Description: Ensure that the RDS clusters are not using default RDS ports such as MySQL/Aurora port 3306, SQL Server port 1433, and PostgreSQL port 5432.
Explanation:
This control checks whether the RDS instance uses a port other than the default port of the database engine.
If you use a known port to deploy an RDS cluster or instance, an attacker can guess information about the instance. The attacker can use this information in conjunction with other information to connect to an RDS cluster or instance or gain additional information about your application.
When you change the port, you must also update the existing connection strings that were used to connect to the old port. You should also check the security group of the DB instance to ensure that it includes an ingress rule that allows connectivity on the new port. -
Description: S3 Buckets should be protected from ransomware attacks by configuring versioning and MFA Delete. Doing so will disallow immediate bucket content removal, data encryption, or any other harmful modifications.
Explanation:
Adding MFA Delete to an S3 bucket requires additional authentication when you change the versioning state of your bucket or delete an object version, adding another layer of security in the event your security credentials are compromised or unauthorized access is granted.
Disabled versioning is also considered a violation by this rule. The reason for that is that the attacker may make the bucket vulnerable by disabling object versioning with the s3:PutBucketVersioning permission.
Once MFA Delete is enabled on your sensitive and classified S3 buckets, users are required to present two forms of authentication. -
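A configuration sketch (boto3): enable versioning together with MFA Delete on a bucket. MFA Delete can only be changed using the bucket owner's root credentials with a valid MFA token; the bucket name, MFA device serial, and token below are placeholders.

```python
import boto3

s3 = boto3.client("s3")  # must be called with the root user's credentials

# The MFA parameter is the device serial number followed by the current token.
s3.put_bucket_versioning(
    Bucket="my-sensitive-bucket",  # placeholder bucket
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",  # placeholder serial + token
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```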
Description: Ensure that Web Application Firewall (WAF) logging is enabled.
Explanation:
AWS recommends that account owners enable logging for web access control lists (web ACLs). This ensures that your AWS WAF web ACLs are configured to capture information about all incoming requests. AWS WAF is a web application firewall service that lets you monitor web requests that are forwarded to Amazon API Gateway APIs, Amazon CloudFront distributions, or Application Load Balancers, in order to help protect them from attacks. -
Description: Ensure ECS task definition variables do not expose secrets
Explanation:
AWS recommends that account owners remove secrets from unencrypted locations, especially those that can be easily accessed, to reduce the risk of exposing data to third parties. -
Description: Ensure ECS services does not have public IP addresses assigned
Explanation:
Amazon ECS services should not be publicly accessible, as this may allow unintended access to your container application servers. -
Description: Ensure that the Customer managed keys in KMS are NOT scheduled for deletion
Explanation:
KMS keys cannot be recovered once deleted. Data encrypted under a KMS key is also permanently unrecoverable if the KMS key is deleted. If meaningful data has been encrypted under a KMS key scheduled for deletion, consider decrypting the data or re-encrypting the data under a new KMS key unless you are intentionally performing a cryptographic erasure.
When a KMS key is scheduled for deletion, a mandatory waiting period is enforced to allow time to reverse the deletion, if it was scheduled in error. The default waiting period is 30 days, but it can be reduced to as short as 7 days when the KMS key is scheduled for deletion. During the waiting period, the scheduled deletion can be canceled and the KMS key will not be deleted. -
Description: Ensure that the s3 bucket used to store cloudtrail logs is not publicly accessible.
Explanation:
CloudTrail log files are stored in an S3 bucket. The bucket policy or access control list (ACL) applied to the S3 bucket that contains CloudTrail logs should prevent public access.
Allowing public access to CloudTrail log content might aid an adversary in identifying weaknesses in the affected account’s use or configuration. -
Description: Ensure that the RDS instance has its default storage encrypted
Explanation:
Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance.
For databases that hold sensitive and critical data, it is highly recommended to implement encryption in order to protect your data from unauthorized access. With RDS encryption enabled, the data stored on the instance’s underlying storage, the automated backups, read replicas, and snapshots all become encrypted. -
Description: Ensure that the RDS instances are not accessible publicly.
Explanation:
In order to reduce your attack surface as much as possible, you need to ensure that your RDS instances are only accessible from internal IPs. There are important configuration options during database setup that allow you to define how much access is given to your databases; the security groups associated with your VPCs also play into database access control.
Unrestricted access to your RDS instance allows everyone on the internet to establish a connection with your database. This can lead to brute-force, DoS/DDoS, or SQL injection attacks. -
Description: Ensure that “Block Public Access” on a S3 bucket is enabled. S3 Block public access (bucket settings) prevents the accidental or malicious public exposure of data contained within the respective bucket(s)
Explanation:
Amazon S3 provides Block public access (bucket settings) and Block public access (account settings) to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an IAM principal with sufficient S3 permissions can enable public access at the bucket and/or object level.
While enabled, Block public access (bucket settings) prevents an individual bucket and its objects from becoming publicly accessible. Similarly, Block public access (account settings) prevents all buckets and their objects in an account from becoming publicly accessible.
Amazon S3 Block public access (bucket settings) prevents the accidental or malicious public exposure of data contained within the respective bucket(s).
Amazon S3 Block public access (account settings) prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account.
Whether to block public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case.
When you apply Block Public Access settings to a -
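A remediation sketch (boto3): enable all four Block Public Access settings at the bucket level. The bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Block public ACLs and public bucket policies, and ignore any existing public ACLs.
s3.put_public_access_block(
    Bucket="my-sensitive-bucket",  # placeholder bucket
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```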
Description: Ensure API Gateway has a WAF ACL attached
Explanation:
AWS recommends that account owners leverage AWS WAF to protect their API Gateway APIs from common web exploits such as SQL injection, Cross-Site Request Forgery (CSRF), and cross-site scripting (XSS) attacks. These could affect API availability and performance, compromise security, or consume excessive resources. -
Description: Ensure CloudFront distributions have logging enabled
Explanation:
AWS recommends that account owners enable CloudFront distribution logging to track all viewer requests. This information can be extremely useful during security audits, or as input data for various analytics/reporting tools. -
Description: Ensure CloudFront distributions are integrated with WAF
Explanation:
AWS recommends that account owners integrate CloudFront distributions with AWS WAF to help protect web applications from common exploits, such as SQL injection or cross-site scripting. -
Description: Ensure that CloudTrail logs are encrypted using AWS SSE-KMS while stored in an S3 bucket.
Explanation:
AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies.
AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data and uses Hardware Security Modules (HSMs) to protect the security of encryption keys.
By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMS–managed keys (SSE-KMS) for your CloudTrail log files.
It is recommended that CloudTrail be configured to use SSE-KMS.
Note:
Enabling server-side encryption encrypts the log files but not the digest files with SSE-KMS. Digest files are encrypted with Amazon S3-managed encryption keys (SSE-S3).
If you are using an existing S3 bucket with an S3 Bucket Key, CloudTrail must be allowed permission in the key policy to use the AWS KMS actions GenerateDataKey and DescribeKey. If cloudtrail.amazonaws.com is not granted those permissions in the key policy, you cannot create or update a trail. -
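A minimal boto3 sketch, assuming the trail already exists and the KMS key policy grants cloudtrail.amazonaws.com the GenerateDataKey and DescribeKey permissions noted above; the trail name and key alias are placeholders.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Switch the trail's log file encryption from SSE-S3 to SSE-KMS.
    cloudtrail.update_trail(
        Name="example-trail",
        KmsKeyId="alias/cloudtrail-logs",   # alias, key ID, or key ARN
    )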
Description: Amazon DynamoDB point-in-time recovery (PITR) provides automatic backups of your DynamoDB table data. When enabled, point-in-time recovery provides continuous backups until you explicitly turn it off. After you enable point-in-time recovery, you can restore to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
Explanation:
Point-In-Time-Recovery (PITR) is an automatic continuous backup that lets you restore your DynamoDB table and secondary indexes, global and local, to any point in time during the past 35 days. This setting does not interfere with on-demand backups but instead acts as an additional defense layer.
Point-in-time recovery helps protect your DynamoDB tables from accidental write or delete operations. With point-in-time recovery, you don’t have to worry about creating, maintaining, or scheduling on-demand backups. For example, suppose that a test script writes accidentally to a production DynamoDB table. With point-in-time recovery, you can restore that table to any point in time during the last 35 days. DynamoDB maintains incremental backups of your table.
In addition, point-in-time operations don’t affect performance or API latencies. -
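A short boto3 sketch (the table name is a placeholder) that turns on point-in-time recovery for an existing table.

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.update_continuous_backups(
        TableName="example-table",
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )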
Description: Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic without throttling. Application Auto Scaling decreases the throughput when the workload decreases so that you don’t pay for unused provisioned capacity.
Explanation:
Many database workloads are cyclical in nature or are difficult to predict in advance. For example, consider a social networking app where most of the users are active during daytime hours. The database must be able to handle the daytime activity, but there’s no need for the same levels of throughput at night. Another example might be a new mobile gaming app that is experiencing rapid adoption. If the game becomes too popular, it could exceed the available database resources, resulting in slow performance and unhappy customers. These kinds of workloads often require manual intervention to scale database resources up or down in response to varying usage levels.
Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don’t pay for unused provisioned capacity.
Enable DynamoDB auto scaling to automatically manage throughput capacity or use On-Demand capacity mode.
With Application Auto Scaling, you create a scaling policy for a table or a global secondary index. The scaling policy specifies whether you want to scale read capacity or write capacity (or both), and the minimum and maximum provisioned capacity unit settings for the table or index.
The scaling policy also contains a target utilization—the percentage of consumed provisioned throughput at a point in time. Application Auto Scaling uses a target tracking algorithm to adjust the provisioned throughput of the table (or index) upward or downward in response to actual workloads, so that the actual capacity utilization remains at or near your target utilization.
You can set the auto scaling target utilization values between 20 and 90 percent for your read and write capacity. -
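To illustrate the mechanics described above, the hedged boto3 sketch below registers a table's read capacity as a scalable target and attaches a target tracking policy at 70 percent utilization; the table name and capacity bounds are placeholder values. A matching pair of calls with the WriteCapacityUnits dimension covers write capacity.

    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the table's read capacity as a scalable target with min/max bounds.
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/example-table",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,
        MaxCapacity=100,
    )

    # Attach a target tracking policy that keeps consumed capacity near 70%.
    aas.put_scaling_policy(
        PolicyName="example-table-read-utilization",
        ServiceNamespace="dynamodb",
        ResourceId="table/example-table",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )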
Description: AWS Backup enables you to centralize and automate data protection across AWS services. AWS Backup offers a fully managed, policy-based service that further simplifies data protection at scale. You can schedule periodic or future backups by using backup plans, which include retention policies for your resources. AWS Backup creates the backups and deletes prior backups based on your retention schedule. AWS Backup removes the undifferentiated heavy lifting of manually making and deleting on-demand backups by automating the schedule and deletion for you.
Explanation:
Amazon DynamoDB offers two types of backup: point-in-time recovery (PITR) and on-demand. PITR provides continuous backups of your table and enables you to restore your table data to any point in time in the preceding 35 days. If you need to store backups of your data for longer than 35 days, you can use on-demand backup. On-demand backups provide a fully consistent snapshot of your table data and are retained until you explicitly delete them (even after the table is deleted). However, you may be used to deploying traditional backup solutions in your data centers and want to work with a centralized backup solution. This solution can schedule backups through jobs and handle tasks such as expiring and deleting older backups, monitoring the status of ongoing backups, verifying compliance, and finding and restoring backups, all from a central console.
You have the option of using AWS Backup, which provides you a more similar experience as your traditional backup solutions and simplifies your backup management by eliminating the need to use a different custom solution for each application you have to protect. -
Description: Verify that the default EBS volume of an EC2 instance is encrypted.
Explanation:
Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block Store (EBS) service. While disabled by default, forcing encryption at EBS volume creation is supported.
Default EBS volume encryption only applies to newly created EBS volumes. Existing EBS volumes are not converted automatically.
Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken. -
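A short boto3 sketch that turns on default EBS encryption for the current Region and verifies the setting; note that it only affects volumes created after the change.

    import boto3

    ec2 = boto3.client("ec2")

    ec2.enable_ebs_encryption_by_default()

    status = ec2.get_ebs_encryption_by_default()
    print("Default EBS encryption enabled:", status["EbsEncryptionByDefault"])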
Description: Ensure credentials (passwords and access keys) unused for 45 days or greater are disabled
Explanation:
AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys.
It is recommended that all credentials that have been unused for 45 or more days be removed or deactivated.
Disabling or removing unnecessary credentials will reduce the window of opportunity for credentials associated with a compromised or abandoned user to be used. -
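One possible way to find such credentials is the IAM credential report, as in the hedged boto3 sketch below; it flags console passwords unused for 45 or more days, and the same pattern extends to the access_key_1_last_used_date and access_key_2_last_used_date columns.

    import csv
    import io
    import time
    from datetime import datetime, timedelta, timezone

    import boto3

    iam = boto3.client("iam")

    # Request a fresh credential report and wait until it is ready.
    while iam.generate_credential_report()["State"] != "COMPLETE":
        time.sleep(2)

    report = iam.get_credential_report()["Content"].decode("utf-8")
    cutoff = datetime.now(timezone.utc) - timedelta(days=45)

    for row in csv.DictReader(io.StringIO(report)):
        last_used = row["password_last_used"]
        if row["password_enabled"] == "true" and last_used not in ("N/A", "no_information"):
            if datetime.fromisoformat(last_used) < cutoff:
                print(f"Console password unused for 45+ days: {row['user']}")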
Description: Ensure that IAM access keys (along with their secret keys) are changed every 90 days or less.
Explanation:
Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS.
Auditing credentials for authorized devices, users, and processes includes ensuring that IAM access keys are rotated as per organizational policy.
Changing the access keys on a regular schedule is a security best practice. It shortens the period an access key is active and reduces the business impact if the keys are compromised.
Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Access keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen. -
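A hedged boto3 sketch that lists active access keys older than 90 days; in practice the flagged keys would then be rotated (create a new key, update clients, then deactivate and delete the old one).

    from datetime import datetime, timedelta, timezone

    import boto3

    iam = boto3.client("iam")
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)

    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                if key["Status"] == "Active" and key["CreateDate"] < cutoff:
                    print(f"Rotate access key {key['AccessKeyId']} for user {user['UserName']}")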
Description: Ensure the password policy enforces that the user password requires at least one number
Explanation:
Password policies are used to enforce the creation and use of complex passwords. Your IAM password policy should be set to require the inclusion of different character types. The password policy should enforce that passwords contain at least one number; this increases security, especially against brute-force attacks. -
Description: Ensure the password policy enforces that the user password requires at least one symbol
Explanation:
Password policies are used to enforce the creation and use of complex passwords. Your IAM password policy should be set to require the inclusion of different character types. The password policy should enforce that passwords contain at least one symbol; this increases security, especially against brute-force attacks. -
Description: Ensure the password policy enforces that the user password requires at least one lowercase letter
Explanation:
Password policies are used to enforce the creation and use of complex passwords. Your IAM password policy should be set to require the inclusion of different character types. The password policy should enforce that passwords contain at least one lowercase letter; this increases security, especially against brute-force attacks. -
Description: Ensure the password policy enforces that the user password requires at least one uppercase letter
Explanation:
Password policies are used to enforce the creation and use of complex passwords. Your IAM password policy should be set to require the inclusion of different character types. The password policy should enforce that passwords contain at least one uppercase letter; this increases security, especially against brute-force attacks. -
Description: Ensure that all IAM user accounts that have console login privileges have Multi-Factor Authentication (MFA) enabled and enforced.
Explanation:
Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS console, they will be prompted for their username and password as well as for an authentication code from their virtual or physical MFA device.
Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that creates a time-sensitive key and have knowledge of a credential. -
Description: Ensure the password policy enforces that the user password requires a minimum length of 14 or greater.
Explanation:
Password policies are used to enforce the creation and use of complex passwords. Your IAM password policy should enforce a minimum password length of 14 characters or greater; this increases security, especially against brute-force attacks. -
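The preceding password-policy requirements (numbers, symbols, lowercase, uppercase, and a 14-character minimum) can be applied with a single account password policy update, as in the hedged boto3 sketch below; the password age and reuse settings are optional examples, not requirements from the controls above.

    import boto3

    iam = boto3.client("iam")

    iam.update_account_password_policy(
        MinimumPasswordLength=14,
        RequireNumbers=True,
        RequireSymbols=True,
        RequireLowercaseCharacters=True,
        RequireUppercaseCharacters=True,
        # Optional hardening; adjust to organizational policy:
        MaxPasswordAge=90,
        PasswordReusePrevention=24,
    )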
Description: Ensure that the Customer managed keys in KMS are enabled for key rotation
Explanation:
AWS KMS enables customers to rotate the backing key, which is key material stored in AWS KMS and is tied to the key ID of the KMS key. It’s the backing key that is used to perform cryptographic operations such as encryption and decryption. Automated key rotation currently retains all previous backing keys so that decryption of encrypted data can take place transparently.
CIS recommends that you enable KMS key rotation. Rotating encryption keys helps reduce the potential impact of a compromised key because data encrypted with a new key can’t be accessed with a previous key that might have been exposed. -
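A short boto3 sketch (the key ID is a placeholder) that enables automatic rotation on a customer managed key and confirms the setting.

    import boto3

    kms = boto3.client("kms")

    key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"   # placeholder customer managed key

    kms.enable_key_rotation(KeyId=key_id)
    status = kms.get_key_rotation_status(KeyId=key_id)
    print("Rotation enabled:", status["KeyRotationEnabled"])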
Description: Ensure that RDS cluster database snapshots are encrypted to prevent data leaks.
Explanation:
This control checks whether RDS cluster snapshots are encrypted.
Encrypting data at rest reduces the risk that an unauthenticated user gets access to data that is stored on disk. Data in RDS snapshots should be encrypted at rest for an added layer of security. -
Description: Ensure that your RDS Aurora clusters are using Multi-AZ deployment configurations for high availability and automatic failover support fully managed by AWS.
Explanation:
This control checks whether high availability is enabled for your RDS DB instances.
RDS DB instances should be configured for multiple Availability Zones (AZs). This ensures the availability of the data stored. Multi-AZ deployments allow for automated failover if there is an issue with Availability Zone availability and during regular RDS maintenance. -
Description: Ensure that your Amazon RDS database instances have Log Exports feature enabled in order to publish database log events directly to AWS CloudWatch Logs.
Explanation:
This control checks whether the following logs of Amazon RDS are enabled and sent to CloudWatch Logs:
Oracle: (Alert, Audit, Trace, Listener)
PostgreSQL: (Postgresql, Upgrade)
MySQL: (Audit, Error, General, SlowQuery)
MariaDB: (Audit, Error, General, SlowQuery)
SQL Server: (Error, Agent)
Aurora: (Audit, Error, General, SlowQuery)
Aurora-MySQL: (Audit, Error, General, SlowQuery)
Aurora-PostgreSQL: (Postgresql, Upgrade).
RDS databases should have relevant logs enabled. Database logging provides detailed records of requests made to RDS. Database logs can assist with security and access audits and can help to diagnose availability issues.
The Log Exports feature supports the following log types:
Error log – collects diagnostic messages generated by the database engine, together with startup and shutdown times.
General query log – contains a record of all SQL statements received from clients, plus the client connect and disconnect times.
Slow query log – contains a record of SQL statements that took longer than expected to execute and examined more than a defined number of rows (both thresholds are configurable).
Audit log – records database activity on the instance for audit purposes. -
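As an illustration, the hedged boto3 sketch below enables log export to CloudWatch Logs for a MySQL instance; the instance identifier is a placeholder and the valid log types depend on the database engine (see the list above).

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="example-db",
        CloudwatchLogsExportConfiguration={
            "EnableLogTypes": ["audit", "error", "general", "slowquery"],  # MySQL log types
        },
        ApplyImmediately=True,
    )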
Description: Ensure IAM Database Authentication feature is enabled in order to use AWS Identity and Access Management (IAM) service to manage database access to your DB Instances.
Explanation:
This control checks whether an RDS DB instance has IAM database authentication enabled.
IAM database authentication allows authentication to database instances with an authentication token instead of a password. Network traffic to and from the database is encrypted using SSL. -
Description: Ensure IAM Database Authentication feature is enabled in order to use AWS Identity and Access Management (IAM) service to manage database access to your DB cluster.
Explanation:
This control checks whether an RDS DB cluster has IAM database authentication enabled.
IAM database authentication allows for password-free authentication to database instances. The authentication uses an authentication token. Network traffic to and from the database is encrypted using SSL.
With this feature enabled, you don’t have to use a password when you connect to your MySQL/PostgreSQL database instances; instead, you use an authentication token. An authentication token is a unique string of characters with a lifetime of 15 minutes that AWS RDS generates on your request. IAM Database Authentication removes the need to store user credentials within the database configuration, because authentication is managed externally using AWS IAM. -
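A hedged boto3 sketch that enables IAM database authentication on an instance and then generates a short-lived authentication token for a database user; the instance identifier, host name, port, and user are placeholders.

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="example-db",
        EnableIAMDatabaseAuthentication=True,
        ApplyImmediately=True,
    )

    # Generate a 15-minute token used in place of a password when connecting.
    token = rds.generate_db_auth_token(
        DBHostname="example-db.abc123.us-east-1.rds.amazonaws.com",
        Port=3306,
        DBUsername="app_user",
    )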
Description: Ensure enhanced monitoring is enabled for your RDS DB instances.
Explanation:
In Amazon RDS, Enhanced Monitoring enables a more rapid response to performance changes in underlying infrastructure. These performance changes could result in a lack of availability of the data. Enhanced Monitoring provides real-time metrics of the operating system that your RDS DB instance runs on. An agent is installed on the instance. The agent can obtain metrics more accurately than is possible from the hypervisor layer.
Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU. -
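A minimal boto3 sketch that turns on Enhanced Monitoring at 60-second granularity; the instance identifier and the monitoring role (which must allow RDS to publish to CloudWatch Logs) are placeholders.

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="example-db",
        MonitoringInterval=60,   # seconds; 0 disables Enhanced Monitoring
        MonitoringRoleArn="arn:aws:iam::111122223333:role/rds-monitoring-role",
        ApplyImmediately=True,
    )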
Description: Ensure that your Amazon Relational Database Service (RDS) clusters have Deletion Protection feature enabled in order to protect them from being accidentally deleted.
Explanation:
This control is intended for RDS DB clusters. However, it can also generate findings for Aurora DB instances, Neptune DB instances, and Amazon DocumentDB clusters. If these findings are not useful, then you can suppress them.
Enabling cluster deletion protection is an additional layer of protection against accidental database deletion or deletion by an unauthorized entity.
When deletion protection is enabled, an RDS cluster cannot be deleted. Before a deletion request can succeed, deletion protection must be disabled. -
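A short boto3 sketch (the cluster identifier is a placeholder) that enables deletion protection on an RDS/Aurora cluster; the equivalent flag exists on modify_db_instance for standalone instances.

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_cluster(
        DBClusterIdentifier="example-cluster",
        DeletionProtection=True,
    )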
Description: Ensure that RDS database snapshots are encrypted to prevent data leaks.
Explanation:
This control checks whether RDS DB snapshots are encrypted.
This control is intended for RDS DB instances. However, it can also generate findings for snapshots of Aurora DB instances, Neptune DB instances, and Amazon DocumentDB clusters. If these findings are not useful, then you can suppress them.
Encrypting data at rest reduces the risk that an unauthenticated user gets access to data that is stored on disk. Data in RDS snapshots should be encrypted at rest for an added layer of security. -
Description: Check whether the S3 bucket is encrypted with AWS Key Management Service (KMS) as the default option.
Explanation:
AWS S3 supports server side encryption with AWS Key Management Service (SSE-KMS) to encrypt data at rest.
SSE-KMS provides additional benefits, along with additional charges:
KMS is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud.
KMS uses customer master keys (CMKs) to encrypt the S3 objects.
The master key is never made available.
KMS enables you to centrally create encryption keys and define the policies that control how keys can be used.
Allows audit of keys used, to prove they are being used correctly, by inspecting logs in AWS CloudTrail.
Allows keys to be temporarily disabled and re-enabled.
Allows keys to be rotated regularly.
Security controls in AWS KMS can help meet encryption-related compliance requirements.
SSE-KMS enables separate permissions for the use of an envelope key (that is, a key that protects the data’s encryption key) that provides added protection against unauthorized access of the objects in S3.
SSE-KMS provides the option to create and manage encryption keys yourself, or use a default customer master key (CMK) that is unique to you, the service you’re using, and the region you’re working in.
Creating and Managing CMK gives more flexibility, including the ability to create, rotate, disable, and define access controls, and to audit the encryption keys used to protect the data.
Data keys used to encrypt the data are also encrypted and stored alongside the data they protect and are unique to each object.
Process flow
An application or AWS service client requests an encryption key to encrypt data and passes a reference to a master key under the account.
Client requests are authenticated based on whether they have access to use the master key.
A new data encryption key is created, and a copy of it is encrypted under the master key.
Both the data key and encrypted data key are returned to the client.
Data key is used to encrypt customer data and then deleted as soon as is practical.
Encrypted data key is stored for later use and sent back to AWS KMS when the source data needs to be decrypted.
S3 Default Encryption – Setting the default encryption behavior for an S3 bucket so that all new objects are encrypted when they are stored in the bucket using AWS KMS keys stored in AWS KMS (SSE-KMS) enhances the security posture of Data at Rest. -
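For illustration, the hedged boto3 sketch below sets SSE-KMS as the default encryption for a bucket and enables an S3 Bucket Key to reduce KMS request costs; the bucket name and key ARN are placeholders.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_encryption(
        Bucket="example-bucket",
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
                    },
                    "BucketKeyEnabled": True,
                }
            ]
        },
    )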
Description: Checks whether an Amazon S3 bucket has Object Lock enabled by default. The rule is NON_COMPLIANT if Object Lock is not enabled.
Explanation:
Object Lock is an Amazon S3 feature that blocks object version deletion during a user-defined retention period, to enforce retention policies as an additional layer of data protection and/or for strict regulatory compliance.
The feature provides two ways to manage object retention: retention periods and legal holds. A retention period specifies a fixed time frame during which an S3 object remains locked, meaning that it can’t be overwritten or deleted. A legal hold implements the same protection as a retention period, but without an expiration date. Instead, a legal hold remains active until you explicitly remove it.
Ensure that your Amazon S3 buckets have Object Lock feature enabled in order to prevent the objects they store from being deleted.
Used in combination with versioning, which protects objects from being overwritten, AWS S3 Object Lock enables you to store your S3 objects in an immutable form, providing an additional layer of protection against object changes and deletion.
S3 Object Lock feature can also help you meet regulatory requirements within your organization when it comes to data protection. -
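A hedged boto3 sketch: Object Lock can only be enabled when a bucket is created (which also enables versioning), after which a default retention rule can be applied; the bucket name, retention mode, and retention period are placeholder choices.

    import boto3

    s3 = boto3.client("s3")

    # Object Lock must be requested at bucket creation time.
    # Add CreateBucketConfiguration for Regions other than us-east-1.
    s3.create_bucket(
        Bucket="example-locked-bucket",
        ObjectLockEnabledForBucket=True,
    )

    # Optional default retention applied to new object versions.
    s3.put_object_lock_configuration(
        Bucket="example-locked-bucket",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
        },
    )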
Description: Ensure that the Network Access Control List (NACL) of every VPC in the region and AWS account does not allow traffic through well-known remote administration ports, such as SSH to port 22 and RDP to port 3389.
Explanation:
The Network Access Control List (NACL) function provides stateless filtering of ingress and egress network traffic to AWS resources. It is recommended that no NACL allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389.
Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. -
Description: Ensure that your VPC Flow logs are enabled in order to capture network activity for auditing purposes.
Explanation:
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. After you create a flow log, you can retrieve and view its data in the chosen destination.
Flow logs can help you with a number of tasks, such as:
Diagnosing overly restrictive security group rules
Monitoring the traffic that is reaching your instance
Determining the direction of the traffic to and from the network interfaces
Flow log data is collected outside of the path of your network traffic, and therefore does not affect network throughput or latency. You can create or delete flow logs without any risk of impact to network performance.
While setting up the VPC flow log, setting the filter to “Reject” will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least-privilege security group engineering, setting the filter to “All” can be very helpful in discovering existing traffic flows required for proper operation of an already running environment. -
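A hedged boto3 sketch that creates a flow log for a VPC and publishes it to CloudWatch Logs; the VPC ID, log group, and delivery role are placeholders, and TrafficType can be set to "REJECT" or "ALL" as discussed above.

    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_flow_logs(
        ResourceIds=["vpc-0123456789abcdef0"],
        ResourceType="VPC",
        TrafficType="ALL",                     # or "REJECT" to reduce log volume
        LogDestinationType="cloud-watch-logs",
        LogGroupName="vpc-flow-logs",
        DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/vpc-flow-logs-role",
    )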
Description: This control checks if an Amazon EventBridge custom event bus has a resource-based policy attached. This control fails if the custom event bus doesn’t have a resource-based policy.
Explanation:
By default, an EventBridge custom event bus doesn’t have a resource-based policy attached. This allows principals in the account to access the event bus. By attaching a resource-based policy to the event bus, you can limit access to the event bus to specified accounts, as well as intentionally grant access to entities in another account. -
Description: This control checks if event replication is enabled for an Amazon EventBridge global endpoint. The control fails if event replication isn’t enabled for a global endpoint.
Explanation:
Global endpoints help make your application Regional-fault tolerant. To start, you assign an Amazon Route 53 health check to the endpoint. When failover is initiated, the health check reports an “unhealthy” state. Within minutes of failover initiation, all custom events are routed to an event bus in the secondary Region and are processed by that event bus. When you use global endpoints, you can enable event replication. Event replication sends all custom events to the event buses in the primary and secondary Regions using managed rules. We recommend enabling event replication when setting up global endpoints. Event replication helps you verify that your global endpoints are configured correctly. Event replication is required to automatically recover from a failover event. If you don’t have event replication enabled, you’ll have to manually reset the Route 53 health check to “healthy” before events are rerouted back to the primary Region.
If you’re using custom event buses, you’ll need a custom even bus in each Region with the same name and in the same account for failover to work properly. Enabling event replication can increase your monthly cost. For information about pricing, see Amazon EventBridge pricing. -
Description: This control checks whether an Amazon EventBridge Bus has tags with the specific keys defined in the parameter requiredTagKeys. The control fails if the EventBridge bus doesn’t have any tag keys or if it doesn’t have all the keys specified in the parameter requiredTagKeys. If the parameter requiredTagKeys isn’t provided, the control only checks for the existence of a tag key and fails if the event bus isn’t tagged with any key. System tags, which are automatically applied and begin with aws:, are ignored.
Explanation:
A tag is a label that you assign to an AWS resource, and it consists of a key and an optional value. You can create tags to categorize resources by purpose, owner, environment, or other criteria. Tags can help you identify, organize, search for, and filter resources. Tagging also helps you track accountable resource owners for actions and notifications. When you use tagging, you can implement attribute-based access control (ABAC) as an authorization strategy, which defines permissions based on tags. You can attach tags to IAM entities (users or roles) and to AWS resources. You can create a single ABAC policy or a separate set of policies for your IAM principals. You can design these ABAC policies to allow operations when the principal’s tag matches the resource tag.
Don’t add personally identifiable information (PII) or other confidential or sensitive information in tags. Tags are accessible to many AWS services, including AWS Billing. For more tagging best practices, see Tagging your AWS resources in the AWS General Reference. -
Description: This control checks whether an Amazon EventBridge rule has tags with the specific keys defined in the parameter requiredTagKeys. The control fails if the EventBridge rule doesn’t have any tag keys or if it doesn’t have all the keys specified in the parameter requiredTagKeys. If the parameter requiredTagKeys isn’t provided, the control only checks for the existence of a tag key and fails if the rule isn’t tagged with any key. System tags, which are automatically applied and begin with aws:, are ignored.
Explanation:
A tag is a label that you assign to an AWS resource, and it consists of a key and an optional value. You can create tags to categorize resources by purpose, owner, environment, or other criteria. Tags can help you identify, organize, search for, and filter resources. Tagging also helps you track accountable resource owners for actions and notifications. When you use tagging, you can implement attribute-based access control (ABAC) as an authorization strategy, which defines permissions based on tags. You can attach tags to IAM entities (users or roles) and to AWS resources. You can create a single ABAC policy or a separate set of policies for your IAM principals. You can design these ABAC policies to allow operations when the principal’s tag matches the resource tag.
Don’t add personally identifiable information (PII) or other confidential or sensitive information in tags. Tags are accessible to many AWS services, including AWS Billing. For more tagging best practices, see Tagging your AWS resources in the AWS General Reference. -
Description: Ensure security groups do not allow ingress from IPv6 ::/0 to remote server administration ports
Explanation:
Description:
Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389.
Rationale:
Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise.
Impact:
When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the ::/0 inbound rule.
Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. We recommend that no security group allow unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389, using either the TCP (6), UDP (17), or ALL (-1) protocols. Permitting public access to these ports increases resource attack surface and the risk of resource compromise.
This control checks whether an Amazon EC2 security group allows ingress from ::/0 to remote server administration ports (ports 22 and 3389). The control fails if the security group allows ingress from ::/0 to port 22 or 3389. -
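One possible audit approach, sketched with boto3 below, scans all security groups in the Region and flags rules that allow ::/0 to reach port 22 or 3389 (or all ports via protocol -1).

    import boto3

    ec2 = boto3.client("ec2")
    ADMIN_PORTS = {22, 3389}

    def covers_admin_port(perm):
        # Protocol -1 means all ports; otherwise check the FromPort/ToPort range.
        if perm.get("IpProtocol") == "-1":
            return True
        lo, hi = perm.get("FromPort"), perm.get("ToPort")
        return lo is not None and any(lo <= p <= hi for p in ADMIN_PORTS)

    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg["IpPermissions"]:
                if covers_admin_port(perm) and any(
                    r.get("CidrIpv6") == "::/0" for r in perm.get("Ipv6Ranges", [])
                ):
                    print(f"{sg['GroupId']} allows ::/0 to a remote administration port")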
Description: AWS Config should be enabled and use the service-linked role for resource recording
Explanation:
The AWS Config service performs configuration management of supported AWS resources in your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items, and any configuration changes within resources. Global resources are resources that are available in any Region.
The control is evaluated as follows:
If the current Region is set as your aggregation Region, the control produces PASSED findings only if AWS Identity and Access Management (IAM) global resources are recorded (if you have enabled controls that require them).
If the current Region is set as a linked Region, the control doesn’t evaluate whether IAM global resources are recorded.
If the current Region isn’t in your aggregator, or if cross-Region aggregation isn’t set up in your account, the control produces PASSED findings only if IAM global resources are recorded (if you have enabled controls that require them).
Control results aren’t impacted by whether you choose daily or continuous recording of changes in resource state in AWS Config. However, the results of this control can change when new controls are released if you have configured automatic enablement of new controls or have a central configuration policy that automatically enables new controls. In these cases, if you don’t record all resources, you must configure recording for resources that are associated with new controls in order to receive a PASSED finding.
Security Hub security checks work as intended only if you enable AWS Config in all Regions and configure resource recording for controls that require it.
AWS_CFG_01 requires that AWS Config is enabled in all Regions in which you use Security Hub.
Since Security Hub is a Regional service, the check performed for this control evaluates only the current Region for the account.
To allow security checks against IAM global resources in a Region, you must record IAM global resources in that Region. Regions that don’t have IAM global resources recorded will receive a default PASSED finding for controls that check IAM global resources. Since IAM global resources are identical across AWS Regions, we recommend that you record IAM global resources in only the home Region (if cross-Region aggregation is enabled in your account). IAM resources will be recorded only in the Region in which global resource recording is turned on.
The IAM globally recorded resource types that AWS Config supports are IAM users, groups, roles, and customer managed policies. You can consider disabling Security Hub controls that check these resource types in Regions where global resource recording is turned off.
This control checks whether AWS Config is enabled in your account in the current AWS Region, records all resources that correspond to controls that are enabled in the current Region, and uses the service-linked AWS Config role. The name of the service-linked role is AWSServiceRoleForConfig. If you don’t use the service-linked role and don’t set the includeConfigServiceLinkedRoleCheck parameter to false, the control fails because other roles might not have the necessary permissions for AWS Config to accurately record your resources. -
Description: IAM identities like users, groups and roles should not have the AWSCloudShellFullAccess policy attached
Explanation:
AWS CloudShell provides a convenient way to run CLI commands against AWS services. The AWS managed policy AWSCloudShellFullAccess provides full access to CloudShell, which allows file upload and download capability between a user’s local system and the CloudShell environment. Within the CloudShell environment, a user has sudo permissions, and can access the internet. As a result, attaching this managed policy to an IAM identity gives them the ability to install file transfer software and move data from CloudShell to external internet servers. We recommend following the principle of least privilege and attaching narrower permissions to your IAM identities.
This control checks whether an IAM identity (user, role, or group) has the AWS managed policy AWSCloudShellFullAccess attached. The control fails if an IAM identity has the AWSCloudShellFullAccess policy attached. -
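A short boto3 sketch that lists every user, group, and role with the AWSCloudShellFullAccess managed policy attached, so the attachments can be reviewed and replaced with narrower permissions.

    import boto3

    iam = boto3.client("iam")

    resp = iam.list_entities_for_policy(
        PolicyArn="arn:aws:iam::aws:policy/AWSCloudShellFullAccess"
    )

    for user in resp["PolicyUsers"]:
        print("User:", user["UserName"])
    for group in resp["PolicyGroups"]:
        print("Group:", group["GroupName"])
    for role in resp["PolicyRoles"]:
        print("Role:", role["RoleName"])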
Description: Macie automated sensitive data discovery should be enabled
Explanation:
Macie automates discovery and reporting of sensitive data, such as personally identifiable information (PII), in Amazon Simple Storage Service (Amazon S3) buckets. With automated sensitive data discovery, Macie continually evaluates your bucket inventory and uses sampling techniques to identify and select representative S3 objects from your buckets. Macie then analyzes the selected objects, inspecting them for sensitive data. As the analyses progress, Macie updates statistics, inventory data, and other information that it provides about your S3 data. Macie also generates findings to report sensitive data that it finds.
This control checks whether automated sensitive data discovery is enabled for an Amazon Macie administrator account. The control fails if automated sensitive data discovery isn’t enabled for a Macie administrator account. This control applies only to administrator accounts. -
Description: Amazon Macie should be enabled
Explanation:
Amazon Macie discovers sensitive data using machine learning and pattern matching, provides visibility into data security risks, and enables automated protection against those risks. Macie automatically and continually evaluates your Amazon Simple Storage Service (Amazon S3) buckets for security and access control, and generates findings to notify you of potential issues with the security or privacy of your Amazon S3 data. Macie also automates discovery and reporting of sensitive data, such as personally identifiable information (PII), to provide you with a better understanding of the data that you store in Amazon S3.
This control checks whether Amazon Macie is enabled for an account. The control fails if Macie isn’t enabled for the account. -
Description: Security Hub collects security data from across AWS accounts and services, and helps you analyze your security trends and identify the highest-priority security issues. When you enable Security Hub, it begins to consume, aggregate, organize, and prioritize findings from AWS services that you have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie.
Explanation:
AWS Security Hub provides you with a comprehensive view of your security state in AWS and helps you check your environment against security industry standards and best practices enabling you to quickly assess the security posture across your AWS accounts.
It is recommended AWS Security Hub be enabled in all regions. AWS Security Hub requires AWS Config to be enabled.
Cloudcatcher evaluates AWS Security Hub configuration per region and fails if it’s not enabled. -
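A minimal boto3 sketch that enables Security Hub in the current Region with the default standards; it assumes AWS Config is already enabled there, and the call must be repeated per Region (or managed centrally through an administrator account).

    import boto3

    securityhub = boto3.client("securityhub")

    securityhub.enable_security_hub(
        EnableDefaultStandards=True,   # subscribe to the default security standards
    )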
Description: S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don’t log data events and so it is recommended to enable Object-level logging for S3 buckets.
Explanation:
Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity using Amazon CloudWatch Events.
Enabling logging for these object level events may significantly increase the number of events logged and may incur additional cost. -
Description: S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don’t log data events and so it is recommended to enable Object-level logging for S3 buckets.
Explanation:
Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity using Amazon CloudWatch Events.
Enabling logging for these object level events may significantly increase the number of events logged and may incur additional cost. -
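As a hedged example, the boto3 sketch below adds an event selector to an existing trail so that object-level (data event) activity for a specific bucket is logged; the trail and bucket names are placeholders, and ReadWriteType can be narrowed to "ReadOnly" or "WriteOnly" to limit volume and cost.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    cloudtrail.put_event_selectors(
        TrailName="example-trail",
        EventSelectors=[
            {
                "ReadWriteType": "All",
                "IncludeManagementEvents": True,
                "DataResources": [
                    {
                        "Type": "AWS::S3::Object",
                        # Trailing slash scopes logging to all objects in this bucket.
                        "Values": ["arn:aws:s3:::example-bucket/"],
                    }
                ],
            }
        ],
    )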
Description: CloudTrail trails should be tagged
Explanation:
A tag is a label that you assign to an AWS resource, and it consists of a key and an optional value. You can create tags to categorize resources by purpose, owner, environment, or other criteria. Tags can help you identify, organize, search for, and filter resources. Tagging also helps you track accountable resource owners for actions and notifications. When you use tagging, you can implement attribute-based access control (ABAC) as an authorization strategy, which defines permissions based on tags. You can attach tags to IAM entities (users or roles) and to AWS resources. You can create a single ABAC policy or a separate set of policies for your IAM principals. You can design these ABAC policies to allow operations when the principal’s tag matches the resource tag.
This control checks whether an AWS CloudTrail trail has tags with the specific keys defined in the parameter requiredTagKeys. The control fails if the trail doesn’t have any tag keys or if it doesn’t have all the keys specified in the parameter requiredTagKeys. If the parameter requiredTagKeys isn’t provided, the control only checks for the existence of a tag key and fails if the trail isn’t tagged with any key. System tags, which are automatically applied and begin with aws:, are ignored. -
Description: N/A
Explanation:
N/A -
Description: N/A
Explanation:
N/A -
Description: N/A
Explanation:
N/A -
Description: N/A
Explanation:
N/A -
Description: N/A
Explanation:
N/A -
Description: N/A
Explanation:
N/A -
Description: N/A
Explanation:
N/A -
Description: Ensure that your Amazon EC2 instances are using the appropriate tenancy model, i.e. Multi-Tenant Hardware (shared) or Single-Tenant Hardware (dedicated), in order to comply with your organization’s regulatory requirements.
Explanation:
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Tenancy defines how EC2 instances are distributed across physical hardware and affects pricing. -
Description: Ensure unused AWS EBS Volumes are deleted
Explanation:
Identify unused (unattached) Amazon Elastic Block Store (EBS) volumes available within your AWS cloud account and delete these volumes in order to lower the cost of your AWS bill and reduce the risk of confidential and sensitive data leaks.
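A hedged boto3 sketch that lists volumes in the “available” (unattached) state; the delete call is commented out because volumes should be reviewed, and snapshotted if needed, before removal.

    import boto3

    ec2 = boto3.client("ec2")

    paginator = ec2.get_paginator("describe_volumes")
    for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
        for volume in page["Volumes"]:
            print("Unattached volume:", volume["VolumeId"], volume["Size"], "GiB")
            # After review (and a snapshot if the data matters):
            # ec2.delete_volume(VolumeId=volume["VolumeId"])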