K8s cluster in peered VPC can't reach RDS cluster
Problem
I have two K8s clusters in the same VPC that I want to connect to a MySQL
Aurora RDS cluster. One of the clusters can reach the RDS cluster just fine.
The other, however, cannot. I'll call these `eks-cluster-working` and
`eks-cluster-broken`.

I have a security group allowing traffic to the cluster:

| Type | Protocol | Port range | Source | Description |
|---|---|---|---|---|
| MYSQL/Aurora | TCP | 3306 | sg-1 (eks-cluster-sg-working) | This rule works |
| MYSQL/Aurora | TCP | 3306 | sg-2 (eks-cluster-sg-broken) | This one does not |

Both `eks-cluster-working` and `eks-cluster-broken` have the same default
security groups that AWS creates when making an EKS cluster, and they are both
on the same K8s version (1.18). The one exception is that `eks-cluster-working`
has an extra security group for the load balancer service, since it's hosting a
web service, and the other one (`eks-cluster-broken`) is not. Both clusters
have outbound traffic rules that can reach `0.0.0.0/0`.

I have zero firewalls, firewall policies, and network firewall rule groups set
up in the region the clusters are hosted in.
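For reference, an ingress rule like the ones in the table above could be expressed in Terraform roughly as follows. This is a sketch only; the resource name and the variables holding the security-group IDs are placeholders, not taken from the original configuration:

```
# Hypothetical sketch of one MYSQL/Aurora ingress rule from the table.
# var.rds_sg_id and var.eks_cluster_sg_id are placeholder variables.
resource "aws_security_group_rule" "rds_from_eks" {
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = var.rds_sg_id         # SG attached to the RDS cluster
  source_security_group_id = var.eks_cluster_sg_id # e.g. sg-1 or sg-2
  description              = "MySQL/Aurora from EKS cluster"
}
```

Referencing the EKS cluster's security group as the source (rather than a CIDR) works across a same-region VPC peering connection, which is why a rule shaped like this can succeed for one cluster and fail for the other even when the rules look symmetric.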
This is the Terraform configuration I have set up for peering from my VPCs
to the default VPC.
```
resource "aws_vpc_peering_connection" "default_to_environment" {
  count       = local.num_environment_vpcs
  peer_vpc_id = data.aws_vpc.environment[count.index].id
  auto_accept = true
  vpc_id      = var.vpc_default_id

  tags = {
    Name      = "Peer from default to ${data.aws_vpc.environment[count.index].tags["Name"]}"
    tf_module = "vpc"
  }
}

resource "aws_route_table" "r" {
  vpc_id = var.vpc_default_id
  # route = local.route_table_routes

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "igw-"
  }

  dynamic "route" {
    for_each = [for r in local.route_table_routes : {
      cidr_block                = r.cidr_block
      vpc_peering_connection_id = r.vpc_peering_connection_id
    }]
    content {
      cidr_block                = route.value.cidr_block
      vpc_peering_connection_id = route.value.vpc_peering_connection_id
    }
  }
}
```
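Note that peered traffic needs a route on both sides: the block above only populates the default VPC's route table. Each environment VPC also needs a return route pointing the default VPC's CIDR at the peering connection. A sketch of what that could look like (the `aws_route` resource name and the `data.aws_vpc.default` data source here are assumptions, not part of the original configuration):

```
# Hypothetical: return route in each environment VPC's main route table,
# sending traffic for the default VPC's CIDR over the peering connection.
resource "aws_route" "environment_to_default" {
  count                     = local.num_environment_vpcs
  route_table_id            = data.aws_vpc.environment[count.index].main_route_table_id
  destination_cidr_block    = data.aws_vpc.default.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.default_to_environment[count.index].id
}
```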
Solution
For my peering connection, I went to:

"Actions" -> "Edit DNS Settings" -> "Allow accepter VPC (vpc-) to resolve DNS of requester VPC (vpc-) hosts to private IP"

and checked it. (It was unchecked before.) I had to wait a little while before this took effect.
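If you'd rather capture this fix in Terraform than click it in the console, the AWS provider exposes the same settings via the `aws_vpc_peering_connection_options` resource. A sketch, assuming it is applied to the peering connections defined in the question:

```
# Hypothetical sketch: enable private DNS resolution across each peering
# connection, mirroring the console "Edit DNS Settings" fix above.
resource "aws_vpc_peering_connection_options" "dns" {
  count                     = local.num_environment_vpcs
  vpc_peering_connection_id = aws_vpc_peering_connection.default_to_environment[count.index].id

  accepter {
    allow_remote_vpc_dns_resolution = true
  }

  requester {
    allow_remote_vpc_dns_resolution = true
  }
}
```

Since the peering connections here use `auto_accept = true` (same account), both the accepter and requester sides can be managed from the one configuration.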
Context
StackExchange DevOps Q#12979, answer score: 4