Unable to connect to RDS instance from outside VPC (ERROR 2003 (HY000): Can't connect to MySQL server)

Solution 1:

For an RDS instance in a VPC to be "publicly" (Internet) accessible, all of the subnets it is attached to must be "public" -- as opposed to "private" -- subnets of the VPC.

A public subnet is essentially defined as a subnet whose route to "the Internet" -- or at least to any Internet destinations you need to reach -- points to the VPC's Internet Gateway object (igw-xxxxxxxx). Typically this is the default route, with a destination of 0.0.0.0/0. Public subnets must be used for instances (including RDS) that will have an associated public IP address, and should not be used for instances that will not have public IP addresses, since private addresses do not work across the Internet without translation.

A private subnet, by contrast, has its route table configured to reach Internet destinations via another EC2 instance, typically a NAT instance. This shows up in the VPC route table associated with that subnet as i-xxxxxxxx, rather than "igw." That machine (which will itself be on a different subnet than the ones for which it acts as a route destination) performs network address translation, allowing the private-IP-only instances to transparently make outbound Internet requests using the NAT machine's public IP. Instances with a public IP address cannot interact properly with the Internet if attached to a private subnet.
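
To make the distinction concrete, here is a minimal sketch (using boto3, with a placeholder subnet ID) that inspects the route table explicitly associated with a subnet and classifies it by where its default route points. This is illustrative only; a subnet with no explicit route table association falls back to the VPC's main route table, which this sketch does not handle.

```python
import boto3

ec2 = boto3.client("ec2")

def classify_subnet(subnet_id):
    # Look up route tables explicitly associated with this subnet.
    # (Subnets with no explicit association use the VPC's main route
    # table, which this simple sketch does not fall back to.)
    resp = ec2.describe_route_tables(
        Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
    )
    for table in resp["RouteTables"]:
        for route in table["Routes"]:
            if route.get("DestinationCidrBlock") == "0.0.0.0/0":
                if route.get("GatewayId", "").startswith("igw-"):
                    return "public: default route via an Internet Gateway"
                if route.get("InstanceId", "").startswith("i-"):
                    return "private: default route via a NAT instance"
                return "private: default route via something else"
    return "neither: no default route at all (the situation in this question)"

print(classify_subnet("subnet-0123456789abcdef0"))  # placeholder subnet ID
```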

In this specific case, the subnets associated with the RDS instance could not be classified as either private or public, because they had no default route at all. Adding a default route through the "igw" object to the VPC route table for those subnets -- or, as OP did, adding a static route to the specific Internet IP address where connectivity was needed -- fixes the connectivity issue.
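
As a sketch of that fix with boto3 (the route table and gateway IDs are placeholders; only run something like this against a route table that, as here, has no default route yet):

```python
import boto3

ec2 = boto3.client("ec2")

# Add a default route pointing at the VPC's Internet Gateway, which
# makes every subnet associated with this route table a public subnet.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # route table of the RDS subnets
    DestinationCidrBlock="0.0.0.0/0",      # or a narrower static route, as OP used
    GatewayId="igw-0123456789abcdef0",     # the VPC's Internet Gateway
)
```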

However... if you experience a similar issue, you can't simply change the route table, or build new route tables and associate the subnets with them, unless nothing else is already working correctly on those subnets, because the change could reasonably be expected to break existing connectivity. The correct course, in that case, is to provision the instances on different subnets that have the correct route table entries in place.

When setting up a VPC, it's ideal to clearly define the subnet roles and fully provision them with the necessary routes when the VPC is first commissioned. It's also important to remember that the entire VPC "LAN" is a software-defined network. Unlike in a physical network, where the router can become a bottleneck and it's often sensible to place machines with heavy traffic between them on the same subnet, traffic crossing subnets carries no performance disadvantage in a VPC. Machines should be placed on subnets that are appropriate for the machine's IP addressing needs -- public address, public subnet; no public address, private subnet.

More discussion of the logistics of private/public subnets in VPC can be found in "Why Do We Need Private Subnet in VPC" (at Stack Overflow).

Solution 2:

This already has a great answer, but note that AWS does create a public subnet for you when you choose the "publicly accessible" option. For me, the problem was instead the default VPC security group: I had been looking at the Network ACL rules, not the Security Group rules. (Choosing the "publicly accessible" option in RDS attaches the security group but does not automatically add the needed inbound rule.)

In the RDS console, open the instance, identify its security group, and click through to it. Then, under "Inbound rules," add a rule for port 3306 from your connecting IPv4 address, x.x.x.x/32 (or 0.0.0.0/0 if you want the entire Internet to be able to connect -- but be careful with that).
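
The same rule can be added from code; a minimal boto3 sketch, with a placeholder security group ID and source address:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound MySQL (TCP 3306) from a single IPv4 address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # the RDS instance's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{
            "CidrIp": "203.0.113.5/32",  # your connecting IPv4 address
            "Description": "MySQL access from my workstation",
        }],
    }],
)
```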