Restrict Outbound Connections from an OpenShift Platform on IBM Cloud (ROKS)

Ujjwal Chakraborty
5 min read · Sep 26, 2023


This article covers how to restrict outbound connections for an OpenShift platform running on IBM Cloud, and how to use a forward proxy as a gateway for outbound communication. Although it focuses on IBM Cloud, the same concept can be applied to any other public cloud platform such as AWS or Azure.

Our goal is to:

1. Set up an OpenShift cluster in IBM Cloud

2. Restrict outbound connections

3. Install a forward proxy

4. Allow outbound connections only through the forward proxy

5. Configure the OpenShift cluster to use the proxy for all outbound communication.

Prerequisites:

1. Valid access to IBM Cloud; the user must have Admin access to these resources: VPC, subnets, virtual servers, public gateways, access control lists, and finally permission to create an OpenShift cluster.

2. The user should be familiar with the above resources and their usage. For more details, see the cloud docs: https://cloud.ibm.com/docs

At a high level, our setup will look something like this:

As the diagram depicts, the outbound connection from the OCP platform is marked in red, whereas the outbound connection from the VSI (Virtual Server Instance) is marked in green. Now let us see how we can achieve this setup.

1. Set up a VPC and Subnet

First, we will create our own VPC (if one does not exist yet) and then associate a subnet with it.

Follow the link below to create the VPC from either the UI or the CLI:

https://cloud.ibm.com/docs/vpc?topic=vpc-creating-vpc-resources-with-cli-and-api

While creating the VPC, you also need to create or use an existing subnet. The subnet defines the assigned IP ranges for the VPC. You can bring your own subnet / IPv4 address prefix while creating the VPC (ref: https://cloud.ibm.com/docs/vpc?topic=vpc-configuring-address-prefixes&interface=cli).

Make sure that when you create a new subnet, you attach a public gateway to it. You can find the toggle option at the end of the subnet configuration page.

Once the VPC and subnet are created and associated, you can verify by going to the VPC section and filtering by the region in which the VPC was created. It should show the subnet and the associated public gateway.
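For reference, the same VPC and subnet setup can be sketched with the IBM Cloud CLI (the `is` infrastructure plugin). The resource names, region, zone, and CIDR below are placeholders, and flag names may vary slightly between CLI versions (check the `--help` output):

```shell
# Log in and target the region (region/zone are examples)
ibmcloud login --sso
ibmcloud target -r us-south

# Create the VPC
ibmcloud is vpc-create my-vpc

# Create a public gateway in the zone, then a subnet attached to it
ibmcloud is public-gateway-create my-pgw my-vpc us-south-1
ibmcloud is subnet-create my-subnet my-vpc \
  --zone us-south-1 \
  --ipv4-cidr-block 10.240.0.0/24 \
  --pgw my-pgw
```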

2. Create a ROKS (Red Hat OpenShift Kubernetes Service) cluster in the VPC

This can be done from either the CLI or the UI. Ref: https://cloud.ibm.com/docs/openshift?topic=openshift-vpc_rh_tutorial
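As a sketch, the cluster can be created with the IBM Cloud CLI; the name, zone, IDs, worker flavor, and OpenShift version below are placeholders to adapt to your account:

```shell
# Create a ROKS cluster on VPC Gen 2 infrastructure (placeholder values)
ibmcloud oc cluster create vpc-gen2 \
  --name my-roks \
  --zone us-south-1 \
  --vpc-id <vpc-id> \
  --subnet-id <subnet-id> \
  --flavor bx2.4x16 \
  --workers 2 \
  --version 4.12_openshift
```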

3. Managing VPC ACL (Access Control List) to restrict outbound connection

By default, when we create a subnet, an ACL (Access Control List) is also created, which controls overall communication in and out of the VPC. The default ACL has an any-to-any rule for both inbound and outbound traffic. To find the ACL related to your subnet, go to the subnet and click on the Access Control List.

Once you click on the ACL link, you will see the list of inbound and outbound rules. Here we have to modify the outbound rules to restrict outbound connections.

i. Remove the “Any IP” to “Any IP” rule from the Outbound Rules section.

ii. Add only the minimum required rules as mentioned here: https://cloud.ibm.com/docs/openshift?topic=openshift-vpc-acls (refer to section (6), Outbound Rules).

iii. Verify this configuration by logging in to the OpenShift platform, opening a shell in any running pod/container, and making an outbound curl request to any public website; the request should fail.

iv. Please note: if you find that the OpenShift cluster ingress health check is failing, get the ingress IP and add it back to the outbound rules, with the ingress IP as the source, “Any” as the destination, and “TCP” as the protocol. Otherwise, look at the related reasons as mentioned here: https://cloud.ibm.com/docs/openshift?topic=openshift-ingress-status

If step (iii) validates successfully, you have successfully cut off outbound traffic from your VPC, and therefore from your OpenShift cluster.
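The verification in step (iii) can be done roughly as follows, assuming you are already logged in to the cluster with `oc` and `<pod-name>` is any running pod:

```shell
# Open a shell inside a running pod
oc rsh <pod-name>

# Inside the pod: with the ACL locked down, this outbound request
# should now time out or fail
curl -sv --max-time 10 https://www.google.com
```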

4. Set up a forward proxy

As a next step, we have to set up a forward proxy that can communicate with public services. Our aim is to use the forward proxy as a gateway to reach the internet. So, it is obvious that we have to allow traffic from the forward proxy host to the internet through an ACL outbound rule, which we will configure in a later step.

In our use case we are using Squid proxy, which we will set up on a Linux distribution.

First, create a VSI (Virtual Server Instance) inside the same VPC. The VSI can be created under the compute section. While creating the VSI, make sure to select the VPC created earlier under the “Networking” section. More details: https://cloud.ibm.com/docs/vpc?topic=vpc-about-advanced-virtual-servers

Next, get the “Reserved IP” of the VSI and add it to the ACL outbound rules as mentioned in the earlier section.

To add the ACL rule for the VSI:

Click Create Outbound Rules -> “Allow” -> Protocol “All” -> Source: the reserved IP of the VSI -> Destination “Any” -> Create.
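The same rule can be sketched from the CLI; `<acl-id>` and `<vsi-reserved-ip>` are placeholders, and the exact argument order may differ between CLI versions (check `ibmcloud is network-acl-rule-add --help`):

```shell
# Allow all outbound traffic from the VSI's reserved IP to anywhere
ibmcloud is network-acl-rule-add <acl-id> allow outbound all \
  <vsi-reserved-ip> 0.0.0.0/0
```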

Next, make sure you can access the VSI from your workstation using SSH or via the IBM Cloud Shell; then you are ready to configure the Squid proxy. Step-by-step instructions on how to set up the Squid proxy can be found here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/configuring-the-squid-caching-proxy-server
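A minimal squid.conf along these lines restricts the proxy to clients from the VPC subnet; the CIDR and port below are placeholder assumptions to adapt to your subnet:

```
# /etc/squid/squid.conf -- minimal sketch
acl vpc_clients src 10.240.0.0/24   # only allow clients from the VPC subnet (placeholder CIDR)
http_access allow vpc_clients       # permit proxying for those clients
http_access deny all                # deny everyone else
http_port 3128                      # Squid's default listening port
```

After editing, restart the service (for example, systemctl restart squid) so the configuration takes effect.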

Once the Squid proxy is set up, verify it by logging in to the OpenShift platform, opening any running pod/container, and making an outbound curl request to any public website. It should work now when you use the curl -x flag to pass the forward proxy, which is the newly set up Squid proxy server and port.

E.g. curl -kv -x http://proxy-server:proxy-port http://www.google.com

Additionally, you can set up an authentication mechanism for the Squid proxy as needed, or enable TLS.

5. Use the forward proxy in openshift cluster

OpenShift supports a cluster-wide proxy option by default, which can be configured using the “Proxy” CR. Ref: https://docs.openshift.com/container-platform/4.9/networking/enable-cluster-wide-proxy.html

However, the proxy settings are not injected into workloads by default. So, applications need a mechanism to inject the proxy environment variables so that the underlying communication can use them to reach any public endpoints as required.
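As a sketch, the cluster-wide Proxy CR would look like the following, where `<squid-host>` is the reserved IP or hostname of the Squid VSI and the port and noProxy CIDR are placeholders:

```yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<squid-host>:3128
  httpsProxy: http://<squid-host>:3128
  # keep cluster-internal and VPC-internal traffic off the proxy
  noProxy: .cluster.local,.svc,10.240.0.0/24
```

Applying this (for example, with oc apply -f) updates the cluster-wide proxy; individual workloads still need HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables set in their pod specs if they do not read the cluster proxy configuration themselves.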

That’s all for this topic, thanks for reading.
