Researchers who work in containers sometimes need to expose ports so that the container can be accessed remotely. Frequent examples are:
- Using a Jupyter notebook that runs within the container
- Using PyCharm to run Python commands remotely.
When using Docker, researchers expose ports by declaring them when starting the container.
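For example, a Jupyter port might be published like this (the image name and port number are illustrative only, not specific to Run:AI):

```shell
# Publish container port 8888 on host port 8888 so the notebook
# is reachable at http://<host>:8888 (image and port are examples)
docker run -p 8888:8888 jupyter/base-notebook
```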
Run:AI is based on Kubernetes. Kubernetes abstracts away the container's location, which complicates the exposure of ports. Kubernetes offers several alternative ways to expose ports:
- NodePort - Exposes the Service on each Node's IP at a static port (the NodePort). You will be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>, regardless of which node the container actually resides on.
- LoadBalancer - Useful for cloud environments. Exposes the Service externally using a cloud provider’s load balancer.
- Ingress - Allows access to Kubernetes services from outside the Kubernetes cluster. You configure access by creating a collection of rules that define which inbound connections reach which services. More information about ingress can be found at https://kubernetes.io/docs/concepts/services-networking/ingress/.
- Port Forwarding - Simple port forwarding that allows access to the container via localhost:<Port>.
See https://kubernetes.io/docs/concepts/services-networking/service/ for further details. Minimal sketches of the NodePort and Port Forwarding options are shown below.
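As an illustration, the two simplest options can be exercised with plain kubectl commands. The pod name and port below are assumptions for the sketch, not values produced by Run:AI:

```shell
# NodePort: expose port 8888 of an existing pod on a static port on every node
# (pod name and port are placeholders)
kubectl expose pod my-jupyter-pod --type=NodePort --port=8888

# Look up the assigned NodePort, then browse to <NodeIP>:<NodePort>
kubectl get service my-jupyter-pod

# Port forwarding: tunnel localhost:8888 to port 8888 inside the pod
kubectl port-forward pod/my-jupyter-pod 8888:8888
```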
As an Administrator, you will need to choose a method and train researchers on its usage.
Of the four options, the Ingress method requires an in-cluster configuration during installation or upgrade of Run:AI. Specifically, you must provide the IP address range with which users can connect to services inside the container.
After submitting a job through the Run:AI CLI, the researcher can run runai list and receive the URL for connecting to the service (see the picture at the end of this document). A hedged sketch of this flow follows.
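The flow might look roughly like the following. The submit flags and job name shown here are hypothetical placeholders (exact flag names depend on your Run:AI CLI version); only runai list is taken from the text above:

```shell
# Submit a job that exposes a port (flag names are illustrative, not exact CLI syntax)
runai submit my-notebook --image jupyter/base-notebook --port 8888

# List jobs; the output includes the URL for connecting to the exposed service
runai list
```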
To configure the IP range list during cluster creation (or update), run:
```shell
helm upgrade -i runai ... \
  --set localLoadBalancer.ipRangeFrom=10.0.2.1,localLoadBalancer.ipRangeTo=10.0.2.20
```
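Assuming this range is handed out as external IPs for exposed services (an assumption for this sketch, not a statement of the installer's behavior), you can verify it after the upgrade by listing services across namespaces:

```shell
# Services of type LoadBalancer should show an EXTERNAL-IP within 10.0.2.1-10.0.2.20
kubectl get services --all-namespaces | grep LoadBalancer
```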