
Back to Basics: OpenShift Projects

In between playing (cough) with the new technologies that are emerging, I have a day job that involves enthusing to people about OpenShift and other Open Source technologies at Red Hat. In the last week or so I've dealt with three customers who had little issues that were solvable just by playing about with the 'Project' object in OpenShift, so I thought I'd whip up a quick, concise (and hopefully fun) little blog post on what these objects are and what you can do with them, from a tips-and-tricks perspective.

So, a Project is basically, and I use that term carefully, a Kubernetes 'namespace'. And to wind it back to simplicity, a Kubernetes namespace is a bucket: a grouping of objects that have the same ownership applied to them. Basically, a way to marshal resources (and I hesitate to use the word 'label' here).
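As a sketch of how plain Kubernetes expresses that bucket (the name here is illustrative):

```yaml
# A namespace is just a named bucket for resources.
apiVersion: v1
kind: Namespace
metadata:
  name: sandbox          # illustrative name
  labels:
    kubernetes.io/metadata.name: sandbox
```

Everything created "inside" the namespace simply carries that name in its own metadata.namespace field.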

You can think of a Project in OpenShift as a namespace++. Again, put simply, OpenShift maintains an object type of Project that represents extensions on a namespace; in reality what identifies an OpenShift Project, other than the type definition, is a set of ‘immutable’ annotations. And yes, I put ‘immutable’ in inverted commas as will become apparent further on.

So, if you look at this little extract of the yaml for a project in OpenShift you will see a set of annotations that were added by OpenShift upon project creation:

apiVersion: project.openshift.io/v1
kind: Project
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/requester: opentlc-mgr
    openshift.io/sa.scc.mcs: s0:c26,c5
    openshift.io/sa.scc.supplemental-groups: 1000660000/10000
    openshift.io/sa.scc.uid-range: 1000660000/10000

Not shown here, but the Project also has a name field, which must be unique across the cluster.

Now these annotations are interesting, and they show off a superb little security feature built into OpenShift. The description and display-name are just labels for the UI; the requester is the user that created the Project in the first place.
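The display-name and description can be set at creation time; a sketch, with an illustrative project name (the flags themselves are standard oc options):

```shell
# Create a project with UI-friendly labels (project name is illustrative).
oc new-project sandbox \
  --display-name="Sandbox" \
  --description="Scratch project for experiments"
```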

The sa.scc fields are even cooler, but I'll get to those in a minute. For now I'm going to try to edit the Project and change the creator, thus:

#
# projects.project.openshift.io "rhuki-sandbox" was not valid:
# * metadata.annotations[openshift.io/requester]: Invalid value: "notme": field is immutable, try updating the namespace
#
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/requester: notme
    openshift.io/sa.scc.mcs: s0:c26,c5
    openshift.io/sa.scc.supplemental-groups: 1000660000/10000
    openshift.io/sa.scc.uid-range: 1000660000/10000

And that’s what happens when I try to save it. The requester field is immutable.

The sa.scc fields are more critical to the Project; I'll give you a great real-world example of how I worked around the immutability of the object to fix a problem, and it will make sense. The sa.scc.mcs annotation refers to the SELinux multi-category security (MCS) label that is applied to all files within the container.

A quick mention of this: SELinux is that thing 97% of system administrators immediately turn off (too many people know what setenforce 0 does 🙂 ). OpenShift applies it rigorously to the file systems it generates for a container; the container is bound by the label generated for the Project. If you look inside a Pod running within the sandbox project, the UID and GID match the range defined in the sa.scc.(x) annotations, and every file in the container's directories (listed with ls -alZ) has the SELinux MCS value from sa.scc.mcs applied.
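You can check this for yourself with a session along these lines (the Pod name and path are illustrative):

```shell
# Check the UID/GID the Pod is actually running as; expect the uid to fall
# in the project's sa.scc.uid-range (1000660000/10000 in the example above).
oc exec my-pod -- id

# List files with their SELinux context; each file should carry the MCS
# pair from sa.scc.mcs (s0:c26,c5 in the example above).
oc exec my-pod -- ls -alZ /opt/app-root
```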

So, my real world case. I had a customer who was attaching external storage to their cluster using NFS. Not only that, but an NFS system running off of a Windows server (don’t ask).

This server served file systems that were owned by a given UID; the whole file structure was only readable by, say, user 1002. When they attached this storage to the cluster and exposed it as a Persistent Volume, the Pods couldn't access the files; the Project auto-assigns the UID and GID range, and it's immutable.
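For context, the storage side looked roughly like this; the server address and export path are made up, and the UID ownership lived on the NFS server itself:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: windows-nfs-pv        # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com   # made-up address
    path: /exports/data       # made-up export; files readable only by UID 1002
```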

So they were stuck. Kinda.

See, when I say OpenShift Projects are extensions on Namespaces, that should have given you a clue…

The CLI provided for OpenShift is called oc and is built on top of the standard Kubernetes CLI, kubectl. An OpenShift Project is a superset of the Namespace object, so it turns out you can use kubectl to edit the Namespace object that backs the Project object. And change the 'immutable' fields.

So what I did was use kubectl to edit the Namespace object and change the UID and GID ranges.
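I used an interactive kubectl edit; a non-interactive sketch of the same change, using the values that show up in the describe output, would be:

```shell
# Annotate the backing Namespace directly; this bypasses the Project API's
# immutability check. Values match the describe output for this project.
kubectl annotate namespace sandbox \
  openshift.io/sa.scc.uid-range=1001/1 \
  openshift.io/sa.scc.supplemental-groups=1001/1 \
  --overwrite
```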

Then when I do an ‘oc describe project sandbox’ I get:

uther@ilawson-mac ~ % oc describe project sandbox
Name:			sandbox
Created:		4 weeks ago
Labels:			kubernetes.io/metadata.name=sandbox
Annotations:		openshift.io/description=
			openshift.io/display-name=
			openshift.io/requester=uther
			openshift.io/sa.scc.mcs=s0:c26,c20
			openshift.io/sa.scc.supplemental-groups=1001/1
			openshift.io/sa.scc.uid-range=1001/1
Display Name:		<none>
Description:		<none>
Status:			Active
Node Selector:		<none>
Quota:			<none>
Resource limits:	<none>

And when I delete the Pod in the Project, it is recreated with the new UID and GID range applied.

Yeah, it's a little hack, but now the Pod could access the exposed file system. Normally it's fine to just let OpenShift control the UID/GID.

By utherp0

Amateur thinker, professional developer
