DO425
Last update: Tue Jan 14 23:15:49 UTC 2020 by @luckylittle
Objectives
Understand, identify, and work with containerization features
Deploy a preconfigured application and identify crucial features such as namespaces, SELinux labels, and cgroups
Deploy a preconfigured application with security context constraint capabilities and view the application’s capability set
Configure security context constraints
Use trusted registries
Load images into a registry
Query images in a registry
Work with trusted container images
Identify a trusted container image
Sign images
View signed images
Scan images
Load signed images into a registry
Build secure container images
Perform simple S2I builds
Implement S2I build hooks
Automate builds using Jenkins
Automate scanning and code validations as part of the build process
Control access to OpenShift Container Platform clusters
Configure users with different permission levels, access, and bindings
Configure OpenShift Container Platform to use Red Hat Identity Management services (IdM) for authentication
Query users and groups in IdM
Log into OpenShift Container Platform using an IdM managed account
Configure single sign-on (SSO)
Install SSO authentication
Configure OpenShift Container Platform to use SSO
Integrate web applications with SSO
Automate policy-based deployments
Configure policies to control the use of images and registries
Use secrets to provide access to external registries
Automatically pull and use images from a registry
Use triggers to verify that automated deployments work
Manage orchestration
Restrict nodes on which containers may run
Use quotas to limit resource utilization
Use secrets to automate access to resources
Configure network isolation
Create software-defined networks (SDN)
Associate containers and projects with SDNs
Configure and manage secure container storage
Configure and secure file-based container storage
Configure and secure block-based container storage
1. Describing Host Security Technologies
INTRODUCING THE RHEL AND CRI-O CONTAINER TOOLS
crictl = tool to interface with the CRI-O container engine from Kubernetes; invoke crictl locally from an OpenShift master or node, not remotely like oc
skopeo = tool to manage container images stored in the local file system and in container registries
podman = tool to start and manage standalone containers on OCI-compliant container engines (podman build = buildah)
buildah = tool to build container images
INSPECTING THE LINUX NAMESPACES
unshare = command to create new namespaces
lsns = command to list all the namespaces on the system
nsenter = command to run a program in an existing namespace; if you do not provide a command as argument, nsenter runs /bin/bash
runc = Open Container Initiative runtime
When you create a pod, OpenShift runs the 'pod' process to create these namespaces and place the container processes in them. This means that all containers in a pod share the same network, the same System V IPC objects, and have the same host name.
How to find PID in OpenShift?
SECURING CONTAINERS WITH SELINUX
With OpenShift, container processes always get the container_t context type when they start, and the files and directories that the containers need to access on the host system get the container_file_t context type. MCS (Multi-Category Security) is an SELinux feature that lets containers protect themselves from other containers by taking advantage of the level part of the context:
system_u:system_r:container_t:s0:c4,c9 (s0 = sensitivity, c4,c9 = categories). The sensitivity is not used, but the categories are; category values range from c0 to c1023. When a container starts, the system assigns two random categories to the processes in the container. OpenShift behaves differently when assigning the SELinux categories: it allocates the two random categories at the project level. Therefore, and by default, all pods and containers in a project get the same category pair. This is useful when multiple containers in the project need to access the same shared volume.
MANAGING RESOURCES WITH CGROUPS
LAB 1.1
LISTING AVAILABLE LINUX CAPABILITIES & LIMITING THEM
MANAGING CAPABILITIES IN CONTAINERS
Podman has two options for managing capabilities: --cap-add and --cap-drop. Default common capabilities granted by podman:
cap_chown, cap_mknod, cap_dac_override, cap_audit_write, cap_setfcap, cap_fsetid
Avoid CAP_SYS_ADMIN, which is too broad.
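A quick illustration (the image name is only an example):

```bash
# Drop every capability, then add back only what the application needs
podman run --rm --cap-drop=all --cap-add=net_bind_service \
    registry.lab.example.com/rhel7/httpd-example

# Inspect the effective capability set of PID 1 inside a running container
podman exec <container> grep Cap /proc/1/status
```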
INTRODUCING SECURE COMPUTING MODE seccomp
If a process attempts to perform a system call that it is not allowed to perform, the process is terminated according to the policy that is in place
Two modes:
A) Strict mode allows a process to make only four system calls: read(), write(), _exit(), and sigreturn(). With this seccomp mode enabled, processes cannot fork new threads or monitor network activity.
B) seccomp-bpf: a kernel extension that allows generic system call filtering. For example, you can define a rule that allows a process to access only certain files. seccomp allows you to define a profile that contains a set of filters, which are applied to every system call submitted from a process to the kernel.
RESTRICTING PROCESSES WITH seccomp
To enable seccomp protection, a parent process sets a profile right before forking a child process
Podman allows you to use the --security-opt option to attach a security profile to your container. Two annotations:
A) seccomp.security.alpha.kubernetes.io/pod
B) container.seccomp.security.alpha.kubernetes.io/<container_name>
An example of the custom_policy.json:
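The original file is not reproduced here; a minimal sketch of what such a profile can look like (allow everything, but block the chown-related system calls; newer profiles use a names array per rule, older ones a single name field):

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["chown", "fchown", "fchownat"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```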
Attach this policy to a container using Podman and test it:
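For example, assuming the profile sketched above (the UBI image is only an example):

```bash
podman run --rm -it \
    --security-opt seccomp=./custom_policy.json \
    registry.access.redhat.com/ubi7/ubi bash

# inside the container, chown is now blocked by the profile:
chown root:root /tmp    # -> Operation not permitted
```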
Another example:
Default seccomp profile
Provides a sane default for running containers with seccomp and disables around 44 system calls out of 300+. It is moderately protective while providing wide application compatibility. The default Docker profile can be found:
less /etc/docker/seccomp.json - defaultAction is deny
In effect, the profile is a whitelist which denies access to system calls by default, then whitelists specific system calls. The profile works by defining a defaultAction of SCMP_ACT_ERRNO and overriding that action only for specific system calls. The effect of SCMP_ACT_ERRNO is to cause a Permission Denied error. Next, the profile defines a specific list of system calls which are fully allowed, because their action is overridden to be SCMP_ACT_ALLOW. Finally, some specific rules are for individual system calls such as personality, and others, to allow variants of those system calls with specific arguments.
strace -cf ping 172.25.250.13 shows table with used system calls
Limitations:
Filtering policies apply to the entire pod or container, and not only to the application running inside a container. Consider also the system calls that the container runtime makes when starting the container.
OpenShift does not yet support policy precedence. If a developer defines a custom profile for their containers, it overrides the default profile provided by OpenShift.
Identifying System Calls that Processes Invoke:
Creating a container with the SYS_PTRACE capability. This capability allows a container to trace arbitrary processes using ptrace.
Invoking the strace command from inside the container.
Locating the commands that are invoked.
Updating the security policy to include all relevant system calls.
Instantiating a new container with the updated security profile.
LAB 1.2
2. Establishing Trusted Container Images
Red Hat OpenShift Container Platform v3.11 points to the new registry.redhat.io registry by default
QUAY
Configuring Jenkins to support Quay integration
To use Quay as the container image registry, some requirements must be addressed: The Quay repository must use a valid SSL certificate to communicate with OpenShift and Jenkins. If you are using self-signed certificates, each node from OpenShift must have the self-signed certificate in the Quay's certificate directory (/etc/docker/certs.d/<quay-URI>). Furthermore, the Jenkins slave container must have the certificate to sign the container image as well as the skopeo command line to push changes to the registry.
LAB 2.1
Using Image Annotations for Security
Each annotation supports these fields:
name (provider display name),
timestamp of the scan,
description (not required),
reference,
scannerVersion,
compliant (yes/no),
summary (label [critical/important], data, severityIndex, reference)
Example:
A security scanner adds the annotation images.openshift.io/deny-execution=true. To define a policy so that the admission plugin prevents non-compliant images from running, edit /etc/origin/master/master-config.yaml:
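A sketch of the relevant admissionConfig stanza for the ImagePolicy admission plugin (the rule name is arbitrary; verify the exact fields against the documentation for your version):

```yaml
admissionConfig:
  pluginConfig:
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        kind: ImagePolicyConfig
        executionRules:
        - name: reject-deny-execution-images      # arbitrary rule name
          reject: true
          onResources:
          - resource: pods
          - resource: builds
          matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: "true"
          skipOnResolutionFailure: true
```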
Then restart the master services: master-restart api; master-restart controllers
Image signing and verification
The signer server is the host responsible for generating the signature that embeds the image manifest digest and for publishing the signature to the signature server
On the server, the /etc/containers/registries.d/ location contains configuration files that specify where signatures are stored after their generation and where to download signatures from for each registry - e.g. registry.yaml:
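A sketch of such a file (host names and paths are examples): sigstore is where clients look up existing signatures, sigstore-staging is where newly generated signatures are written locally.

```yaml
docker:
  registry.lab.example.com:
    sigstore: http://registry.lab.example.com/sigstore
    sigstore-staging: file:///var/lib/atomic/sigstore
```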
Configure clients (nodes pulling images) - you can have different policies for different nodes:
Allow only images from the registry.lab.example.com server. All other images are rejected:
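A sketch of the corresponding /etc/containers/policy.json (the GPG key path is an assumption; depending on the setup the allowed registry entry can also use insecureAcceptAnything instead of signedBy):

```json
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "registry.lab.example.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/rpm-gpg/example-signing-key.gpg"
        }
      ]
    }
  }
}
```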
LAB 2.2
Inspecting image layers
Introducing Clair
Clair only analyses images based on Red Hat Enterprise Linux, Alpine Linux, Debian, Oracle Linux, and Ubuntu because it only retrieves the vulnerabilities from these system vendors or projects
Clair also limits its scan to the distribution packages and does NOT check vulnerabilities in your application code, or libraries or artifacts retrieved from other sources
LAB 2.3
FINAL LAB 2
3. Implementing Security in the Build Process
Implementing Image Change Triggers
LAB 3.1
Ensure Jenkins is deployed
To create Jenkins pipeline, you have to create this resource:
Integration point in a Jenkins Pipeline - slave hosts are started as containers using a Jenkins Kubernetes plugin:
<name> is the name of the slave used in the Jenkinsfile (agent label); <image> is the container image used to start the build process. There are many types of Jenkins slaves.
Example of the agent definition:
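The original listing is not reproduced here; a sketch of a declarative pipeline using such a slave (the 'maven' label is an assumption and must match a pod template configured in the Kubernetes plugin):

```groovy
pipeline {
    agent {
        // 'maven' is the <name> of the slave pod template
        node { label 'maven' }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
    }
}
```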
LAB 3.2
FINAL LAB 3
4. Managing User Access Control
RBAC resources:
Users = can make requests to OpenShift API
Service Accounts = used for delegating certain tasks to OpenShift
Groups
Roles = collections of rules
Rules = define verbs that users/groups can use with a given resource
Security Context Constraints = control the actions pod/container can perform
Role bindings = roles to users/groups
Two levels:
Cluster-wide RBAC - applicable across all projects
Local RBAC - apply to a given project
The following excerpt shows how to include a SCC to a role. This gives privileges to the user or the group that uses this role to access the restricted-scc security context constraint:
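A sketch of the rule (the restricted-scc name comes from the surrounding text):

```yaml
rules:
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  resourceNames:
  - restricted-scc
  verbs:
  - use
```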
A good list with all resources and verbs: oc describe clusterrole.rbac
Some examples of the role definition:
This "Role" is allowed to read the resource "Pods" in the core API group:
This "Role" is allowed to read and write the "Deployments" in both the "extensions" and "apps" API groups:
This "Role" is allowed to read "Pods" and read/write "Jobs" resources in API groups:
This "Role" is allowed to read a "ConfigMap" named "my-config" (must be bound with a "RoleBinding" to limit to a single "ConfigMap" in a single namespace):
This "ClusterRole" is allowed to read the resource "nodes" in the core group (because a Node is cluster-scoped, this must be bound with a "ClusterRoleBinding" to be effective):
This "ClusterRole" is allowed to "GET" and "POST" requests to the non-resource endpoint "/healthz" and all subpaths (must be in the "ClusterRole" bound with a "ClusterRoleBinding" to be effective):
Determining User Privileges
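For example, you can check what a given account may do (user and project names are illustrative):

```bash
oc auth can-i create pods --as=developer       # may user 'developer' create pods?
oc adm policy who-can delete pod -n myproject  # which users/groups can delete pods?
```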
INTRODUCING TOKENS
LAB 4.1
CONFIGURING AN OPENSHIFT IDENTITY PROVIDER FOR RED HAT IDENTITY MANAGEMENT (IdM)
OpenShift masters can be configured with different identity providers that allow an OpenShift cluster to delegate user authentication and group membership management to different identity stores.
To configure an OpenShift LDAPPasswordIdentityProvider identity provider to integrate with an IdM domain, you need the following information about your IdM domain and servers:
DNS domain name of your IdM domain (organization.example.com)
FQDN of one of your IdM servers (ldap1.organization.example.com)
LDAP user name with read access to the entire user accounts tree (uid=admin,cn=users,cn=accounts,dc=organization,dc=example,dc=com)
LDAP container of the user accounts tree (cn=users,cn=accounts,dc=organization,dc=example,dc=com)
Public key TLS certificate of your IdM domain (the /etc/ipa/ca.crt file on any server or client of your IdM domain)
Example stanza under the identityProviders attribute in the OpenShift master configuration file /etc/origin/master/master-config.yaml:
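The original stanza is not reproduced here; a sketch using the values listed above (attribute mappings, CA path, and password are assumptions that may differ in your environment):

```yaml
identityProviders:
- name: idm_ldap
  challenge: true
  login: true
  mappingMethod: claim
  provider:
    apiVersion: v1
    kind: LDAPPasswordIdentityProvider
    attributes:
      id: ['dn']
      preferredUsername: ['uid']
      name: ['cn']
      email: ['mail']
    bindDN: uid=admin,cn=users,cn=accounts,dc=organization,dc=example,dc=com
    bindPassword: Secret123                      # example only
    ca: /etc/origin/master/ipa-ca.crt            # copy of /etc/ipa/ca.crt
    insecure: false
    url: ldaps://ldap1.organization.example.com/cn=users,cn=accounts,dc=organization,dc=example,dc=com?uid
```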
SYNCHRONIZING GROUPS BETWEEN OPENSHIFT AND IdM
Configuring LDAP Synchronization Connection Parameters (similar to LDAPPasswordIdentityProvider):
MANAGING OPENSHIFT USERS AND IDENTITIES
You may need to delete the identity resource for a user if the same user name exists on different identity providers.
OpenShift retains identities for deleted users, and these identities may prevent a new user from logging in if that user has the same name as an old user:
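For example (identity and user names are illustrative):

```bash
oc get identity
oc delete identity idm_ldap:uid=developer,cn=users,cn=accounts,dc=organization,dc=example,dc=com
oc delete user developer
```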
LAB 4.2
DEPLOYING SINGLE SIGN-ON ON OPENSHIFT
Passthrough vs. re-encryption SSO templates
Typically coming from registry.redhat.io/redhat-sso-7/sso72-openshift:1.0 (or v1.1, v1.2 ...)
To integrate an application with Red Hat's SSO server you define and configure, at minimum, one 'realm', one or more 'clients', and one or more 'users':
Web console: https://<sso-fqdn>/auth/admin (or use the bin/kcadm.sh admin CLI)
CONFIGURING AN OPENSHIFT IDENTITY PROVIDER FOR SSO
The OpenIDIdentityProvider identity provider allows OpenShift to delegate authentication to an SSO server using the OpenID Connect standard (configured in master-config.yaml).
The OpenID Connect API endpoints of your SSO realm follow the format: https://<sso-server-fqdn>/auth/realms/<RealmName>/protocol/openid-connect/<operation>
operation = Name of an OpenID Connect API operation, such as 'auth', 'token', and 'userinfo'
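A sketch of the identityProviders stanza (realm, client ID, host name, and CA path are examples):

```yaml
identityProviders:
- name: rh_sso
  challenge: false
  login: true
  mappingMethod: claim
  provider:
    apiVersion: v1
    kind: OpenIDIdentityProvider
    clientID: openshift
    clientSecret: <client-secret-from-sso>
    ca: /etc/origin/master/sso-ca.crt            # CA that signed the SSO certificate
    claims:
      id: [sub]
      preferredUsername: [preferred_username]
      name: [name]
      email: [email]
    urls:
      authorize: https://sso.apps.lab.example.com/auth/realms/OCP/protocol/openid-connect/auth
      token: https://sso.apps.lab.example.com/auth/realms/OCP/protocol/openid-connect/token
      userInfo: https://sso.apps.lab.example.com/auth/realms/OCP/protocol/openid-connect/userinfo
```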
LAB 4.3
FINAL LAB 4
5. Controlling the Deployment Environment
REVIEWING SECRETS AND CONFIGMAPS
You can store the registry credentials in a secret and instruct OpenShift to use that secret when it needs to push and pull images from the registry.
SECRETS
Individual secrets are limited to 1MB in size.
Or in YAML:
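A minimal sketch (names and values are illustrative; data values are base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
type: Opaque
data:
  username: ZGV2ZWxvcGVy        # "developer"
  password: c2VjcmV0            # "secret"
```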
How to use the above secret in the deployment config:
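For example, injecting the secret values as environment variables (container and secret names are illustrative):

```yaml
spec:
  template:
    spec:
      containers:
      - name: myapp
        env:
        - name: REGISTRY_USERNAME
          valueFrom:
            secretKeyRef:
              name: registry-credentials
              key: username
        - name: REGISTRY_PASSWORD
          valueFrom:
            secretKeyRef:
              name: registry-credentials
              key: password
```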
Another use case is the passing of data, such as TLS certificates, to an application by using the --from-file=file option. This exposes a sensitive file to an application. The pod definition can reference the secret, which creates the secret as files in a volume mounted on one or more of the application containers.
CONFIGURATION MAP
Or in YAML:
Populate the APISERVER environment variable inside a pod definition from the above configuration map:
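A sketch (the configuration map name and key are assumptions, since the original example is not shown):

```yaml
spec:
  containers:
  - name: myapp
    env:
    - name: APISERVER
      valueFrom:
        configMapKeyRef:
          name: app-config        # assumed config map name
          key: apiserver.url      # assumed key
```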
Encrypting Secrets in Etcd
On master nodes, define the experimental-encryption-provider-config in the /etc/origin/master/master-config.yaml file:
Create the encryption-config.yaml file: vim /etc/origin/master/encryption-config.yaml
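A minimal sketch (the aescbc key must be a base64-encoded 32-byte value; the placeholder below is illustrative only):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {}
```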
Preparing Secrets for Accessing Authenticated Registry
With Quay, you can create robot accounts (tokens) and grant them access to the repositories in an organization. Quay can generate a YAML Kubernetes resource file that you can also use with OpenShift (oc create -f ~/Downloads/myorg-openshift-secret.yml).
Configuring Project Service Account for Image PUSHING
The build process uses the OpenShift builder service account in your project. For the builder service account to automatically use that secret for authentication, link it to the secret:
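For example (the secret name is illustrative):

```bash
oc secrets link builder registry-credentials
```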
Configuring Project Service Account for Image PULLING
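Similarly, link the secret to the default service account for pulling images (the secret name is illustrative):

```bash
oc secrets link default registry-credentials --for=pull
```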
Configuring OpenShift for Accepting Certificates Signed by a Private CA
Install the CA's certificate on each node, under the /etc/docker/certs.d/<registry_full_host_name>/ directory:
And on the master you need to do:
Or:
LIMITING REGISTRIES, PROJECTS, AND IMAGES
The system uses the default entry when no other rule matches.
insecureAcceptAnything = accept any image
reject = refuse all images (usually set this requirement in the default entry and add specific rules to allow your images)
signedBy = accept signed images (provide additional parameters such as the public GPG key to use to verify signatures)
Using wildcards or partial names does NOT work!
Under the transports section, the file groups registries by type: docker (Registry v2 API), atomic (OCR), docker-daemon (local daemon storage):
Another example:
Configuring Signature Transports
That above configuration ("type", "keyType", "keyPath") is not enough; you also need to indicate the URL to the web server that stores the detached image signatures. To declare that URL, create a file under /etc/containers/registries.d/ such as:
For the OpenShift Container Registry, that you define with the 'atomic' transport type, you do not need to perform this extra configuration. The OCR has API extensions to store the signatures, and the atomic transport type consumes them.
USING DEPLOYMENT TRIGGERS
For example, if the deployment configuration's triggers column shows config,image(redis:latest), there are two types of triggers:
A configuration change trigger causes a new deployment to be created any time you update the deployment configuration object itself.
An image change trigger causes OpenShift to redeploy the application each time a new version of the redis:latest image is available in the registry
When you create an application with the oc new-app command, or from the web interface, OpenShift creates a deployment configuration object with the above two triggers already defined.
This is how it looks in YAML:
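A sketch of the triggers section of the deployment configuration:

```yaml
triggers:
- type: ConfigChange
- type: ImageChange
  imageChangeParams:
    automatic: true
    containerNames:
    - redis
    from:
      kind: ImageStreamTag
      name: redis:latest
```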
CUSTOMIZING OPENSHIFT PIPELINES
Clone a Git repository and execute a Maven build and installation:
Submit the code for analysis to a SonarQube instance:
Input command asks the user for confirmation:
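For example:

```groovy
stage('Approval') {
    steps {
        input message: 'Deploy the new image to production?'
    }
}
```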
A pipeline can also make calls to OpenShift - this rolls out the deployment of the application's latest image in the bookstore-qa project:
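The original step is not reproduced; a minimal sketch assuming a DeploymentConfig named bookstore and an agent with the oc client available (the original may have used the OpenShift Pipeline plugin DSL instead):

```groovy
stage('Deploy to QA') {
    steps {
        sh 'oc rollout latest dc/bookstore -n bookstore-qa'   // dc name is an assumption
    }
}
```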
LAB 5.1
SECURITY CONTEXT CONSTRAINTS (SCCs)
SCCs define conditions that a pod must satisfy in order to be created
Similar to policies, which enforce certain actions or prevent others from a service or a user (or service account)
By default resources get the restricted SCC (no root, mknod, setuid). Create your own SCC rather than modifying a predefined SCC.
Control:
Privileged mode - running privileged containers should be avoided at all costs
Privileges escalation - on/off privileges escalation inside a container
Linux capabilities - Linux capabilities to and from your containers (e.g. KILL)
Seccomp profiles - allow or block certain system calls (e.g. chown)
Volume types - permit or prevent certain volume types (e.g. emptyDir)
System control (Sysctl) settings - modify kernel parameters at runtime
Host resources - permit or prevent a pod from accessing the following host resources: IPC namespaces, host networks, host ports, and host PID namespaces
Read-only root file system - forces users to mount a volume if they need to store data
User and group IDs - restricting users to a certain set of UIDs or GIDs. Each project gets assigned its own range, as defined by project annotations such as openshift.io/sa.scc.uid-range=1000190000/10000 and openshift.io/sa.scc.supplemental-groups=1000190000/10000 (the number after the / is the count of allowed values, e.g. 1000190000 up to 1000200000)
SELinux labels - define an SELinux label for the pods
File system groups - allows you to define supplemental groups for the user, which is usually required for accessing a block device
Introducing SCC Strategies
Categories:
Boolean (Allow Privileged: true)
Allowable set (Required Drop Capabilities: KILL, MKNOD, SETUID, SETGID)
Controlled by a strategy. SCC strategies (for example the Run As User strategy):
1. RunAsAny - any ID defined in the pod definition (or image) is allowed (security issue); no SELinux labels
2. MustRunAsRange - the project or the pod must provide a range within an allowable set; the lowest value is the default
3. MustRunAs - the project or the pod must provide a single value, for example an SELinux context
Managing Supplemental Groups - shared storage example (e.g. NFS)
The openshift.io/sa.scc.supplemental-groups annotation is used by OpenShift to determine the range for supplemental groups.
I. One way to allow access to the NFS share is to be explicit in the pod definition by defining a supplemental group that all containers inherit. All containers that are created in the project are then members of the group 100099, which grants access to the volume, regardless of the container's user ID:
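A sketch of the pod-level securityContext (the GID 100099 comes from the text):

```yaml
spec:
  securityContext:
    supplementalGroups: [100099]
```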
II. Another solution is the creation of a custom SCC that defines a range for the group IDs, enforces the usage of a value inside the range, and allows the GID 100099:
Managing File System Groups - block storage example (e.g. iSCSI)
Unlike shared storage, block storage is taken over by a pod, which means that the user and group IDs supplied in the pod definition are applied to the physical block device. If the pod uses a restricted SCC that defines a fsGroup with a strategy of MustRunAs, then the pod will fail to run. OpenShift doesn't allocate any GID to block storage, so if the pod definition doesn't explicitly set fsGroup and SCC uses RunAsAny, permission may still be denied! Define a file system group in the pod definition:
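For example (the GID is illustrative):

```yaml
spec:
  securityContext:
    fsGroup: 100099
```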
Managing SELinux context with SCCs
The restricted SCC defines a strategy of MustRunAs; the project must define the options, such as user, role, type, and level, otherwise the pod will not be created
At creation time, the container runtime (which itself runs as container_runtime_t) assigns an SELinux type to the containers' main process (container_t by default)
Define the values for the SELinux context in SCC and relationship with the project:
If the pod needs to access a volume, the same categories must be defined for the volume. Define the SELinux context for a pod:
MANAGING SCCS
Custom SCC (only cluster admin can create it):
Managing Service Accounts for SCCs
Service accounts can be members of an SCC, similarly to users. All resources created by a service account then inherit the restrictions of that SCC. By default, pods run with the default service account, unless you specify a different service account. All authenticated users are automatically added to the system:authenticated group. As such, all authenticated users inherit the restricted SCC:
If a container requires elevated privileges or special privileges, create a new service account and make it member of an existing SCC, or create your own SCC and make the service account member of that SCC. Every service account has an associated user name, so it can be added to any specific SCC.
Create a custom service account and make it a member of the anyuid SCC, which allows pods to use any UID:
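For example (service account and project names are illustrative):

```bash
oc create serviceaccount privileged-sa -n myproject
oc adm policy add-scc-to-user anyuid -z privileged-sa -n myproject
```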
LAB 5.2
FINAL LAB 5
6. Managing Secure Platform Orchestration
MANAGING APPLICATION HEALTH
Liveness Probe - is the pod healthy?
Readiness Probe - is the pod ready to serve requests?
MANAGING APPLICATION SCHEDULING
Scheduler filters the list of running nodes by the availability of node resources, such as host ports or memory
A common use for affinity rules is to schedule related pods to be close to each other for performance reasons.
A common use case for anti-affinity rules is to schedule related pods not too close to each other for high availability reasons.
Rules can be: mandatory (required) or best-effort (preferred)
Define 8 nodes, two regions, us-west and us-east, and a set of two zones in each region:
Rule that requires the pod be placed on a node with a label whose key is compute-CA-NorthSouth and whose value is either compute-CA-North or compute-CA-South:
oc label node node9 compute-CA-NorthSouth=compute-CA-North
Node selector can be part of pod definition or deployment config (the below triggers new deployment):
Same in YAML:
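A sketch of the relevant part of the deployment configuration (reusing the region label from above; key and value are illustrative):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        region: us-west
```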
Node maintenance/availability
MANAGING RESOURCE USAGE
Define limits in a deployment configuration instead of a pod definition
Resource requests - indicate that a pod cannot run with less than the specified amount of resources (essentially requirements)
Resource limits - prevent a pod from using up all compute resources from a node
Managing Quotas
ResourceQuota resource object specifies hard resource usage limits for a project; all attributes of a quota are optional, meaning that any resource that is not restricted by a quota can be consumed without bounds (you can restrict e.g. pods, rc, svc, secrets, pvcs, CPU, memory, storage).
ClusterResourceQuota resource is created at the cluster level, uses openshift.io/requester annotation:
ResourceQuota resource object that defines limits for CPU and memory:
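For example (all values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    pods: "10"
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
```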
Managing Limit Ranges
LimitRange resource, also called a limit, defines the default, minimum, and maximum values for compute resource requests and limits (also storage - default, min, max capacity requested by image, is, pvc) for a single pod or for a single container defined inside the project. A resource request or limit for a pod is the sum of its containers.
To understand the difference between a limit range and a resource quota resource, consider that a limit range defines valid ranges and default values for a single pod, while a resource quota defines only maximum values for the sum of all pods in a project.
The following listing shows a limit range defined using YAML syntax:
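The original listing is not reproduced; a sketch with illustrative values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: project-limits
spec:
  limits:
  - type: Container
    default:            # default limit if the container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # default request if the container sets none
      cpu: 100m
      memory: 256Mi
    max:
      cpu: "1"
      memory: 1Gi
    min:
      cpu: 50m
      memory: 32Mi
  - type: Pod
    max:
      cpu: "2"
      memory: 2Gi
```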
After creating a limit range in a project, all resource create requests are evaluated against each limit range resource in the project. If the new resource violates the minimum or maximum constraint enumerated by any limit, OpenShift rejects the resource. If the new resource does not set an explicit value, and the constraint supports a default value, then the default value is applied to the new resource as its usage value.
LAB 6.1
You can inspect the RESTful API calls:
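One way is to raise the oc log level, which prints the underlying REST requests and responses:

```bash
oc get pods --loglevel=8
```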
LAB 6.2
OPENSHIFT SECURITY MODEL
Infrastructure components, such as application nodes, use client certificates that OpenShift generates. Infrastructure components that run in containers use a token associated with their service account to connect to the API.
OpenShift Container Platform creates the PKI and generates the certificates at installation time. OpenShift uses an internal CA (openshift-signer) to generate and sign all the certificates that are listed in the master-config.yaml configuration file. This can be overridden in Ansible:
To override names:
On the master server, there are ~20 certificates and keys that are generated at installation time:
LAB 6.3
FINAL LAB 6
7. Providing Secure Network I/O
ISTIO
Sidecars sit alongside microservices and route requests to other proxies. These components form a mesh network.
Definition of a VirtualService:
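The original listing is not reproduced; a minimal sketch (host, subset, and resource names are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
```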
The DestinationRule associated with this virtual service:
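A matching sketch, declaring the subsets the virtual service can route to:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```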
MANAGING SECURE TRAFFIC IN OPENSHIFT CONTAINER PLATFORM
OpenShift can manage certificates by generating an X.509 certificate and a key pair as a secret in your application namespaces. Certificates are valid for the internal DNS name service_name.namespace.svc.
The following Red Hat Single Sign-on template shows how the service resource defines the annotation service.alpha.openshift.io/serving-cert-secret-name for generating the certificate with a value of sso-x509-https-secret.
The pod mounts the volume that contains this certificate in /etc/x509/https, as referenced by secretName: sso-x509-https-secret in the volumes section:
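A sketch of the relevant excerpts (service and container names are illustrative; the annotation, mount path, and secret name come from the text):

```yaml
# Service requesting a generated certificate (excerpt)
apiVersion: v1
kind: Service
metadata:
  name: sso
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: sso-x509-https-secret
spec:
  ports:
  - name: https
    port: 8443
---
# Pod template excerpt mounting the generated secret
spec:
  containers:
  - name: sso
    volumeMounts:
    - name: sso-x509-https-volume
      mountPath: /etc/x509/https
      readOnly: true
  volumes:
  - name: sso-x509-https-volume
    secret:
      secretName: sso-x509-https-secret
```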
Service serving certificates allow a pod to mount the secret and use them accordingly. The certificate and key are in PEM format, and stored as tls.crt and tls.key.
OpenShift automatically replaces the certificate when it gets close to expiration.
Pods can use these security certificates by reading the CA bundle located at /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt, which is automatically exposed inside pods.
If the service certificate generation fails, force certificate regeneration by removing the old secret, and clearing the following two annotations on the service:
The service serving certificates are generated on-demand, and thus are different from those used by OpenShift for node-to-node or node-to-master communication.
Managing Network Policies in OpenShift
3 SDNs:
ovs-subnet - flat network that spreads across all the cluster nodes and connects all the pods
ovs-multitenant - isolates each OpenShift project. By default, the pods in a project cannot access pods in other projects. The command oc adm pod-network join-projects --to=projectA projectB allows projectA to access pods and services in projectB, and vice versa - this gives access to all pods and services in the projects. oc adm pod-network isolate-projects <project1> <project2> - nothing can access anything between the projects
ovs-networkpolicy - to use network policies, you need to switch from the default SDN provider to the redhat/openshift-ovs-networkpolicy provider. It allows you to create tailored policies between projects to make sure users can only access what they should (which conforms to the least privilege approach). By default, without any network policy resources defined, pods in a project can access any other pod.
To change, edit /etc/origin/master/master-config.yaml and /etc/origin/node/node-config.yaml:
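In both files the plug-in is set under networkConfig (a sketch; other fields omitted):

```yaml
networkConfig:
  networkPluginName: redhat/openshift-ovs-networkpolicy
```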
Example: Both networks are separate projects - The following network policy, which applies to all pods in network-A, allows traffic from the pods in network-B whose label is role="back-end", but blocks all other pods:
Example: Both networks are separate projects - The following network policy, which applies to network-B, allows traffic from all the pods in network-A. This policy is less restrictive than the network-A policy, because it does not restrict traffic on any pods on the network-A project:
The following excerpt shows how to allow external users to access an application whose labels match product-catalog over a TCP connection on port 8080:
The following network policy allows traffic coming from pods that match the emails label to access a database whose label is db:
You can also define a default policy for your project. An empty pod selector means that this policy applies to all pods in this project. The following default policy blocks all traffic unless you define an explicit policy that overrides this default behavior:
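A sketch of such a deny-all default policy:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector: {}        # empty selector: applies to all pods in the project
```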
To manage network communication between two projects, assign a label to the project that needs access to another project:
The following network policy, which applies to a back-end project, allows any pods in the front-end project to access the pods labeled as app=user-registration through port 8080, in this back-end project:
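The original listing is not reproduced; a sketch assuming the front-end project was labeled name=front-end as described above:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-frontend-to-user-registration
spec:
  podSelector:
    matchLabels:
      app: user-registration
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: front-end
    ports:
    - protocol: TCP
      port: 8080
```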
Node & Pod network
OpenShift configures each cluster node with an Open vSwitch bridge named br0. For each pod, OpenShift creates a veth device and connects one end to the eth0 interface inside the pod and the other end to the br0 bridge:
The tun0 interface on the node is an Open vSwitch port on the br0 bridge; it is used for external cluster access
OpenShift uses the vxlan_sys_4789 interface on the node, or vxlan0 in br0, for building the cluster overlay network between nodes. Communications between pods on different nodes go through this interface.
CONTROLLING EGRESS TRAFFIC
By default, OpenShift allows egress traffic with no restrictions. You can control traffic with egress firewalls, egress routers, and static IPs. OpenShift checks rules in order and allows traffic if no rule matches.
Egress FWs
This object allows the egress traffic to the 192.168.12.0/24 network, and to the db-srv.example.com and analytics.example.com systems. The last rule denies everything else. The rules only apply to the egress traffic and do not affect inter-pod communication:
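A sketch of such an EgressNetworkPolicy (rules are evaluated in order; the apiVersion may be v1 on older releases):

```yaml
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default-egress
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.168.12.0/24
  - type: Allow
    to:
      dnsName: db-srv.example.com
  - type: Allow
    to:
      dnsName: analytics.example.com
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
```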
Egress Routers
3 modes:
Redirect - image openshift3/ose-pod (TCP and UDP)
HTTP proxy - image openshift3/ose-egress-http-proxy (HTTP and HTTPS)
DNS proxy - image openshift3/ose-egress-dns-proxy (TCP)
Egress routers can be used with any of the three SDN plug-ins, but the underlying hosting platform may need to be reconfigured. They present a unique, identifiable source IP address to the firewall and the external service.
An egress router is a particular pod running in your project with two interfaces (eth0, macvlan0). It acts as a proxy between your pods and the external service.
macvlan0 interfaces are special devices that directly expose node interfaces to the container and have a MAC address seen by the underlying network.
In front of each egress router, you need to create an OpenShift service object. You use that service host name inside your application to access the external service through the router.
a/ Example - redirect mode (can only be created by cluster admin & application may need reconfiguration to access external service through egress router):
b/ Example - HTTP proxy mode:
c/ Example - DNS proxy mode:
Enabling Static IP Addresses for External Access
You can define a static IP address at the project level, in the NetNamespace object. With such a configuration, all the egress traffic from the pods in the project originates from that IP address. OpenShift must use the ovs-networkpolicy SDN plug-in.
OpenShift automatically creates one NetNamespace object per project. First, associate IP with project and then node:
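For example (project, node, and IP address are illustrative):

```bash
# 1. Assign the egress IP to the project
oc patch netnamespace myproject -p '{"egressIPs": ["192.168.12.99"]}'

# 2. Assign the same IP to the node that will host it
oc patch hostsubnet node1.lab.example.com -p '{"egressIPs": ["192.168.12.99"]}'
```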
When using the oc patch command to add a new address to a HostSubnet object that already has egress IP addresses defined for other projects, you must also specify those addresses in the egressIPs array:
LAB 7.1
FINAL LAB 7
8. Providing Secure Storage I/O
CATEGORIZING STORAGE TYPES IN OPENSHIFT
Shared storage - GlusterFS, NFS, Ceph..
Block storage - EBS, GCE disk, iSCSI..
Accessing Files in a Shared Storage Type in OpenShift
If you need to access the same share from multiple pods, then you must configure each pod to use a default GID and define the group ownership of the share with a known GID:
To enforce that your pods use a group, you must create a service account in each project. Each service account must be assigned to the same security context constraint (SCC), and the SCC must restrict the limitations to a specific GID. Additionally, because the built-in SCC takes precedence over a custom one, you must set a higher priority in the custom SCC (kind: SecurityContextConstraints, priority: XX):
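A sketch of such an SCC, showing only the fields discussed here (name, priority value, and GID range are illustrative):

```yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: nfs-scc
priority: 10                       # higher than the built-in restricted SCC
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: MustRunAs
  ranges:
  - min: 100000
    max: 100100
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
```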
Accessing Files in a Block Storage Type in OpenShift
Any OpenShift cluster can access block storage and even share the contents among pods in the same project. The first pod takes over ownership of the block storage, changing the GID and UID of that share. If any other pod running in the same project tries to access the same persistent volume bound to the block storage, the deployment fails due to lack of permissions. To solve this problem, you must create a security context constraint that configures the fsGroup setting and allows any pod to access the same persistent volume.
LAB 8.1
FINAL LAB 8
9. Configuring Web Application Single Sign-on
Security Assertion Markup Language (SAML) 2.0
OpenID Connect
JWT
Describing the OpenID Connect Authorization Code Flow
The application redirects to the SSO server, which presents a login screen and validates the user's credentials.
On successful authentication, the SSO redirects back to the application providing a 'code'.
The application uses the code to request an access token from SSO server.
The SSO server returns an access token that the application uses to authorize end user's requests and to submit requests to other applications that are clients of the same SSO realm.
CONFIGURING KEYCLOAK ADAPTERS FOR SINGLE SIGN-ON
The core technology of Red Hat's SSO solution is the Keycloak open source project.
DESCRIBING SSO CLIENT ACCESS TYPES
'client protocol' defines whether the application uses SAML 2.0 or OpenID Connect
'access type' defines whether the application is required to authenticate itself or not
'valid Redirect URIs' protects the SSO server from sending tokens to applications other than the ones that initiated an authentication request
LAB 9.1
FINAL COMPREHENSIVE LAB1 - SINGLE CONTAINER APP
FINAL COMPREHENSIVE LAB2 - MULTI-CONTAINER APPS
APPENDIX
To help create objects:
Table of important files:
| Path | Purpose | Location |
| --- | --- | --- |
| /etc/origin/master/master-config.yml | master config | masters |
| /etc/origin/node/node-config.yml | node config | nodes |
| /etc/containers/registries.d/[REGISTRY].yml | where to store new signatures and retrieve existing ones | everywhere |
| /etc/containers/policy.json | what registries are allowed | everywhere |
| /etc/docker/certs.d/[URL]/ca.crt | private CAs on each node | everywhere |
| /etc/pki/ca-trust/source/anchors | automatically trusted CAs | masters |
| /etc/ipa/ca.crt | root CA of the IdM server | IdM |
| /var/lib/atomic/sigstore | locally stored image signatures | workstation |
Note: To generate a beautiful PDF file, install latex and pandoc: sudo yum install pandoc pandoc-citeproc texlive
And then use pandoc v1.12.3.1 to output Github Markdown to the PDF: pandoc -f markdown_github -t latex -V geometry:margin=0.3in -o DO425.pdf DO425.md
For better results (pandoc text-wraps code blocks), you may want to try my listings-setup.tex: pandoc -f markdown_github --listings -H listings-setup.tex -V geometry:margin=0.3in -o DO425.pdf DO425.md