View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update
---|---|---|---|---|---
0017813 | CentOS-8 | kernel | public | 2020-10-24 01:21 | 2020-10-28 16:23

Field | Value
---|---
Reporter | kallisti5
Assigned To |
Priority | normal
Severity | tweak
Reproducibility | always
Status | resolved
Resolution | fixed
Product Version | 8.2.2004
Summary | 0017813: Add CONFIG_CFS_BANDWIDTH to aarch64 kernel builds
Description | This is a sister bug to https://github.com/raspberrypi/linux/issues/3387. I originally reported this to CRI-O here: https://github.com/cri-o/cri-o/issues/4307. Essentially, without CONFIG_CFS_BANDWIDTH enabled in the kernel, Docker, CRI-O, and Kubernetes don't function on CentOS 8 aarch64 images because '/sys/fs/cgroup/cpu/cpu.cfs_quota_us' is missing (see the verification sketch below the table).
Additional Information | Kernel: Linux chaos 5.4.60-v8.1.el8 #1 SMP PREEMPT Sun Aug 23 02:58:38 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux. Installation image used: https://people.centos.org/pgreco/CentOS-Userland-8-stream-aarch64-RaspberryPI-Minimal-4/
Tags | aarch64, ARM
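
For anyone hitting the same symptom, here is a minimal verification sketch (assuming a CentOS-style config file under /boot and the cgroup v1 layout these images use):

```
# Check whether the running kernel was built with CFS bandwidth control.
# CentOS installs the build config under /boot; some other kernels expose
# it at /proc/config.gz instead.
grep CONFIG_CFS_BANDWIDTH "/boot/config-$(uname -r)"

# With CONFIG_CFS_BANDWIDTH=y, the cgroup v1 cpu controller exposes the
# quota/period files that Docker/CRI-O/Kubernetes write CPU limits into:
ls /sys/fs/cgroup/cpu/cpu.cfs_quota_us /sys/fs/cgroup/cpu/cpu.cfs_period_us

# A value of -1 just means "no quota set"; the bug here is that these files
# are absent entirely when the option is compiled out.
cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
```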
pgreco (Note 0037819) 2020-10-24 17:14

I've been uploading updated kernels to https://people.centos.org/pgreco/rpi_aarch64_el8/, so if you add it as a repo you can update to 5.4.72, which should already include that change. Let me know how that goes.
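
For reference, enabling that repo might look like the following; the repo id, file name, gpgcheck setting, and kernel package name are illustrative assumptions, with only the baseurl taken from the note above:

```
# Hypothetical repo definition for pgreco's aarch64 EL8 kernel builds.
cat > /etc/yum.repos.d/pgreco-rpi-kernel.repo <<'EOF'
[pgreco-rpi-kernel]
name=pgreco Raspberry Pi aarch64 EL8 kernels
baseurl=https://people.centos.org/pgreco/rpi_aarch64_el8/
enabled=1
gpgcheck=0
EOF

# Pull in the updated kernel (5.4.72 at the time of this note), then boot it.
dnf update -y kernel
reboot
```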
kallisti5 (Note 0037820) 2020-10-24 21:59

Nice!!! Yup. That fixed things.

```
[root@chaos ~]# uname -a
Linux chaos 5.4.72-v8.1.el8 #1 SMP PREEMPT Thu Oct 22 11:53:35 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
```

That confirms you can run Kubernetes via cri-o on the Raspberry Pi 4 (4GiB model) under CentOS :-)

```
[root@chaos ~]# kubeadm init --ignore-preflight-errors=NumCPU --config /tmp/kubeadm-init-args.conf
W1024 21:54:53.191074 1475 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [chaos kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.70]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [chaos localhost] and IPs [192.168.1.70 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [chaos localhost] and IPs [192.168.1.70 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.005165 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node chaos as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node chaos as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: leule6.h6twn26dy3nz1qos
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
```
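
Incidentally, CFS bandwidth control is exactly what Kubernetes CPU limits translate into on the node. A hedged sketch of confirming the quota actually lands in the cgroup; the pod name, image, and 500m limit are illustrative, and the cgroup path glob assumes cgroup v1 (it covers both the cgroupfs and systemd driver layouts):

```
# Illustrative pod with a CPU limit; 500m should translate to
# cpu.cfs_quota_us=50000 against the default cpu.cfs_period_us=100000.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cfs-demo
spec:
  containers:
  - name: busy
    image: busybox
    command: ["sleep", "3600"]
    resources:
      limits:
        cpu: "500m"
EOF

# On the node, the kubelet creates a cpu cgroup for the pod under
# /sys/fs/cgroup/cpu/kubepods... Without CONFIG_CFS_BANDWIDTH these
# files never exist, and the container runtime errors out instead.
find /sys/fs/cgroup/cpu/kubepods* -name cpu.cfs_quota_us | head
```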
pgreco (Note 0037821) 2020-10-25 13:58

Glad that it worked, and thanks for reporting back!
Issue History

Date Modified | Username | Field | Change
---|---|---|---
2020-10-24 01:21 | kallisti5 | New Issue | |
2020-10-24 01:21 | kallisti5 | Tag Attached: ARM | |
2020-10-24 01:21 | kallisti5 | Tag Attached: aarch64 | |
2020-10-24 16:29 | toracat | Status | new => acknowledged |
2020-10-24 17:14 | pgreco | Note Added: 0037819 | |
2020-10-24 21:59 | kallisti5 | Note Added: 0037820 | |
2020-10-25 13:58 | pgreco | Status | acknowledged => closed |
2020-10-25 13:58 | pgreco | Resolution | open => fixed |
2020-10-25 13:58 | pgreco | Note Added: 0037821 | |
2020-10-28 16:23 | toracat | Status | closed => resolved |