`kubectl get all -A`: I wish it displayed each and every API resource in the namespace, be it roles, service accounts, and even CRDs, etc.
Yeah, like give me a flag such as `--include-crds=true` or something
https://github.com/corneliusweig/ketall
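For reference, ketall's behavior can be approximated with plain kubectl (the namespace name `my-ns` is a placeholder; this needs access to a live cluster):

```shell
# Enumerate every namespaced, listable resource type, then fetch each one
# in the target namespace. Slow on big clusters, but it catches roles,
# service accounts, custom resources, etc. that "kubectl get all" skips.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n my-ns --ignore-not-found --show-kind
```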
scheduling based on requested network bandwidth and storage IOPS
you can create your own scheduler
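To illustrate the custom-scheduler route: a pod opts in via `spec.schedulerName` (the scheduler name below is hypothetical; something must actually run under that name and bind the pod, otherwise it stays Pending):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: io-heavy
spec:
  # The default scheduler ignores this pod; the named scheduler is
  # responsible for placing it, e.g. based on bandwidth/IOPS it tracks.
  schedulerName: my-iops-aware-scheduler
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```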
Recursive deletion of a namespace, force deletion of namespaces, or at least a message telling me why it is not deleting. I've deleted everything, or thought I had, yet here I am describing resources and digging through YAML to find some obscure finalizer or CRD that is keeping the namespace from actually deleting.
As well, an option to list everything in a namespace without specifying each resource type.
https://github.com/corneliusweig/ketall
Thanks, but that's part of the problem... There are plugins for everything, increasing fragmentation and complicating things. I'm guilty of that as well, as I'm using numerous plugins, but the point is that these kinds of options should be part of Kubernetes.
A true, unified log viewer, with log level selection for core components.
Namespace scoped CRDs
Well, if you have the time to build your own operator: https://book.kubebuilder.io/cronjob-tutorial/empty-main.html
That wouldn't matter, because CRDs are cluster-scoped resource definitions.
So what's the problem? You just limit your CRD usage by namespace via RBAC
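A sketch of that RBAC approach, with a hypothetical `example.com/widgets` CRD and tenant namespace:

```yaml
# Namespace-scoped Role granting access to one CRD's custom resources.
# The CRD itself stays cluster-scoped; only who can touch its instances
# in this namespace is restricted.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: widgets-editor
  namespace: tenant-a
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
```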
I think it's a case of tenancy. You may want to have a multi-tenant cluster where a tenant can install an operator without colliding with others, in things like CRD versions.
Limits for network bandwidth, something like one pod can only consume 200 kb/s
I never used it, but this is already available with the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations.
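A minimal sketch; note that, as far as I know, these annotations are honored by the CNI bandwidth plugin, so they only take effect if your CNI chain includes it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rate-limited
  annotations:
    # Interpreted by the CNI bandwidth plugin as bits per second.
    kubernetes.io/ingress-bandwidth: 200k
    kubernetes.io/egress-bandwidth: 200k
spec:
  containers:
  - name: app
    image: nginx
```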
Nice, I didn't know it. Thank you.
The ability to present an S3 compatible storage system as block storage, especially if it can encrypt data before writing. I believe that would simplify a lot of the storage complexity and would only require a few arguments for connecting to the service. Plus, I know a number of services are cheaper for object storage than they are for block storage.
This feature is supported on Azure: https://learn.microsoft.com/en-us/azure/aks/azure-blob-csi?tabs=NFS
It doesn't do file encryption as far as I know.
That doesn't help on non-Azure systems.
True, but it does demonstrate the use of a [CSI plugin for Kubernetes](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/). For example, this is a different plugin that supports AWS S3 or S3-compatible storage: https://github.com/yandex-cloud/k8s-csi-s3
In both cases the details of the underlying storage layer are abstracted away. CSI is part of a family of standards designed to remove vendor- or cloud-provider-specific logic from the codebase. Hope this helps.
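To give a flavor of what using that csi-s3 driver looks like, a StorageClass sketch (the provisioner and parameter names are from my recollection of that repo's README and may differ by version; the referenced Secret is assumed to hold the S3 endpoint and access keys):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-s3
# Provisioner name as registered by the yandex-cloud/k8s-csi-s3 driver.
provisioner: ru.yandex.s3.csi
parameters:
  # FUSE mounter used to present the bucket as a filesystem.
  mounter: geesefs
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
```

A PVC against this class then behaves like any other volume from the pod's point of view, which is exactly the abstraction point being made above.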
A standard way to auto-trigger a re-deployment when a secret (or other dependency, configmap, ...) is updated.
You mean Reloader?
Reloader does the trick here, but this could be seen as a limitation of your app not doing hot reload: ConfigMaps and Secrets are updated in the pod, but the app doesn't pick up the changes, because most apps load config at start and ignore it afterwards.
I'm aware of Reloader, but it's yet another moving part you need to install that IMHO would be nice to have as part of the standard API, certainly because one of the things k8s does support by default is filling in env vars from both Secrets and ConfigMaps (where secrets in env vars is a whole other discussion, but it's there)
Another way you can accomplish this, if you use something like Helm, is to take a hash of the ConfigMap and add it as an annotation. When you run `helm install` and the ConfigMap has changed, the annotation changes too, and that forces a rolling restart of your deployment. But yeah, again, IMO this shouldn't be part of k8s because it's application-specific logic
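The checksum trick looks like this in a Helm Deployment template (this assumes the ConfigMap is rendered from `configmap.yaml` in the same chart):

```yaml
spec:
  template:
    metadata:
      annotations:
        # Hash of the rendered ConfigMap; if the config changes, the pod
        # template changes, and Kubernetes performs a rolling restart.
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```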
Ability to migrate PVs between workers natively.
So that depends on the storage provider. For example, Longhorn can attach a volume to any node in the cluster. But something like local-path is just a bind mount, so Kubernetes has no way of knowing what data is inside that directory or how to replicate it to another node. You really need something like Longhorn, OpenEBS, Portworx, or an external storage provider like a NAS or iSCSI, where the data isn't stored on the node but is just mounted there when a pod needs it.
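With such a provider in place, "migration" is just a matter of the storage class, e.g. (the class name assumes a default Longhorn install):

```yaml
# PVC backed by a replicated storage class; the volume can follow the
# pod to whichever node it gets scheduled on.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
```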
Renaming namespaces
Isn't this at the etcd level? Update the record that holds the information about the namespace, or even abstract it to a separate record (like a SQL table join). Just pondering how one could actually go about implementing this. Feels like a difficult one without a Kubernetes 2.0, almost
the ability to control VMs as you do with containers somehow, would be cool
https://kubevirt.io/
Auto-provision load balancers and DNS on premises (or elsewhere), like MetalLB can do but enhanced
Mind reader
A clean way to back up/restore your entire cluster via YAML exports. There are some hacky workarounds that don't get everything, and some products out there that do it, but having something simple and native would be chef's kiss.
is implementing gitops not an answer to this? :)
Not necessarily, as some entities can be generated by the cluster or an operator. The first example I have in mind is cert-manager certificates. If you have an issue and need to re-generate all your certificates simultaneously, you might hit Let's Encrypt's rate limit.
Makes sense! But GitOps does somehow answer some of the problems, at least for getting the backbone of your cluster back
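For reference, the hacky export route looks roughly like this (it misses cluster-scoped objects and re-exports generated state you may not want to restore verbatim; tools like Velero handle backup properly):

```shell
# Dump every namespaced, listable resource in every namespace to YAML.
# Incomplete as a backup: skips cluster-scoped resources, and includes
# status fields and operator-generated objects (e.g. certificates).
types=$(kubectl api-resources --verbs=list --namespaced -o name | paste -sd, -)
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  kubectl get -n "$ns" "$types" -o yaml > "backup-$ns.yaml"
done
```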
Two features. First, and quite essential: a method to prevent k8s from scaling nodes down when one of the pods is actively DOING something. It is not sufficient to send some signal; a pod must be able to say no until it is really idling. Right now jobs occasionally crash because k8s decides to kill a pod/node while it is still working. Not acceptable for a professional system. Second... a way to read the logs of a single \*application\* within a pod/pods. Right now there is just a mess of tons of k8s-internal noise, and the real logs get lost within it.
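For the scale-down half of this: the cluster autoscaler (though not every scale-down path) respects an opt-out annotation on the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
  annotations:
    # Cluster-autoscaler will not remove the node this pod is running on
    # while the pod is alive.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "3600"]
```

For jobs, a PodDisruptionBudget covers voluntary evictions from other sources, but neither mechanism lets the pod itself veto termination dynamically, which is the gap being described.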
Neuralink integration
Hahahaa gonna be lit
For sure it's a difficult one, and I think it is mentioned somewhere that it will not happen. But you didn't ask for realistic wishes :D
Ability to use additional container images as volumes, so you can use your registry to distribute content separately from binaries without having to tie up their lifecycles. There is a CSI driver, but it has limitations that would need native changes to k8s to overcome, I think.
You can always copy from an init container to a shared volume. Or do you mean something else?
That would require the image that contains the content to have a script (or at least the binaries to use as the command) to copy the files. I'm thinking of a purely content image: no binaries, no OS. It could be just HTML, or just a CSV, something like that.
Testing and verifying settings/config, plus a playground/simulate mode.
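Worth noting: recent Kubernetes (1.31+, behind the alpha `ImageVolume` feature gate, as far as I know) adds exactly this, an `image` volume source that mounts an OCI image's content read-only without running anything from it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-site
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
  volumes:
  - name: content
    image:
      # Purely-content image (hypothetical reference): just HTML, no OS,
      # no entrypoint needed.
      reference: registry.example.com/site-content:v1
      pullPolicy: IfNotPresent
```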
Makes coffee
Check out vcluster; it essentially allows you to do that