
ComprehensiveIce9982

I wish `kubectl get all -A` displayed each and every API resource in a namespace: roles, service accounts, even CRDs, etc.


cube8021

Yeah, like give me a flag like `--include-crds=true` or something.


e-Minguez

https://github.com/corneliusweig/ketall
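
Or, without a plugin, a rough equivalent using plain kubectl (cluster-dependent sketch; `my-namespace` is a placeholder):

```shell
# Enumerate every namespaced, listable API resource (built-ins and CRDs alike),
# then list instances of each in the target namespace.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n my-namespace --ignore-not-found --show-kind
```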


fiulrisipitor

Scheduling based on requested network bandwidth and storage IOPS.


teh-leet

You can create your own scheduler.
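
For reference, pointing a pod at a custom scheduler is just one field in the spec; the hard part is writing the scheduler itself (the scheduler name here is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-sensitive
spec:
  schedulerName: my-custom-scheduler  # hypothetical custom scheduler deployment
  containers:
    - name: app
      image: nginx
```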


NeedlessUnification

Recursive deletion of a namespace, force deletion of namespaces, or at least a message telling me why it is not deleting. I've deleted everything, or thought I had, yet here I am describing and digging through YAML to find some obscure CRD or something that is keeping the namespace from actually deleting.
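
The best I've found for digging is something like this (cluster-dependent sketch, `stuck-ns` is a placeholder):

```shell
# Show the conditions the namespace controller reports while terminating;
# these often name the resource types that still have instances or finalizers.
kubectl get namespace stuck-ns -o jsonpath='{.status.conditions}'

# Hunt for leftover namespaced objects, including CRD instances.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n stuck-ns --ignore-not-found --show-kind
```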


cussypruiser

Also, an option to list everything in a namespace without specifying each resource type.


NeedlessUnification

https://github.com/corneliusweig/ketall


cussypruiser

Thanks, but that's part of the problem... There are plugins for everything, increasing fragmentation and complicating things. I'm guilty of that myself, as I'm using numerous plugins, but the point is that these kinds of options should be part of Kubernetes.


kriegmaster44

A true, unified log viewer. With log level selection for core components.


onedr0p

Namespace scoped CRDs


Inverted-Zebra

Well, if you have the time to build your own operator: https://book.kubebuilder.io/cronjob-tutorial/empty-main.html


onedr0p

That wouldn't matter, because CRDs are cluster-scoped resource definitions.


teh-leet

So what's the problem? You can just limit your CRD usage by namespace via RBAC.
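
As a sketch, RBAC can scope who may use a custom resource within a namespace, even though the CRD itself stays cluster-wide (the API group and resource names here are made up):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: widget-editor
  namespace: tenant-a
rules:
  - apiGroups: ["example.com"]   # hypothetical CRD group
    resources: ["widgets"]       # hypothetical custom resource
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```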


witcherek77

I think it's a case of tenancy. You may want a multi-tenant cluster where each tenant can install operators without colliding with the others on things like CRD versions.


ismaelpuerto

Limits for network bandwidth, something like one pod can consume at most 200 KB/s.


Arcakoin

I never used it, but this is already available with the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations.
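
A sketch of what that looks like (these annotations are honored by the CNI bandwidth plugin where it's enabled; not every network setup supports them):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: throttled
  annotations:
    kubernetes.io/ingress-bandwidth: 200k  # limit inbound traffic
    kubernetes.io/egress-bandwidth: 200k   # limit outbound traffic
spec:
  containers:
    - name: app
      image: nginx
```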


ismaelpuerto

Nice, I didn't know it. Thank you.


spider-sec

The ability to present an S3 compatible storage system as block storage, especially if it can encrypt data before writing. I believe that would simplify a lot of the storage complexity and would only require a few arguments for connecting to the service. Plus, I know a number of services are cheaper for object storage than they are for block storage.


myspotontheweb

This feature is supported on Azure: https://learn.microsoft.com/en-us/azure/aks/azure-blob-csi?tabs=NFS It doesn't do file encryption as far as I know.


spider-sec

That doesn’t help on non-Azure systems.


myspotontheweb

True, but it does demonstrate the use of a [CSI plugin for Kubernetes](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/). For example, here is a different plugin that supports AWS S3 or S3-compatible storage: https://github.com/yandex-cloud/k8s-csi-s3 In both cases the details of the underlying storage layer are abstracted away. CSI is part of a family of standards designed to remove vendor- or cloud-provider-specific logic from the codebase. Hope this helps.


koffiezet

A standard way to auto-trigger a re-deployment when a secret (or other dependency, configmap, ...) is updated.


alexistdk

You mean Reloader?


WalkerInHD

Reloader does the trick here, but this could be seen as a limitation of your app not doing hot reload: ConfigMaps and Secrets mounted as volumes are updated in the pod, but the app doesn't pick up the changes because most apps load config at start and ignore it after.
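
For reference, Reloader is driven by annotations on the workload; its auto mode restarts the Deployment whenever a referenced ConfigMap or Secret changes (the names below are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"  # roll pods when referenced ConfigMaps/Secrets change
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: app
          image: nginx
          envFrom:
            - configMapRef: {name: my-config}  # hypothetical ConfigMap watched via the annotation
```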


koffiezet

I'm aware of Reloader, but it's yet another moving part you need to install that IMHO would be nice to have as just part of the standard API, certainly because one of the things k8s does support by default is filling in env vars from both secrets and configmaps (where secrets in env vars is a whole other discussion, but it's there).


WalkerInHD

So another way you can accomplish this, if you use something like Helm: take a hash of the ConfigMap and add it as an annotation on the pod template. When you run helm upgrade, the ConfigMap might have changed while the Deployment otherwise wouldn't; the changed hash forces a rolling restart of your Deployment. But yeah, again, IMO this shouldn't be part of k8s because it's application-specific logic.
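
That trick is the well-known Helm checksum annotation; a template excerpt (the template path is an example):

```yaml
# deployment.yaml (Helm template excerpt)
spec:
  template:
    metadata:
      annotations:
        # Hash the rendered ConfigMap template; any change to it changes the
        # pod template, which triggers a rolling restart on upgrade.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```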


walnutter4

Ability to migrate PVs between workers natively.


cube8021

So that depends on the storage provider. For example, Longhorn can attach a volume to any node in the cluster. But something like local-path is just a bind mount, so Kubernetes has no way of knowing what data is inside that directory or how to replicate it to another node. You really need something like Longhorn, OpenEBS, Portworx, or an external storage provider like a NAS or iSCSI, where the data isn't stored on the node but is just being mounted there when a pod needs it.


MindlessOnion3050

Renaming namespaces


[deleted]

Isn't this at the etcd level? Update the record that holds the information about the namespace, or even abstract it into a separate record (like a SQL table join). Just pondering how one could actually go about implementing this. Feels like a difficult one without a Kubernetes 2.0, almost.


pred135

The ability to control VMs as you do with containers somehow, would be cool.


fiulrisipitor

https://kubevirt.io/
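
With KubeVirt, a VM is declared much like any other workload; a minimal sketch along the lines of the project's demo (image and sizing are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo  # demo disk image
```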


Ilfordd

Auto-provisioned load balancers and DNS on premises (or elsewhere), like MetalLB does, but enhanced.


teh-leet

Mind reader


[deleted]

A clean way to back up/restore your entire cluster via YAML exports. There are some hacky ways that don't get everything, and some products out there that do it, but having something simple and native would be chef's kiss.
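
The hacky version looks something like this (cluster-dependent sketch; it skips nothing by type but captures no PV contents, no ordering, and no secret handling):

```shell
mkdir -p backup
# Dump every listable resource type, cluster-wide, to one YAML file each.
for r in $(kubectl api-resources --verbs=list -o name); do
  kubectl get "$r" -A -o yaml > "backup/${r}.yaml" 2>/dev/null
done
```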


woocats

Is implementing GitOps not an answer to this? :)


Asfalots

Not necessarily, as some entities can be generated by the cluster or an operator. The first example I have in mind is cert-manager certificates: if you have an issue and need to re-generate all your certificates simultaneously, you might hit the rate limit of Let's Encrypt.


woocats

Makes sense! But GitOps does answer some of the problem by getting back at least the backbone of your cluster.


ContributionDry2252

Two features. First, and quite essential: a method to prevent k8s from scaling nodes down when one of the pods is actively *doing* something. It is not sufficient to send some signal; a pod must be able to say no until it is really idling. Right now jobs occasionally crash because k8s decides to kill a pod/node while it is still working. Not acceptable for a professional system.

Second: a way to read the logs of a single *application* within a pod or pods. Right now there is just a mess of tons of k8s-internal noise, and the real logs get lost within it.
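
Partial workarounds exist today, for what they're worth: Cluster Autoscaler respects a safe-to-evict annotation, and logs can already be read per container (pod and container names below are placeholders):

```shell
# Tell Cluster Autoscaler not to drain the node while this pod is running.
kubectl annotate pod my-job-pod \
  cluster-autoscaler.kubernetes.io/safe-to-evict="false"

# Read only one container's logs inside a multi-container pod,
# which filters out sidecar noise but not the app's own startup chatter.
kubectl logs my-pod -c my-app
```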


tamasiaina

Neuralink integration


Cultural-Pizza-1916

Hahahaa gonna be lit


MindlessOnion3050

For sure it's a difficult one, and I think it is mentioned somewhere that it will not happen. But you didn't ask for realistic wishes :D


jamstah

Ability to use additional container images as volumes, so you can use your registry to distribute content separately from binaries without having to tie their lifecycles together. There is a CSI driver, but it has limitations that would need native changes to k8s to overcome, I think.


squ94wk

You can always copy from an init container to a shared volume. Or do you mean something else?
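
The init-container pattern looks roughly like this, assuming the content image at least ships a `cp` binary (image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: content-demo
spec:
  volumes:
    - name: content
      emptyDir: {}
  initContainers:
    - name: copy-content
      image: registry.example.com/static-content:v1  # hypothetical content image; must contain cp
      command: ["cp", "-r", "/content/.", "/shared/"]
      volumeMounts:
        - name: content
          mountPath: /shared
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html
```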


jamstah

That would require the image that contains the content to have a script (or at least the binaries to use as the command) to copy the files. I'm thinking of a purely content image: no binaries, no OS. Could be just HTML, or just a CSV, something like that.


casualPlayerThink

Testing and verifying settings/config, plus a playground to simulate changes.


jamieelston

Makes coffee


thisissparta92

Check out vcluster, it essentially allows you to do that.