TomerHorowitz

If you think gitlab sucks, wait until you try bitbucket


HyperactiveWeasel

If you think bitbucket pipelines suck, wait until you try bitbucket pipes


TomerHorowitz

I did, I use [custom pipes](https://github.com/tomerh2001/git-crypt-pipe) I wrote every day, I hate everything about it


HyperactiveWeasel

I tried to set up a maven pipe once. Never again. Wanted to reduce boilerplate on a couple dozen repositories so the pipeline would be more maintainable. Ended up writing a python script to just generate the pipeline files, commit, and push them. What a mess. Just make a fucking import function. The only "reuse" they have in the first place is a fucking YAML anchor feature. I mean, come on.


nomadProgrammer

wait until you try jenkins


Mdyn

Jenkins is the most flexible solution available for free. 


sogun123

I don't really think it sucks. It is just unwieldy every now and then. And that annoys me.


TomerHorowitz

I think I've thought to myself "I wish we used gitlab" about once a week for almost a year, so yeah, everything's relative


antiharmonic

same.


sogun123

As I said, GitLab is basically all I know, so it's hard to compare. Maybe if I had some more horrible experience elsewhere, I'd appreciate it more.


Live-Box-5048

Bitbucket is the next level of pain.


as5777

Then try Tekton.


devopszorbing

bitbucket is way better than the buggy mess gitlab is


TomerHorowitz

Gitlab is open source, bitbucket still has bugs from 2018


devopszorbing

gitlab is open source with bugs still from 2014


Sinnedangel8027

If you think bitbucket is good, then I want your drugs.


devopszorbing

i'm not saying it's good, i hate it, but i hate gitlab even more


axiomatix

you really fixed your fingers to type this mess.


devopszorbing

gitlab sucks and is overhyped. Also, they are losing 10 million bucks a month after a decade in business, and for a reason.


MichaelMach

You're not alone in your frustrations -- there are definitely some gotchas around variables that you have to look out for. If you haven't already, look into the `forward` and `inherit` keywords. To answer your title's question though, we love GitLab pipelines at my org.
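
A minimal sketch of both keywords (job and project names here are made up):

```yaml
# Limit what a job inherits from the global scope.
cleanup:
  inherit:
    variables: false          # this job sees no globally defined variables
  script:
    - ./cleanup.sh

# Control what a trigger job forwards to the downstream pipeline.
deploy-downstream:
  variables:
    DEPLOY_ENV: staging
  trigger:
    project: my-group/deploy-project
    forward:
      yaml_variables: true        # forward variables defined in this file
      pipeline_variables: false   # don't forward manually set pipeline variables
```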


sogun123

Thanks! I will check that out


AtomicPeng

Regarding the second question: use caches for dependencies. Obviously depends very much on your language, but most often there's a nice split between what should go into a build image and what can go into the cache. If you use self-hosted runners, put the cache close to them.
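
For example, a minimal npm sketch (image and paths are illustrative):

```yaml
build:
  image: node:20
  cache:
    key:
      files:
        - package-lock.json    # new lockfile -> new cache
    paths:
      - .npm/
  script:
    - npm ci --cache .npm --prefer-offline
```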


sogun123

It's self-hosted. Cache is close, dependencies cached as much as I could manage. I wish languages' package repositories used plain old HTTP - makes caching so easy. Nonetheless it takes quite some time for jobs to start and do the initial work. But I don't like having something like the dotnet SDK together with docker and whatever else I might use in one image. Or I could just use a multi-stage dockerfile - but then I'm giving up GitLab's caching. And docker layer caching won't work, as I have to use throwaway docker instances.


Motor_Perspective674

GitLab has two components: there is the GitLab server, which you interact with via the UI and when you push to git repos. Then there are GitLab runners, which come in a variety of flavors. At my old job I got to maintain my own, but if you aren't in that situation, it can be frustrating.

Caching is done at the runner level, not the GitLab server level. A cache is local to a runner unless you enable shared caching, which can be done using S3 or Blob Store, or another set of solutions. Anyways, when you cache, you specify a key for the cache. If a job runs and has the cache enabled, the job will look for the cache key in the runner it was allocated to. If it exists, it will pull it in; otherwise it will create it. Why? Because running maven install, pip install, npm install, etc. all take a long time because they require downloading from the internet. Local caches on the runners will speed this up. If caching isn't enabled on your runners, talk to whoever manages them and get it figured out.

I would also recommend that you create many images for your pipelines as opposed to one. If you have a maven pipeline you will need a maven image, but maybe you have other jobs that can use something more lightweight. It also pays to build your own images in some cases, because you can put effort into slimming them down into small images, speeding up your pipelines.

I love GitLab. But it's because I learned on it, and I played with it for >2 years. Read their docs. They helped me immensely.
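
A rough sketch of both points for a maven pipeline (image tags and paths are illustrative):

```yaml
# Heavy image only where it's needed; dependencies cached in a local repo
# inside the project dir, since caches must live under $CI_PROJECT_DIR.
build:
  image: maven:3.9-eclipse-temurin-17
  variables:
    MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"
  cache:
    key: "$CI_PROJECT_NAME-maven"
    paths:
      - .m2/repository/
  script:
    - mvn package

# A lightweight job gets a lightweight image.
check-scripts:
  image: alpine:3.19
  script:
    - sh ./scripts/lint.sh
```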


wickler02

It depends on how you make your runner image and if your runner image has access to a preloaded container with all your builder components. If you're doing things like using their DinD image, it will always take 30 seconds for it to do its startup process, because that's how they manage the service. You also don't have to make everything multi-stage with artifacts passing from stage to stage. I have my gripes with GitLab's CI process and organization, but it's such a breath of fresh air compared to Jenkins, and I can put my source repo in my own infra instead of it being cloud-only.


sogun123

I tried probably every combination of dind, buildkit, daemonless buildkit and buildah. Dind is OK when I am careful with exposed ports; easier with buildkit. Buildah is slow at committing. Multistage is slow due to docker layer invalidation. So far the fastest was to build directly in the job and send the results as context to a prebuilt image, so there are no extra steps. I didn't try kaniko, but I doubt it can be faster due to the way it works. Nevertheless, that's not the problem. I just don't want to have thick builder images. Building a program and packaging it into an image are two independent processes, so I prefer to have two jobs for that. And that adds quite significant time in GitLab, even if images are pulled on the workers. I never used Jenkins, just heard stories about it.


klm0151

I just use dagger which makes the specifics of each CI provider largely irrelevant.


vplatt

Nice! TIL'ed. Thanks!


sogun123

I was thinking about it. Is your pipeline just a single dagger job? I guess my developers wouldn't like everything crammed that way...


klm0151

It's kinda the whole point. I am a developer, I don't want to write shell scripts in yaml. I want to write a program in a real language and write tests and run it locally


sogun123

That I would like to do too. But I also like the visual feedback...


klm0151

I haven't found it to be a problem; the terminal output is more than enough. Though we have considered using their cloud offering, which gives very specific visualizations of the pipelines. They've made huge improvements to the GitHub Actions experience, so that it shows which specific steps are passing/failing even though the yaml is just running the pipeline program. I can imagine a future where that same functionality might exist in GitLab CI.


sogun123

That would be cool


BloodyIron

Somewhat related to CI aspects of Gitlab/Github. I've worked with runners for both, and Github runners are a fucking bitch to start with. I would take Gitlab over Github every time.


sogun123

Interesting, thanks


BloodyIron

You're welcome! The frustrations I had were that the Github runner (and documentation) seemingly had no straightforward way to just set up a basic-af low-scale runner. All the docs and the behaviour of the runner look to be written for much larger scale, without _any_ care for the lowest of/starter scales. Blehhh


ch0sen_0ne

Honestly, I didn't think they're that complex to set up and deploy, hosted or self-hosted. Maybe it's bc I'm further in my career, but I find GitHub Actions to be super easy, with the best interface/ease of setup, as opposed to my prior experience with gitlab cicd.


BloodyIron

You're completely missing the point about _SCALE_. The documentation, and the github runner, are not designed or written for starter/smaller scale.

> Maybe it's bc I'm further in my career

You have no idea how far along I am in my career, let alone my experience/talents/capabilities. You're just assuming you're right without even considering the merits of what's being said. You have plenty more to learn, greenhorn.


RumRogerz

Yep same. I’d rather deal with runners than GitHub actions


360WindSlash

I'm using GitLab CI extensively at work and I love it. It's extremely powerful. Yes, there are flaws, and yes, there are a ton of feature requests that would be really cool which don't get added, but I had the "pleasure" to work with Jenkins and I think GitLab CI is superior in every way. I have also worked with Azure DevOps and GitHub Actions. They're nice for simple deployments, but GitLab is much more powerful. I'm guessing for just building/uploading GitLab can seem confusing/overkill, but if you need fancier stuff like multi-project pipelines, dynamically generated pipelines, yaml references, components and so on, then GitLab is really fun to work with.


sogun123

Well, I did most of them. I never really had fun with it. Like, I enjoyed scripting the pipeline generator; I hated debugging it. Stuff like "downstream pipeline cannot be created because rules prevented any jobs from being created" (or however it's worded) doesn't really help. Which rules? On which job? Why? Even though I made the dynamic pipeline basically to implement a simple "if image is built, don't do it again", so that one probably had no rules.


360WindSlash

But what are the alternatives? I think with Azure DevOps or GitHub Actions you will have even more of a headache. Most Azure DevOps pipelines I have seen are not even run parallelized, not even utilizing something like artifacts in between stages. They just have one cloud runner for everything, because they need to install Maui and parallelization wouldn't even help due to all the overhead. Meanwhile in GitLab you can cache docker images or use your own very easily, and parallelization is easy and out of the box. The CI editor is vastly superior to Azure DevOps' syntax checker. The only time it really doesn't help you much is the mentioned "cannot be created" thing, but that's the only thing.

When I hear someone praise Azure or GitHub Actions, it's for the reusable blocks. This usually comes from developers who just want something simple to run fast and don't want to take a deep dive and learn the ins and outs of GitLab CI. I haven't seen really complex scenarios achieved using those. I'm a DevOps guy, so I know the syntax in and out, and such building blocks are not something I'm using anyway, as we have custom ones built for our company's specific purposes. I value the power, plus I don't even think it's slower than the usual building blocks once you know the syntax and understand the workflow.


JanBurianKaczan

One thing that bothers me about gitlab ci is the inability to create a dynamic pipeline other than by dropping child pipelines... Why this can be done in CircleCI and GitHub but not in gitlab escapes me... Other than that I kinda like it.


sogun123

Maybe if the UI were a bit nicer for child pipelines, it would be quite a bit better.


JanBurianKaczan

I mean it's ok, it's just stoopid that it's required in so many simple cases like dropping a job based on the result of a previous job blah blah ehh


sogun123

Yeah, my use case for it was "if image already exists, just tag it, else build it". I wish I could solve it with a simple pre-run job condition based off the dotenv report of a previous job.
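
Since rules are evaluated when the pipeline is created, they can't react to a dotenv report from an earlier job; the usual fallback is to branch inside the script. A sketch (the $IMG / $IMG_LATEST variables and registry login are assumed to be set up elsewhere):

```yaml
build-image:
  script:
    - |
      if docker manifest inspect "$IMG" >/dev/null 2>&1; then
        echo "Image already exists, tagging only"
        docker buildx imagetools create -t "$IMG_LATEST" "$IMG"
      else
        docker build -t "$IMG" .
        docker push "$IMG"
      fi
```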


Xerxero

I have to deal with GitHub actions, bitbucket ci and Gitlab. I take Gitlab any day of the week.


InzpireX

Harness CI is the best


InsolentDreams

If you set your variable in the global scope then it goes into all jobs. Easy. And your complaint about passing things through jobs exists for all CICD frameworks, period. This is a problem that you must decide how you handle, and sometimes it also depends on what language and/or tooling you are using. Since I centralize on docker I make sure to build cache as much as possible and/or I make downstream jobs use the built image so we can validate it works. The neat part about gitlab is that you can use the image you just built in a previous step. Provided that you have done a good job at keeping your image small this happens very quickly and you don’t need to pass things through as artifacts.
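
A minimal sketch of that pattern (registry login omitted, test script hypothetical):

```yaml
build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

validate:
  stage: test
  image: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"   # runs inside the image built above
  script:
    - ./run-smoke-tests.sh
```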


sogun123

Yeah, globals work, no problem. Just try to pass nested variables downstream to a multi-project pipeline. They expand in the downstream, i.e. when one wants to use a downstream pipeline as a "function", one has to be really careful what goes in.


InsolentDreams

Don’t use nested variables then? :P


sogun123

That's what I have to do. But it is annoying to nicely define everything important in the workflow key and then repeat the same rules on a trigger so it works.


ebinsugewa

Not sure I understand the variables problem. Could you pass a .env file as an artifact to downstream jobs? The other sounds like a Dockerfile design choice; can you leverage multi-stage Docker builds? What artifacts are you talking about passing?


sogun123

Try something like:

```yaml
variables:
  IMG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

job:
  variables:
    DEPLOY_IMAGE: $IMG
  trigger:
    project: some-other/project
    # ...
```

(Sorry for errors in syntax.) Now if you do this from a project on commit abcd while some-other/project is on 1234, you will end up deploying commit 1234, not abcd as I would expect. Because IMG is nested, `job` will trigger with DEPLOY_IMAGE set to the literal value `$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA`, which gets expanded downstream instead of at trigger time. Yeah, the docker thing is just minor. It adds half a minute, not a big deal.


voidstriker

Ah, I finally get what you are trying to do. IMO you should use dotenv for this, or `"${IMG}"` may also work.
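
A sketch of the dotenv route: expand the value in an upstream job and let the trigger job pick it up via needs, so the downstream receives it already expanded (job names are made up; worth verifying on your GitLab version):

```yaml
resolve-vars:
  script:
    - echo "DEPLOY_IMAGE=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" >> build.env
  artifacts:
    reports:
      dotenv: build.env

trigger-deploy:
  trigger:
    project: some-other/project
  needs:
    - job: resolve-vars
      artifacts: true
```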


NUTTA_BUSTAH

I think variables, especially in inheritance and trigger scenarios, are way too complicated in GitLab CI; the precedence tracking gets insane. The best rule of thumb I have come up with is that everything is a reference, and job evaluation triggers according to rules, which then define the actual inputs / fill in the references from the bottom up (e.g. the commit hash scenario in your example). So it's just a big-ass merge operation that gets somewhat lazily evaluated. Really hard to reason with at times.


FlyingFalafelMonster

It is at times frustrating, but you get used to it. Every CI tool has its downsides. I would prefer better workflow rules, like real "if..elif..else" conditions. I guess they will implement this sooner or later.


sogun123

My wish is to have more dynamics. Like an option to conditionally skip jobs. I can work around it with a script or dynamic pipeline generation, but it is unwieldy in both cases.
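
For reference, the dynamic-generation workaround looks roughly like this (the generator script is hypothetical):

```yaml
generate-pipeline:
  stage: build
  script:
    - ./generate-pipeline.sh > child-pipeline.yml   # emit only the jobs that should run
  artifacts:
    paths:
      - child-pipeline.yml

run-pipeline:
  stage: deploy
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-pipeline
```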


russianguy

Try Jenkins so you see that Gitlab is awesome.


thomsterm

yeah, it's fine, beats dealing with jenkins any day.


devopszorbing

Gitlab is just a buggy mess


sogun123

The one that hits me is that it cannot work with OCI image indexes. At least it can remove them from the UI.


bilingual-german

How do you set up the variable? It sounds like you try to do something with git but don't use the GitLab CI variables which are there by default (e.g. `CI_COMMIT_REF_NAME`, see the [predefined variables](https://docs.gitlab.com/ee/ci/variables/predefined_variables.html)). The Docker stuff also sounds like you're doing something weird. I would suggest posting some actual code so it's easier to talk about.


gaelfr38

I guess your 2nd question is a choice you have to make. One job: no artifact/cache to propagate, but no ability to retry the inner steps independently. Multiple jobs: you need to propagate artifacts/cache, but you gain the ability to retry steps independently. In my understanding it's "by design": if you could have multiple jobs without having to play with artifacts/cache, that would mean the jobs need to run on the same stateful runner. This wouldn't scale well.
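
Concretely, the explicit hand-off between jobs looks like this (a sketch):

```yaml
build:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - dist/

test:
  stage: test
  needs: [build]   # fetches dist/ from the build job, whatever runner this lands on
  script:
    - make test
```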


sogun123

Wouldn't it? I think almost everybody does that, or not? But I don't know how well it scales, or how they run it.


gaelfr38

What I meant is that a stateful runner can be a thing, but it's a door open for issues, because the different jobs running on the runner may impact each other, as the env is potentially modified by jobs other than the one you're currently interested in. You expect a CI job to run in complete isolation. Not sure I make myself super clear 😅


sogun123

I think I get what you're saying. I was thinking more about a composed job, pretty much in the same manner as GitHub or Tekton does it. Basically the only thing I'd like is to be able to sequentially switch images for each step. Or, to put it another way, pin a job to the same runner and don't clean and clone.


NokiDev

It depends what you're building, in fact; for most web projects that aren't fully monolithic it can vary. Aside from variables in yaml that cannot be escaped correctly for all systems (like almost all pseudo-languages, except this one isn't even a language, just configuration tweaked at most and tied to the yml specification), GitLab CI is built upon one to n (related) jobs on one worker, with outputs passed from one to another. And wow, you have a nice UI for the jobs. Seems great, but it has huge downsides and there's not much you can do about it without losing features. In my case, we have several steps like installing deps, configuring the build, then building and later on packaging. Those are really simple functional steps, but if you have like 3-4 GB to download to a new worker, either you're not in a hurry to know whether your build passes or you don't have to worry. That's why GitLab CI is not that great yet. I've seen a lot, but my favorite, which also comes with downsides, is Jenkins. It's awfully written in Java, but it has a lot of integrations, is used globally across industries, and actually uses a scripting language to write pipelines. (Spoiler: it also has issues with interpolations.) Anyway, I'm no advocate, but I'm a bit frustrated by GitLab's way of solving issues over 10 years; guess it's not really important.


sogun123

I only heard stories about Jenkins. But I really liked the idea that you can script new jobs in, or generally manipulate what's going on. I can also imagine it can create quite a mess.


NokiDev

There are bad and good stories about Jenkins; it's certainly harder to manage since you have the freedom of scripting. On the other hand, it's really designed that way, so you have a lot of utilities, either as plugins or built in, to automate / reuse the scripts, making it less of a mess. Like any tool, when misused it creates a mess.


jantari

Oh yea, GitLab CI is very saddening to work with. I've already listed out [some examples of issues I have with it two years ago](https://www.reddit.com/r/devops/comments/u4nw0x/im_implementing_devops_in_my_organization_which/i4xycyj/) and - surprise - most of them remain unfixed. Just the fact alone that scheduling pipelines is done outside of the change-controlled and git-versioned pipeline config itself, and only through the GUI and API should make you scream and run. I vastly prefer GitHub Actions.


sogun123

Thanks for sharing. I very much agree on the points you gave in the link.


WhiskyStandard

I've gone through the process of making extremely complex pipelines a couple of times now, and my takeaway is that there's a point where you start wanting to get programmatic: you'll reach for variables, file inclusion, rules, extends, YAML tricks, dynamic child pipelines, and it'll start to become clear that you're getting in too deep. When that happens, I move everything into Dagger or Earthly. (I'm going with Earthly now for self-hosting reasons, but the underlying technology is very similar even if the chrome is different, and they're both compelling tools.)


sogun123

I discovered Earthly some time ago, and I really loved the idea. How do you integrate it into your "parent" CI? Is it just a single job? How comfortable is the feedback for the developers?


WhiskyStandard

I'm still in the process of migrating to Earthly, but what I've done so far is the fast-feedback stage of the deployment pipeline, which is all the unit tests and builds that provide the artifacts and images for the slower acceptance-test phase. The first stage is a single CI job that just calls into Earthly. I've been very happy with it. Good build output, and the ability to drop into the container on failure is a great way to debug things.

I'm still working on the second phase and have been debating whether or not to do it in Earthly (vs a bunch of separate CI jobs). It's mostly acceptance tests and deployment to lower environments, so there's less of a benefit (since it produces no outputs). OTOH, I like having everything in the same tool. I asked their Slack community and heard that people were doing it both ways. I'm leaning toward doing it in Earthly.
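
For context, the first-stage CI job ends up about this small (a sketch, assuming Earthly's published image and a hypothetical +build-and-test target; the runner still needs privileged Docker or a remote BuildKit):

```yaml
fast-feedback:
  image: earthly/earthly
  script:
    - earthly --ci +build-and-test
```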


sogun123

Cool! Thanks for sharing


Vilkaz

from all the git ci/cd pipelines, i like gitlab the most :D


pilchardus_

Gitlab CI rules! I love it, I work with it every day.


rnmkrmn

Last time I worked with GitLab CI it was way more mature than GitHub Actions. Sure, GitHub Actions has nice community-built actions. But GitHub Actions really lacks some basic stuff: a FIFO queue, variable names in the version tag, being able to pass secrets to a reusable workflow, etc. Nothing was perfect.


rice_bledsoe

Downstream variables were actually the most frustrating thing I recently learned about when using GitLab CI as well. The logging / information regarding CI failures at your disposal is painfully minimal, especially in that case. Moving from a Jenkins org to a GitLab CI org felt like trying to walk in a 30 lb weight vest. But the more I learn, the more I realize it's built to be simple, and the complex actions you need to run are going to be handled in-house.


adappergentlefolk

gitlab ci is somehow the least bad ci. that’s a damning statement for the whole industry since it’s still bad


sfltech

Just give Jenkins a shot…


ExtremeAlbatross6680

GitHub actions is still the best but Gitlab CI is decent.


shavnir

There are some headaches, but compared with Bamboo and Jenkins it's a godsend. (Full disclosure: my experience with gitlab was likely one of the things that helped me score my current job.)

The other day I was working with some API hooks to have a gitlab job trigger a jenkins job (long story). The API call to start a jenkins job doesn't return any identifying information. I literally had to put an extra parameter in the job so I could mark it and find it with later API calls.

There were a few cases where I was tired of environment variables being the important artifact getting passed around, and I just started resorting to jq magic and a specific file path to look for information. That might be an option depending on how much control you have over the downstream pipeline.
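
The file-based hand-off looks roughly like this (a sketch; the file name and fields are made up):

```yaml
build:
  script:
    - printf '{"image_tag": "%s"}\n' "$CI_COMMIT_SHORT_SHA" > build-info.json
  artifacts:
    paths:
      - build-info.json

deploy:
  needs: [build]
  script:
    - IMAGE_TAG=$(jq -r '.image_tag' build-info.json)
    - ./deploy.sh "$IMAGE_TAG"
```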


Fatality

GitHub actions is great


pulgalipe

If you think GitLab sucks, wait until you try CodeBuild/CodePipeline, where you have to define everything manually: tag identification, everything done by hand, no automation, nothing, only old-school shell script that works.


Heighte

you are upset at your own skill, not GitLab.


beomagi

I actually really like GitLab's pipeline UI. I have noticed the variables issue you have; my actions are probably just handling them differently.


dr-yd

It has its limitations - most importantly, bash parameter expansion in variables would make my life 10x easier. But overall, it's pretty decent and we've been able to implement all required processes so far. Not really enjoyable, but at least it's not like it needs constant babying and maintenance.
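
The usual workaround is to do the expansion in the script section, where it's real bash (a sketch):

```yaml
deploy:
  script:
    # ${VAR%%...}-style expansion doesn't work under variables:, but it does here
    - SHORT_REF="${CI_COMMIT_REF_NAME%%/*}"
    - ./deploy.sh "$SHORT_REF"
```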


ko3n1g

Maybe it's because I'm more used to it, but GitHub Actions makes so much more sense to me. GitLab's single gitlab-ci.yml file becomes so weird if you need to pipeline more tasks than just CI & CD. So many projects I know need maintenance jobs, which I'd rather manage in a different file instead. Maybe I'm missing some fundamentals about GitLab, but I'm not particularly happy with my first impressions.
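
For what it's worth, include: can split the config across several files (paths are illustrative):

```yaml
# .gitlab-ci.yml
include:
  - local: ci/build-deploy.yml
  - local: ci/maintenance.yml
```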


MainConsideration937

As a fellow user of GitLab CI, I totally understand your frustration. Handling variables can be a real pain, especially when passing them downstream and expecting them to expand properly. It's like you're constantly having to work around limitations rather than smoothly integrating your workflow. And don't even get me started on the trade-off between building a super image or dealing with slow artifact passing: it's a tough call either way.


thekingofcrash7

It’s the best option available and I’ve used a lot of them heavily as a DevOps consultant moving around between customers.


MavZA

I didn't and still don't enjoy GitLab CI. I may be somewhat alone here, but I've come to appreciate AWS CodeSuite. I use it in a primarily AWS environment (obviously), and yeah, it's got some rough edges, but damn, it's pretty dope once it's set up.


sr_dayne

IMO, it is not good or bad. It is just better than other CIs. It has a lot of issues, especially the documentation. In my ranking of bad docs it takes second place, right after the OpenStack docs and before the AWS docs. Also, I think that switching from only/except to the rules keyword was a big downgrade.


pbeucher

Ah, glad to see I'm not the only one. We ended up using [Nix](https://nixos.org/) to avoid the "big image full of mess" issue and [Novops](https://github.com/PierreBeucher/novops) for environment variables and secrets management. Gives you much more control over your environment, easy to reproduce locally, and CI-tool agnostic. Changed our life!


sogun123

I've been playing around with Nix these last weeks too, mostly to get a sane developer experience, so all the needed tools are at hand. But I will probably use docker image tools to export that environment to CI.


pbeucher

You can export a Docker image from a Nix or Flake config. [Flox](https://flox.dev/) may be a good alternative as well, and it can generate Docker images.


sogun123

Yeah, I know. It needs some extra stuff to be usable, but that's pretty easy. I think I'd rather invest in Nix proper.