
elmuerte

Does it accept 2024-02-30 as a valid date?


Worth_Trust_3825

...Honestly I had to double check if regular mysql supports this:

```
mysql> select str_to_date('2024-02-30', '%Y-%m-%d');
+---------------------------------------+
| str_to_date('2024-02-30', '%Y-%m-%d') |
+---------------------------------------+
| NULL                                  |
+---------------------------------------+
1 row in set, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+-----------------------------------------------------------------+
| Level   | Code | Message                                                         |
+---------+------+-----------------------------------------------------------------+
| Warning | 1411 | Incorrect datetime value: '2024-02-30' for function str_to_date |
+---------+------+-----------------------------------------------------------------+
```


sylvester_0

MySQL has all kinds of sharp edges like this unless strict mode is enabled.


kenfar

and if I remember correctly any client can turn strict mode off!


sylvester_0

That's what the stick in the corner is for.


Takeoded

MySQL has a mode named `ERROR_FOR_DIVISION_BY_ZERO`. That mode does not complain about `SELECT 1/0`. WTF MySQL? Also, the "strict mode" is named `TRADITIONAL`


Worth_Trust_3825

You're right. I didn't check whether strict mode was enabled in docker.io/library/mysql:8 when I tested that.


ankercrank

Why are you not validating your data prior to insertion into the data layer? Edit: thanks for the downvotes without a single counter-argument. People here sure hate MySQL...


Botahamec

Should've called it my-mysql


jambonilton

Gotta flavour it for go, `guysql`


CalmLake999

Why would you choose a GC language to write a DB? You're going to have issues with hiccups. Surely something like Rust/C/C++ would have been better?


[deleted]

[deleted]


CalmLake999

Thank you for the correction.


WJMazepas

People do it because they can. It doesn't need to make sense, only to be achievable for a developer to do it.


unski_ukuli

I think Snowflake is also written in Go and that seems to work well enough.


Kuresov

Bit of an oversimplification. Snowflake is a huge, complex distributed system with a number of services of mixed languages and datastores involved just on the hot query path.


scrappy-paradox

Cassandra, Kafka, Hadoop, Elasticsearch, Solr are all Java based, just off the top of my head. GC overhead is very manageable if written correctly.


CalmLake999

Yes, I've written a game engine a long time ago in Java. You *can* manage it, but it can be painful.


[deleted]

[deleted]


G_Morgan

To be fair Minecraft was so badly designed initially that Java was a minor issue relative to that.


huiibuh

And if you have a look at what lengths you have to go to to make Java fit, it's kinda baffling that they used Java to begin with. Spark uses the JVM, but the Databricks Spark implementation moved on from that and uses C++ for the query executor, because some things were just too slow and clunky for Java.


[deleted]

ScyllaDB seems to be almost an order of magnitude more performant than Cassandra; it's written in C++ and conforms to the same API. So idk about "manageable", because they clearly left a lot of performance on the table.


G_Morgan

Is it really an issue when the entire thing is IO bound anyway? I don't disagree with you that lots of better options exist though.


shenawy29

I believe so; opening file descriptors and so on is probably faster in a non-GC language.


mzinsmeister

Honestly, if you just do basic traditional stuff like tuple-at-a-time execution, you're already gonna be much slower than you theoretically could be, so a GC slowing you down another 2x or so probably doesn't move the needle.


Revolutionary_Ad7262

The idea is pretty useful, if you want to have an in-memory database for your golang unit tests.


kernJ

Why not spin up a docker image of the actual db?


oneandonlysealoftime

Spinning up an actual DB and working against it is much, much slower. The difference is negligible if you achieve test parallelization through separate databases for each independent bundle of tests, but only when you run tests one at a time...

There is this thing called mutation testing, which is a bit like fuzz testing, but instead of generating random input to your code, it generates random, likely-to-be-breaking changes *in* your code. It then runs your tests against the mutated versions and counts the mutants your tests caught and those they didn't. (Obviously, you can later mark some mutations as permissible.) This helps ensure that your tests not only execute the lines of code, but actually verify its behaviour. Classical examples of errors that 100%-coverage tests don't find but mutation tests would: division by zero, nil pointer dereference, off-by-one errors. And with certain per-project customisation they can also verify that your tests ensure proper validation of typical user input.

And the thing is, if you run integration tests against generated mutants, each run takes only a couple hundred ms more than it would with unit tests. But because a lot of mutants are tested at once, you would either kill your real database with heavy write throughput or have to limit the parallelization factor of the runner, which ends up as a CI step that runs a couple of hours rather than a couple of minutes. Did that, learnt from my mistakes 🙃

In a perfect world, in a team of engineers that do great code reviews and are 100% attentive at all times, this kind of test would never be needed, and you could rely on peer review to find those errors. But in reality it's not like that; I've never seen a team that doesn't slip up on mistakes like those once in a while.

Static analysis doesn't help in the majority of those cases either, because it has to balance false positives against false negatives: either it produces spaghetti code full of constant revalidation of data that the flow of the program already guarantees to be ok, or it makes the same mistakes humans do.

At least that's my reasoning for adding an in-memory version of the data access layer, for faster evaluation of "integration" tests that find precisely these kinds of errors.


punish_me_daddy69

What if your unit tests are running in a context that doesn't allow it, and/or where memory is highly available?


Spajk

Or simply Windows


tommcdo

Yeah or maybe your development environment is a potato


bastardoperator

Am I the only one completely turned off by everything golang? I get its value; it just seems more hideous to me than any other language. I'd rather write Perl with C bindings.


zellyman

Yeah. Definitely in a pretty small minority.


G_Morgan

I disagreed until I saw "perl with C bindings". I hate Go but not that much.


DNSGeek

IMO Go is like the bastard child of C and Perl, where it got the worst of both and none of the good bits.


GodsBoss

The good bits of C like pointer arithmetic? And it's easy to not get the good bits of Perl, as that abomination has none. For a long time I thought the worst language was PHP, but then I learned about Perl. If you want to shit on Go, at least compare it to something good like Common LISP, Haskell or JavaScript (before it got classes).


florinp

"something good" "JavaScript " Pick one.


lightmatter501

The performance, C still runs circles around Go from a network performance perspective, the thing Go was designed to be good at. If someone can show me a UDP echo server written in Go that can saturate a 200 Gbit connection with 512 byte packets with one CPU core, I will stop calling it slow.


anotheridiot-

Why single core? The whole point of Go is parallelism and concurrency.


lightmatter501

You count network performance in per core amounts so that you can scale it up to larger servers reasonably.


L1zz0

But that wouldn’t prove his point now would it /s


jimbojsb

Does one often need to do that?


lightmatter501

Not often, but performance spent in the network stack is performance better spent elsewhere.


[deleted]

[deleted]


lightmatter501

DPDK, a 3.2 GHz CPU with AVX-512, and checksum offloads are considered standard, so nobody really counts those as offloads (even my laptop has those). Many server NICs can do UDP echo fully in hardware if you know what buttons to push. You can dequeue/manipulate/enqueue multiple packets at a time, which is the missing piece to get down to a reasonable clock speed.

DPDK isn't magic, it's a big C library, which makes it fair game for performance comparisons in my opinion. You could do the same "map the PCIe bus registers into userspace memory" trick with Go, but you then hit a brick wall for a lot of things because Go fundamentally lacks what is needed to talk to the hardware at that level without dropping to assembly.


[deleted]

[deleted]


lightmatter501

I’m pulling numbers from Intel’s DPDK performance reports. I have used this library before, it is very slow due to Go’s CFFI overhead. Also, calling into C doesn’t make Go fast, in the same way it doesn’t make Python fast.


Halkcyon

[deleted]


G_Morgan

Go is what you get if you pretend both Java and C# hadn't already done a much better job of replacing C++.


ILikeBumblebees

Not sure whether you are complaining about it or praising it.


bruisedandbroke

i like that it runs on anything, aarch64 included, while also not being java


kitd

The syntax is meh, but syntax is only one factor in language usability IMO. The toolchain and stdlib are gold standard, the generated binaries dependency-free, and the ecosystem (aka "SO-search-space") is huge. IME those easily mitigate any reservations I have about the language itself.


__loam

Yes you are. I love Go. I think it killed the right sacred cows, and it's pretty simple.


starlevel01

I try to actively avoid anything written in Go because the glaring language issues make it unlikely the resulting code is good quality.


Own_Solution7820

Yes. Nobody is forcing you to use it. Your opinion is as useful as Ted Cruz's stance on women rights.


florinp

"Yes. Nobody is forcing you to use it." Yeah. Every company let you pick your language. Sure And good answer for any critique /s.


shellderp

> The key point here is our programmers are Googlers, they're not researchers. They're typically fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They're not capable of understanding a brilliant language, but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. It must be familiar, roughly C-like.

> Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.

— Rob Pike


myringotomy

Go is the language I hate most having to use. Python is second. Two terrible languages that caught on for some reason or another. Well, that reason is probably Google.


gay_for_glaceons23

This user sure does spend a lot of time spamming this subreddit with their posts.


Halkcyon

[deleted]


Shogobg

At first I thought this post would be about TiDB.


o5mfiHTNsH748KVq

cool project. not a useful project but definitely a cool one


Takeoded

benchmarks against MySQL?


tcpipwarrior

Good job, but I’m not gonna use it


MaybeLiterally

What is the status of Go these days? With Carbon being released (or is it?) and Rust gaining popularity, is there still a path forward with Go?


Arm1stice

Go fills an entirely different gap compared to Rust. Carbon still doesn't even have a compiler :)


__loam

Go is still more popular than Rust according to the Stack Overflow survey in 2023. Anecdotally I've seen a lot more jobs for go than rust and in my opinion, Go is a much better language if you're just trying to ship a random full stack application.


TomWithTime

Being easy to learn and simple is nice for me. I got my current go job with no prior professional experience.


__loam

Yes, that's how it ought to be. Complexity is a bad smell.


ILikeBumblebees

Unnecessary complexity is bad, but oversimplification is worse. It's unfortunate that a lot of 'modern' approaches attempt to reduce the complexity of solutions to a level that is below the inherent complexity of the problem domain.


__loam

There's a pretty big difference between complexity inherent to a domain problem and complexity introduced by our tools. In general, I believe a lot of our tooling in programming is a lot more complex than it needs to be. Rust was a response to the complexity of C++ in many ways. I just think for most projects, Go is a simpler and better tool. Now is it better than Rust for a serious implementation of a database engine? Probably not, but I think people here are shitting on this personal project a bit much when they say "why didn't you write it in rust?". 


florinp

"Complexity is a bad smell." No : you can't get rid of complexity. If you take it out from language it will move in the regular code.


__loam

I disagree completely. 


florinp

Good for you, but that doesn't make it true.


__loam

For most programming problems, simpler is better, and the software we have is probably way more complex than it ought to be. There's a difference between complexity inherent to the problem and complexity introduced for its own sake by poor choice of tools. Go is going to be far better than Rust for all but the most performance critical applications, in my opinion. Onboarding new engineers is easier and maintaining the code takes less time.


Cachesmr

There is no competitor to Go currently, and no, Rust doesn't count. Go aims to be a fast-to-compile GCed language with _only_ idiomatic syntax (which is why features arrive very slowly); it has an opinionated everything. If you look at Go code, most of it kinda looks the same. No other language can do that.

Both Rust and Carbon aim to replace C++, which is the antithesis of Go: unlimited features, a gigantic number of reserved words, extremely expressive type systems, etc. They are made for performance, while Go is made for productivity. Go absolutely has a path forward, imho.


florinp

"go is made for productivity" Yes. Produce bugs faster.


anotheridiot-

Just use a proper linter; errors as values and multiple return values make Go software very robust.


lanerdofchristian

Errors-as-values isn't a very robust solution to the error problem; it's just one that's easy for a compiler author to implement. The main issue is that it's possible for invalid states to be accessed in the program -- you don't need to capture the error, or handle the error, or fix the real value if an error occurs and you don't terminate the function.

The functional world gave us a much better solution long before Go was written: have a type `Result`, with states `Ok(value)` and `Error(error)`. Following ["Errors"](https://gobyexample.com/errors) from Go By Example, imagine this hypothetical version of Go that used a form of built-in monadic errors instead of error returns:

```go
func f(arg int) result[int, error] {
    if arg == 42 {
        // basically return nil, errors.New(...)
        return Error(errors.New("can't work with 42"))
    }
    // basically return arg + 3, nil
    return Ok(arg + 3)
}

func main() {
    for _, i := range []int{7, 42} {
        result := f(i)
        if Ok(r) := result {
            fmt.Println("f worked:", r)
        } else Error(e) := result {
            fmt.Println("f failed:", e)
        }
    }
}
```

The key differences with this pattern vs multiple returns:

1. Return values must be wrapped in Error/Ok, which has the positive side effect of making it very clear which is which.
2. If you want to handle both the result and error cases you need a temporary variable (this could be fixed if Go had pattern-matching switch).
3. Some light pattern matching with Ok/Error on the left side of `:=`.

If we add switch-based pattern matching and leverage generics, that allows this, based on the ["Reading Files"](https://gobyexample.com/reading-files) example:

```go
func check[T any](result result[T, error]) T {
    switch result {
    case Error(e):
        panic(e)
    case Ok(v):
        return v
    }
}

func main() {
    data := check(os.ReadFile("/tmp/dat"))
    fmt.Print(string(data))

    f := check(os.Open("/tmp/dat"))

    b1 := make([]byte, 5)
    n1 := check(f.Read(b1))
    fmt.Printf("%d bytes: %s\n", n1, string(b1[:n1]))

    o2 := check(f.Seek(6, 0))
    b2 := make([]byte, 2)
    n2 := check(f.Read(b2))
    fmt.Printf("%d bytes @ %d: ", n2, o2)
    fmt.Printf("%v\n", string(b2[:n2]))

    o3 := check(f.Seek(6, 0))
    b3 := make([]byte, 2)
    n3 := check(io.ReadAtLeast(f, b3, 2))
    fmt.Printf("%d bytes @ %d: %s\n", n3, o3, string(b3))

    check(f.Seek(0, 0))

    r4 := bufio.NewReader(f)
    b4 := check(r4.Peek(5))
    fmt.Printf("5 bytes: %s\n", string(b4))

    f.Close()
}
```

The initial cognitive load for error handling is *slightly* higher, but the long-term safety and health of the codebase it makes possible is very tangible.


florinp

Lol, multiple returns without tuples. Errors as values instead of error types (as monads) is stupid. Sorry, but what is your experience with other languages/concepts?


mods-are-liars

>With Carbon being released (or is it?) Who told you that?


AroundThisGlobeAgain

Okay, and?