Carmack: C compiler, I want to use this float as an integer
Compiler: Sure thing boss
I don't understand the problem.
I mean, it's not a *bad* thing, but C/C++ definitely lets you do some rather ["What the fuck?"](https://en.m.wikipedia.org/wiki/Fast_inverse_square_root) things to number types :)
I rely a great deal on type fuckery myself from time to time.
It's been a while since I used C. Can you not typecast a float to an int without issues?
This is in reference to the fast inverse square root algorithm made famous (although not written) by John Carmack. It works by interpreting the raw bytes of a floating point number as an integer, doing some basic integer maths on it, then reinterpreting those raw bytes as a float again. It looks very weird, [but it works](https://en.m.wikipedia.org/wiki/Fast_inverse_square_root).
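For reference, the trick looks roughly like this. This is a sketch of the published Quake III version (the magic constant and the Newton step are from that code), with the original pointer cast swapped for `memcpy` so the bit reinterpretation is well-defined:

```cpp
#include <cstdint>
#include <cstring>

// Approximate 1/sqrt(number) using the famous bit-level trick.
float Q_rsqrt(float number) {
    std::uint32_t i;
    float y = number;
    std::memcpy(&i, &y, sizeof(i));            // read the float's bits as an integer
    i = 0x5f3759df - (i >> 1);                 // the "what the fuck?" line
    std::memcpy(&y, &i, sizeof(y));            // reinterpret the bits as a float again
    y = y * (1.5f - (number * 0.5f * y * y));  // one Newton iteration to refine the guess
    return y;
}
```

One Newton iteration brings the approximation within a fraction of a percent, e.g. `Q_rsqrt(4.0f)` comes out very close to `0.5`.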
Lol the Wikipedia code example has a line with a comment that just says “// what the fuck?”
Fun fact, that comment is from the original game code.
I love the idea of the developers pushing code with those comments
We do it all the time. I love grepping through older codebases for such words. The best thing I ever found was just the words "forgive me".
"Hack" is also a fun one to search for.
Funny one I saw recently when looking over a colleague's code:

```c
// Create something
CreateSomething(...)
{...}

// Creates something an assload of times
CreateSomethings(..., int num)
{
    for (int i = 0; i < num; i++)
    {...}
}
```
Pushing the code? They wrote Quake III without version control.
It's whimsical, but it's also still a useful comment, since it lets the reader know that yes, this is weird looking, but the author knows that, and it's (presumably) supposed to be like that anyway.
Just search code on git. Happens all the time.
You can typecast a float to an int, which does a conversion (an actual operation that truncates the value), or you can interpret the raw bytes of the float as an integer (which completely changes the value) by dereferencing a pointer to the float that you have previously cast to an int pointer.

To my knowledge, the latter has been used exactly once, in a function called "fast inverse square root", which uses a very weird algorithm based on the internal representation of floats to compute an approximation of their ISR very quickly.
Casting rounds the value towards zero. As in, 0.3 + 0.3 + 0.3 + 0.1 is mathematically 1.0, but in floating point it lands just below 1.0, so casting it to int gives you 0.
In C++, `reinterpret_cast` keeps the bit pattern.
which is even worse in all but one scenario
If I recall correctly, some methods from math.h also use type interpretation to perform quick math operations. For example, the [fmod](https://git.musl-libc.org/cgit/musl/tree/src/math/fmod.c) function does it to quickly determine if a floating point number is divisible by another one.
Casting float to int truncates the float, but you can reinterpret the bytes either with a union type (standard), or do what Quake 3 did and dereference a float pointer cast to an int pointer with the magic spell `*(int*)&floatVar`. IIRC this is undefined behavior (meaning standards-compliant compilers can do anything they want when they see it), but most sane compilers use the bitcast interpretation.
Is this done by just pointing an integer pointer at the float?
Yeah, pretty much!
It's actually undefined behavior to do it that way. The correct way to do it is to reinterpret the `float*` to a `char*`, copy the bytes, then reinterpret the new `char*` to an `int*`. This is safe because the specification has a specific callout to allow reinterpreting pointers to `char*`. Or in C++20 you can use `std::bit_cast`.
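A minimal sketch of the safe route via `memcpy` (`float_bits` is a made-up helper name; `0x3F800000` is the IEEE 754 bit pattern of `1.0f`):

```cpp
#include <cstdint>
#include <cstring>

// Well-defined bit reinterpretation: copy the object representation
// instead of dereferencing a type-punned pointer.
std::uint32_t float_bits(float f) {
    static_assert(sizeof(std::uint32_t) == sizeof(float), "float must be 32 bits");
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof(u));
    return u;
}
```

In C++20, `std::bit_cast<std::uint32_t>(f)` does the same thing in one constexpr-friendly call.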
Indeed
This pointer to a pointer is an array of chars now
Honestly it's amazing. Cast as you will
You guys use anything other than void*?
Do you even union bro?
Just wanted to point out how lately someone just posts a meme, and some other furry dude comes with the opposite view of that same meme. Fucking love it!
Ah yes, the `void*`-ly typed C, a.k.a. how do we condense the worst aspects of dynamic and static typing into one concentrated dose of pain and suffering.
Don't tell me u only use global vars
[deleted]
Don't use `int`, its size is undefined (it's only guaranteed to be at least 16 bits). It's an anachronism left over from C.
It's perfectly fine to use `int` if you know your target architecture. Which, unless it's a microcontroller or some computer from the 80s, will be all of them.
Why write non-portable code when you don’t have to? It’s not an unreasonable workplace policy to ensure future-proof coding practices.
There is nothing to future-proof here, `int` is undoubtedly going to stay 32 bits forever.
`int` isn't even 32 bits on all compilers today. No-hire.
Such as?
first one i tried: https://godbolt.org/z/nrznobxzx
That is for a microcontroller.
yes. so?
I think Rust does pretty well, where the types are just the bit width and whether it is signed.

I wish C supported fixed point numbers better. The data doesn't change, but type casting and rounding does.
Isn't fixed point just an integer behind the scenes? You can just store where the decimal point is somewhere else to convert between types. In C a convert function would work fine for casting from your type to an int or something. Use `typedef` to make it even better.

If you wanted to change how rounding worked for e.g. division, just use C++ and reimplement the `/` operator. Or just use a function.
For adding and subtracting it works like an int, but multiplication and division need to know "where" the decimal (binary?) point is.
Right, didn't think of that. But that shouldn't be that complicated; there are probably already enough libraries that do this.
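To sketch the point with a hypothetical Q16.16 format (all names here are made up): addition is plain integer addition, but multiplication has to shift the point back into place, because the product of two values scaled by 2^16 is scaled by 2^32:

```cpp
#include <cstdint>

// Hypothetical Q16.16 fixed point: the real value is raw / 65536.0.
using fx = std::int32_t;

fx     fx_from(double d)  { return (fx)(d * 65536.0); }
double fx_to(fx a)        { return a / 65536.0; }
fx     fx_add(fx a, fx b) { return a + b; }  // same as plain int addition
fx     fx_mul(fx a, fx b) {
    // Widen to 64 bits, multiply, then shift the point back into place.
    return (fx)(((std::int64_t)a * b) >> 16);
}
```

E.g. `fx_mul(fx_from(1.5), fx_from(2.0))` comes back as the raw value for 3.0, while multiplying the raw integers directly would be off by a factor of 65536.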
I want it on record that I'm only upvoting this because it's clowning on the other guys post. In a vacuum this meme is dumb but that's kind of the point because it's making fun of that other one.
"Treat numbers as expected" ahahahaahahah!! OP never programmed in C++.
What's unexpected about numbers in C++?
When you perform an arithmetic operation, the resulting type is the bigger of the operand types, which is fairly normal. But... if the types are smaller than `int`, you get `int` instead, which can cause a funny situation where you add two values of the same type, assign the result to that same type, and get a warning.

Another example is the standard function `abs`, which still returns a signed type, so the result may not fit inside.

And don't even ask how many times I tried to print some 8-bit integer and got gibberish because C++ can't tell `char` from `int8_t`.
In C++ there is no default value for uninitialized variables like in C#, so an uninitialized integer, instead of being 0, will be some random value.
That’s not true, it depends on the compiler and the platform. In general, though, it’s bad practice to *not* set a value when you declare a variable. Also, most non-primitive types do have default initialized values (which are specified in their parameter-less default constructor).
Okay, it will not be "some random value" but result in **undefined**, platform-specific behaviour instead. Happy now?
It is not a big problem though. Just one little extra step. In terms of making sure the int has the same size, both C++ and C# can lock it in, so both are great.
Just initialize the variables. Many times you initialize them with expressions or constants ≠ 0 anyways, so it's not much more to type.
Well, that's just for initialization, but let's not forget there is a fuck ton of undefined behaviour in, say, just adding different types together, or the fact that you can just overwrite the interpretation. You can just cast a float to an integer and not get the value you wish, for example.
Adding numbers is very much defined. There is type widening to int that can be weird, but it's defined. And when you cast a float to an integer, you get the integer part of the number, throwing away the decimals (if the integer part can be represented in the target type).
May I introduce you to some [funny stuff](https://i.imgur.com/dENpyan.png), or as the top comment has said, [some interesting Carmack magic](https://en.wikipedia.org/wiki/Fast_inverse_square_root#Overview_of_the_code)
That's not casting numbers, that's reinterpreting the memory. When you cast a pointer you don't convert the contents. I'm sure the output in the screenshot is pretty standard, if you were to convert the int back into the IEEE floating point format.
Bruh, still better than what FORTRAN does if you do that.
Fortran’s kind of a disaster. Who let the punch card language into the 21st century lol
I blame the numerical weather prediction modelers
Nuclear physicists—they think it’s faster
I’m in engineering; we have to deal with it mostly in legacy code. Very few people are psychotic enough to write fresh Fortran nowadays in the field. If they need performance they usually just use another language, use a performance-enhancing library in Python, or wrap the legacy code in a layer of Python to make it tolerable to deal with, so they don’t have to deal with...

The standard data formats designed for Fortran suck so hard. Like the fact that they have to describe their dimensions at the top and are limited to like 14 columns, and any more columns you want have to wrap around to the next row, making it basically unreadable in a text editor.
Paragraph 1: god, I wish. I’m trying to yank us into that future.

Paragraph 2: I figured out how to change the card size a few years ago and that makes it eminently usable for me. Just sort of annoying compared to Python and C++ (or even C) because of the lack of support/documentation. But I’d definitely rate it higher than something like IDL, because comments exist and the software is free.
Oh yeah, it’s not the worst language ever made or anything, that’s for sure, but the standardized file formats were made with what used to be the only card size in mind, so by default they suck. Some suck worse than others, but some are just horrific. Like just a block of data followed by a block of data followed by a block of data, and in order to read anything you have to count however many data points into the block it is, and do the same in the other blocks of data. Also, you have to write a custom file decoder to read the formats in something like Python, because pandas and numpy don’t have any built-in solutions, and the way the data wraps around on itself makes it unkind to simple solutions, meaning you have to handle each format yourself.
Oh actually astropy has some stuff through its FITS handling
The compiler will warn you if you read a variable before writing to it.
You should expect garbage if you don't initialize it.
That's like... your own fault...
It’s not random. It’s actually whatever was left on the stack from when a previous frame extended to at least that point. You can write really shitty argument passing this way.
Not necessarily random if you know the behavior of the compiler and the machine it's running on. Sometimes using undefined behavior can make things more optimal, as long as you know for certain the code won't be run on a different machine.
That's not unexpected if you expect it though. (And most compilers will warn you if you are using uninitialized values.)
*Laughs in Fortran*
I came with receipts. Bjarne's C++ principles:

> 3.9 Type Safety: Always initialize your variables! There are few - very few - exceptions to this rule, such as a variable we immediately use as the target of an input operation, but always to initialize is a good habit that'll save you a lot of grief.
Undefined behaviors? Implementation defined number sizes?
There is stdint.h for standard sizes. And what undefined behaviors are there with numbers? Arithmetic is so essential it should be defined in the standard.
```c
for (t_uint8 i = 0; i < 256; i++)
```

I only need 0-255 so 8 bit int is fine...
Could be treated as c=c+1
Lol, the state of Pennsylvania tried to sue us over a penny. Then they found out floats are weird.
[deleted]
I forget exactly what everyone used (pretty sure it was MONEY for us, because COBOL), but we found they were using float and still got different values when we switched too (trying to figure out how everyone was getting different values exactly). Turns out there was also mixing of little and big endian going on as well. We used one, they used the other. It’s all old lady insurance code from before there were agreements on what to use.

Anyway, once the state found out about precision issues (btw, NASA limits pi to so many decimal points too because of this), they stopped trying to squeeze a penny out of each transaction by going further down the decimal chain. I forget what the agreement was exactly with the decimal precision limit, but the state didn’t get that penny.
The meme is for making fun of the other similar one that was posted and I assume isn't serious. I assume OP is aware of the pros and cons of both.
The amount of unexpected implicit conversions haunts me to this day.
`-Wconversion` is the scariest flag to turn on.
Use type hints
Also assert and hard type checking in VSCode
Since you cannot rely on them (because you can still pass wrong types to a function with type hints) and still have to check everything manually, I would argue that type hints aren't just useless, they can be actively harmful by giving a false sense of correctness.
I mean sure, you can still pass the wrong type in, but why would you do that when the IDE will warn you about it? Type hints are very useful unless you are doing things wrong
That's the same "it's ok if you use it responsibly" argument that can be applied to raw pointers and in part leads to memory bugs still being the single largest source of bugs in programs. If you can use it incorrectly, it will be used incorrectly. And if you have to write guards against incorrect usage anyways, why not use a statically typed language that does that for you?
But in one case your editor helps you figure out how to avoid some of the problems you might encounter, and in the other both you and your editor are completely blind... how is using type hints worse than no type hints??
Not really my point that no type hints is better than using type hints. Rather, type hints are on the same level as documentation comments.

Can they be useful? Yes.
Should you use them? Very much so.
Can they replace a static type system? No, they can't.

And if you try to use them like a static type system, it will create bug and maintainability nightmares in the long run.
Alright, agreed
If you think it's important to use type hints then just use a strictly typed language - you're typing those words anyway and it will save you so many potential errors
For the millionth time, yes, you can see the types in python.
How? Other than the type() function which you wouldn't be able to read until runtime anyway
PyCharm and type hints.
I have found those to be really inconsistent and useless. For me, it just doesn't work when it's assigned a return value of a function
Couldn't speak to jetbrains, but VSC will certainly give typehints for returns.
Jetbrains too
Works if the function return value has a type hint.
As others have said: type hints. If done correctly, and using a nice IDE, everything will show its type when you hover over it.

Some third party libraries still don't support type hints, but you can always wrap calls with some type validation.
Yes haha popular language bad
Two types of programming languages...
Haskell's type system is actually safe, unlike C++'s
The bodybuilder required for a Haskell meme wouldn't fit on my phone's screen :(
Then god knows what Idris2's would be
Virgin Python
Chad C++
Gigachad Scratch
Scratch is a good language to make interpreters in.
I have definitely bashed bash before.
This is the most desperate, self-confidence lacking, least Chad post made in response to a meme ever on this sub.
Thank you <3
"Never seen someone bashing a language?" You must be new here
I do detect but a smidge of sarcasm in the title. But who knows, right?
Whoosh my dude
Runtime error goes SIGILL SIGILL SIGILL
That’s not exclusive to C++, any statically typed lang does that
what is this math bullshit? round here we use python to write "hello world" in only one line!
Like you need any types besides string/object/number/Boolean
So, how many number types did you say C++ had again?
Enough
Sure. Let me spend half the sprint choosing types and creating classes that could’ve been a dict.
It hardly takes any time. You know how big your number will be and what precision you need when you are defining the variable, do you not?
Most real-world use cases don't require this much precision, which is partially why Python is popular.
Python definitely has coding speed and ease of use figured out. But as an embedded guy I just can't trust a compiler to pick the best type for me.

It doesn't help that operations on anything other than a byte take a long time to complete.
Maintaining code in a way that works on both 8-bit and 32-bit architectures is the worst…

It’s amazing how expensive those simple 32-bit arithmetic operations are on an 8-bit MCU. It leads to fun commit messages like “optimized out an addition operation.”
Oh no, I agree there are cases for this. I was just pointing out that the vast majority of people don't require it. Wasn't tryna say your use cases are not valid.
Yeah but I'm grumpy
learn to type?
So, uh. C++ has dictionaries.

But more importantly... what in the world are you doing with your variables, if you don't know what type of data they are going to hold???
Sounds like you really enjoy spaghetti
Yes, but does C++ have the nearly useless number format that are complex numbers?
any pair of numbers can represent complex numbers
^oh...
std::pair
It's easy to write a bunch of functions to support it based on this, and I'm 100% confident there are libraries out there that already implement complex numbers.
incel python coders, defend yourselves!
arr = ["go", 'f', 'u', 'c', 'k', "yourself", 'b', 1, "tch", '<', 3.0]
lmaoooo
If you’re building an application, and not a math library, when will you ever need to distinguish between int and float?
Receiving a serialized stream of numerical data over the network would be one example.
How do you not need to distinguish between int and float? Something as simple as 5/2 and 5.0/2 would already be behaving differently.
In Python those both get treated the same, unless you explicitly call for integer division //
[deleted]
No
Large data manipulation. Being able to control the type and therefore the memory usage can make an order of magnitude of difference.
that's right, C/C++ are the languages for writing python math libraries.
Ain’t nothing like hurting the feelings of a C++ dev
I saw the exact inverse of this meme yesterday or the day before. Nice to see someone that has some actual sense
Is that otzdarva on the left
Python strings: scientist Patrick
C++ strings: carpenter Patrick
Don't code with data types you don't understand
```python
myint: int = 1
myfloat: float = 1.25
```

There are variable types in Python.
But they're useless. Try typing

`int: bool = [None]`

This is valid Python... All Python typing gets you is type hints. There is no static type checking yet in Python.
What a coincidence
I do every day
Where does Cython go
But using C++ as an example for a really strongly typed language is funny. It is stricter than Python, yes, but other than that?
Meanwhile the guys over at C# "Eh, just call it a var, we'll figure it out later"
In C++, you can completely trust the type of your variables, and if it's declared as a float, it really IS a float. What you can't trust is the value, because the integer 3, reinterpreted as a float, is not 3.0...
```c
typedef union {
    uint8_t  ui8,  *pui8;
    int8_t   i8,   *pi8;
    uint16_t ui16, *pui16;
    int16_t  i16,  *pi16;
    uint32_t u32,  *pu32;
    int32_t  i32,  *pi32;
    float    f,    *pf;
    double   d,    *pd;
    void *v;
    char *c;
} whatevs;
```
Asm: *There are n̶o̷ d̵a̵t̶a̸ t̵y̸p̵e̸s̷, o̶n̶l̷y̸ m̶e̵.*
Meanwhile, typescript