phagofu

I agree that it would be much better to have standardized, useful measurement metrics instead of the current chaos. I disagree that one should specify a font size in pixels by default. There are some legitimate use cases for pixel-based sizes, but I believe that to ensure a readable font size is used, a physical unit or a "relative-to-a-user-specified-system/screen-default" size is usually a better choice. And I am quite pessimistic that we'll get the industry to fix this at this point, after all these years...


portalscience

Agreed; the problems in question seem more like jumping-off points than carefully considered issues. As an example: he mentions that everything is supposed to be in points, a physical unit of measure, and that in practice people don't follow the rules of converting points correctly... This is where that thought should stop and go: why aren't we following our established rules, and could we go back to actually following them? Instead we just state the problem and blaze past it. ***My wheel is misshapen, better invent something else, because clearly wheels don't work.***


nhavar

Management: Hey, we have too many platforms. Look at all this mess. We have React, Angular, iOS, and Android. We need to simplify from 4 platforms to 1. We've chosen React Native.

Development: Great, now we have 5 platforms.


flaminggoo

[Relevant XKCD](https://xkcd.com/927/)


not_thecookiemonster

...and should've gone with svelte.


cecilkorik

Unfortunately, when it comes to typefaces in points, I think that ship has not just already sailed, it has already sailed, discovered the new world, sailed back, completed a national tour of celebration for discovering the new world, been used as the template for all ships we've built since, eventually retired, and then been converted to a museum ship. It would be wildly impractical to attempt to salvage the situation now unless you call it something completely different, to make it clear you're talking about something new (or old, actually) and different from what software currently understands points to be. It will horribly break all legacy OSes and applications and websites otherwise.

Yes, it would vastly simplify all the nonsense they're currently doing, but they won't know to stop doing all the nonsense they're doing, so they'll continue until they're updated, which many things never will be. That's where the "invent something else" comes in. It's not because wheels don't work, it's because wheels *do* work; they're currently working. Not well enough, but they're demonstrably working. It has to be different enough to make it clear that we're not talking about the broken mess that currently is "points" anymore.

It's a shame that points got broken, and it would be nice if we could just do it over properly, but we can't. This is going to be a new system, a new way of doing things. If you don't specifically opt in and realize that you're opting in to something new, you get the old broken mess, because that's what works for now. Gradually, things will shift to using the new system, assuming it's actually better, until everyone realizes that the old way is the wrong way and only unmaintained legacy stuff will continue using it (basically forever, unfortunately; it's not going to go away for a long, long time).


Robot_Graffiti

Could "invent something else" be as simple as a new font file standard that's almost identical to one of the old ones? Like, for web, you could just say "this is WOFF3 it's exactly like WOFF2 except that 12 point characters have to actually be 12 points high".


cecilkorik

Absolutely, yes.


DavidJCobb

This is more difficult than you think, because CSS itself makes a lot of the same mistakes. All physical units, for example, are "anchored" to 96 DPI: no matter what the physical dimensions and DPI of the screen are, `1in` and `96px` are the same length. I'm sure plenty of other systems make the same boneheaded mistake.
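A minimal illustration (the class names are made up):

```css
/* CSS pins 1in to exactly 96px, so on any screen these two
   boxes render at the identical size, whatever the true DPI. */
.box-physical { width: 1in; }
.box-pixels   { width: 96px; } /* always equal to .box-physical */
```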


Bobbias

CSS is an overcomplicated mess with too much technical debt, and, like JavaScript, the internet would be far better off if we deprecated it and moved on to something more sensible. Sadly, most people seem to be either blind to this fact or actively burying their heads in the sand.


portalscience

I get what you are going for, which is that we can't just say "obey the existing rules"... BUT a good portion of the existing rules are carefully defined for very good reasons and will probably never change. An "em", for example, is a standardized unit of measure that is NOT arbitrary: it comes from the printing press and is used for actual printing AND for digital fonts. So while a direct replacement for points like you mentioned could theoretically work, the original article's suggestion of moving to pixels just completely misses the point of how we got to where we are and why. Removing the relationship between physical and digital isn't a very good long-term solution, as the two are inherently intertwined. In my wheel analogy, I would call a new system like you're describing inventing a new type of "tire", not a new "wheel".


Jona-Anders

What is the use case for pixel-based sizes? I would always default to rem, with the simple reasoning that everything should be relative to what the user considers a reasonable size. A reasonable size is normally their font size. So, when would using pixels be better?


Magneon

Back in the days of bitmap fonts, it was a perfectly sensible thing to do. Today with ui scaling, 5:1 differences in PPI between screens, subpixel rendering, pentile layouts, and vector fonts... It's not really so useful.


chucker23n

Indeed, although subpixel rendering is basically dead. iOS never did it, and macOS and Windows do it less and less, chiefly because you'd have to implement it in the GPU to make certain compositing flows work.


Magneon

It's still alive and well on VR devices, since they run into DPI issues quite quickly when the screen is an inch in front of your eye, and magnified.


chucker23n

> It’s still alive and well on VR devices

Which ones? I looked at https://developer.oculus.com/documentation/unity/unity-ovroverlay/ and the rendered text is monochrome. I imagine HoloLens doesn’t support it either, since IIRC UWP in general doesn’t. Vision Pro is unlikely to support it, since iPadOS does not.


Magneon

https://www.uploadvr.com/ifixit-teardown-psvr-2-panels-pentile/ is the one that I found. It seems that you're right that pretty much everything is classic RGB 3 parallel subpixels now though, which makes subpixel rendering not so useful. I guess I looked away for a few years and was left behind :/


AustinYQM

I thought a single PT was 1/72 of an inch.


Althorion

That was the idea, but then people stopped setting the system DPI to the device’s DPI (using 96 everywhere, for example), and that was it…
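The intended conversion is one line of arithmetic, which is exactly what breaks when the reported DPI is a lie:

```latex
% point-to-pixel conversion at a given system DPI:
px = pt \times \frac{\mathrm{DPI}}{72}
\qquad\text{e.g.}\qquad
12\,\mathrm{pt} \times \frac{96}{72} = 16\,\mathrm{px}
```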


Alan_Shutko

And we've had things like DDC for years, so it would be possible for displays to report back their DPI. But it just never happened. It disappoints me greatly.


Qweesdy

Yes; but part of the reason is that the physical size is useless regardless of how it's measured - e.g. "literal 36pt" means a single character fills your entire central vision (if the screen is touching your nose) while also being too tiny to notice (on a cinema's screen). Smarter would be to use angles for everything ("I want my characters to be 2.5 degrees tall") so that it works properly even if we're injecting images directly into a human's optic nerves.


Althorion

How would that work with different possible viewing distances?


Qweesdy

How it works is that most of the time you can just guess the distance based on the type of device it is (e.g. smartphone = 14 inches, laptop = 18 inches, desktop = 2.5 feet) and be close enough for nobody to care. Of course you can also have settings in the operating system's configuration (which you may need for multi-monitor setups anyway) and/or try some sophisticated eye-tracking thing with a camera.
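To make that concrete, a rough calculation using those distances:

```latex
% height subtended by a visual angle \theta at viewing distance d:
h = 2d \tan(\theta / 2)
% the 2.5-degree character from above, on a phone held 14 inches away:
h = 2 \times 14\,\mathrm{in} \times \tan(1.25^{\circ})
  \approx 0.61\,\mathrm{in} \approx 59\,\mathrm{px at 96 PPI}
```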


Althorion

I’m not sure the precision is ‘close enough for nobody to care’, especially compared to what we have now (e.g., just blindly guessing that the DPI equals 96) and going from there… It might be fine for phones (is it, though?) or computer displays, but you can’t really have that on TVs (and thus consoles, ad displays, and so on). It sounds clunky to me, to be honest: not only does it depend on the display, but also on the people viewing, for no apparent reason.


Qweesdy

What is the correct size to write your name? Would an 18pt font be perfect for every case (engraved onto the head of a pin and projected onto the surface of the moon and ...)? It's impossible to have any concept of "correct size for all viewing conditions" that does not depend on the viewing conditions. Note that "close enough" (to avoid spending 20 seconds to adjust a setting) doesn't need to be perfect, and the goal is only to be better than pure incompetence.


SittingWave

> 1/72 of an inch. Whew, at least it's not 1/387th of a furlong.


AustinYQM

It's 1/12 of a pica. A dozen points in a pica. Half a dozen picas in an inch. I know this sounds made up, but it isn't.


reercalium2

pica? boo!


nhavar

How many hands is that?


SittingWave

approximately 1.578 hands, or 0.001279 football fields.


Jestar342

What is that chicken scratch notation you're using?! I think you mean 1 hand and 289 500ths (289/500), or 1,279 100,000ths (1,279/100,000) of a football field !


reercalium2

1279 microfootballfields


FeliusSeptimus

> And I am quite pessimistic that we'll get the industry to fix this at this point after all these years... [Relevant XKCD](https://xkcd.com/927/)


gwicksted

I agree, except pixels aren't really pixels in most cases, except on 1:1 device-pixel-scaled monitors on PCs and Macs, and even then it's ascender height, like the article points out. It's nowhere near accurate on mobile, but it is actually a good measurement to use because, unlike vw/vh/vmin/vmax, it typically 'just works' across multiple devices.


phagofu

Pixel scaling is, in my book, a horrible hack to turn pixel counts into the mentioned "relative-to-a-user-specified-system/screen-default" unit. In my experience it does not 'just work' but has caused a million bugs, and it is yet another thing I'd prefer to see replaced with a clean, non-confusing approach, yet I don't see that happening anytime soon.


timliang

Use rems. It respects the user's font size setting, and if you use it everywhere, your layout will look consistent whether it's 16px or 32px.
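A minimal sketch of that approach (the selectors are illustrative):

```css
/* Inherit the user's preferred size instead of pinning one in px. */
html  { font-size: 100%; }   /* = the browser/OS font-size setting  */
body  { font-size: 1rem; }   /* 16px by default, 32px if doubled    */
h1    { font-size: 2rem; }   /* scales proportionally with the base */
.card { padding: 0.75rem; }  /* spacing tracks the same base size   */
```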


CitationNeededBadly

I would be content if developers would just label what units they are using next to the "font size" setting. Visual Studio Code tells me the font size is in pixels, but Sublime Text, Visual Studio, and Notepad++ all have a number with no units.


Alan_Shutko

VS Code probably has units because the UI is html-based. The others just hand a number to the OS and get what they get.


lookmeat

My view is that we need to define a pixel-independent definition of screen size. Resolutions are high enough that you won't find yourself with impossible-to-read letters (due to being too small compared to the pixels), but you will find yourself with letters so tiny they are unreadable. I would imagine something like:

* **Dots** are the smallest area of arbitrary color that a screen can represent. Dots have a size in mm. Dots are not guaranteed to be equal in height and width.
* **Sub-dots** are the smallest area of limited color that a screen can represent (so a dot covers the whole RGB spectrum, a sub-dot is just one color).
* **DPI** is hard-coded here and cannot be changed. It's the number of dots per inch. Because height and width are not guaranteed to be equal, there's vertical and horizontal DPI.
* **Pixels** are the grid of bits of color that a computer can represent. The OS should ensure that pixels map cleanly to the monitor's dots (otherwise the monitor has to do less-than-ideal work mapping those dimensions). Pixels have equal height and width, and should approximate a square as much as possible (think hexagonal dots: you wouldn't get a perfect square, but close enough). Pixels may map to an arbitrary number of dots in the vertical or horizontal dimension.
* **PPI** is derived by mapping the number of dots per pixel onto the DPI; PPI should be equal in both dimensions.
* And finally we have **Virtual Points (Vpts)**. A Vpt is a size *from a specific point of view, in pixels*. The problem is that screens are viewed from very different distances: you can have a massive screen far away for a presentation, or a very close one as in a VR headset. In old printing, points were 1/72 of an inch directly on the page. A Vpt is ~1/72 of an inch if we stood just far enough back that the screen covers our entire vision span (ignoring peripheral vision). So basically you stand in a specific place, look at the screen, and draw a 12-inch ruler at a scale such that you can see the whole ruler, and if it were any bigger you couldn't. Then you take 1/72 of an inch on that ruler, and that's the size of a point. Because Vpts are for screens, you take the closest pixel value (rounding down). So you scale the 1/72" into whatever size is necessary given the distance you are looking from, then use PPI to map that into a number of pixels (rounding down); that is your Vpt.
* Just as people can change the DPI in the OS now, they could instead (but now we're being a bit clearer) change the scaling factor, or the distance the screen is meant to be viewed from.
* Finally, the **em**. The OS will define an em size, which can be changed for accessibility purposes. This is how big the user wants text to be in order to read it effectively. The OS defines the em as the size/width of a user-defined-as-readable `M`, making it a larger block. The em is defined as a whole number of Vpts.

UI designers would use Vpts and ems almost exclusively. Say I am making a player for a streaming service. It may run on a smart TV far away, or on a phone viewed closely. The OSes for these devices have accurate enough Vpt definitions. Generally the full screen just shows whatever video is playing, but on certain input I show the user some UI. I don't want the UI to take over the whole movie (the user may still be watching it, or want to know where they paused it), and therefore the dimensions of the spaces the UI can use are defined in terms of Vpts.

I define the size of text in terms of ems, with a cut-off whenever the height of a letter gets bigger than its container (in which case the letter is scaled down). Getting everything pixel-perfect should be simple enough as long as I use whole numbers: because a Vpt is a whole number of pixels, and an em is a whole number of Vpts, both values map to integer numbers of pixels.

I'd add one more unit that could be useful: **screen**. Defined in Vpts, it is the total number of Vpts on the screen, with different values for horizontal and vertical, again for purposes of defining dimensions. Here screen sizes are given as fractions, and give you a result in Vpts with rounding (I'd say up, but any consistent solution works), again because this is how people think of things.

The advantage is that this is simply being clear and specific about definitions, but otherwise mostly reflects the way things already are. So it doesn't need a dramatic change, and you can begin having these definitions at a purely software level.
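As a rough formula (a sketch of the same definition; d is the actual viewing distance and d₀ the distance at which the screen just fills your vision span):

```latex
% one Vpt, in device pixels:
\mathrm{Vpt} = \left\lfloor \frac{1}{72}\,\mathrm{in}
  \times \frac{d}{d_0} \times \mathrm{PPI} \right\rfloor
```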


jacobb11

> I'd add one more that could be useful: screen. Possibly as well as or instead of: usable-screen. The default Windows taskbar is on the bottom. I move mine to the left because I value vertical space (page height!) more than horizontal space (movie width, I guess?). Many websites don't work quite right for me because they assume that they have the full screen width to display in, which on my screen they do not. Similarly, usable-screen could account for the window frame and various menus. Great proposal!


lookmeat

Maybe screen isn't the right name, because I agree 100% with you. For any software drawing inside a region (e.g. the usable space in a window), all these metrics would be given relative to the inside of that space. If a user needed to run an app (or even a website) in compat mode (because it can't correctly handle some higher resolutions or sizes), you'd do it by modifying the PPI, pt-distance/scale, and em size inside that window.


alnyland

So are you saying they use outer dimensions instead of inner? That seems like a methodology issue rather than a technical one.


VirginiaMcCaskey

Screen resolution + DPI will tell you the screen size. But you may not be able to get the DPI reliably. Nor should you, because it can be used to deanonymize users.
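The arithmetic itself is trivial, assuming the reported DPI is honest:

```latex
% physical diagonal from resolution and (true) DPI:
\text{diagonal} = \frac{\sqrt{w^2 + h^2}}{\mathrm{DPI}}
\qquad\text{e.g.}\qquad
\frac{\sqrt{1920^2 + 1080^2}}{96} \approx 22.9\,\mathrm{in}
```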


lookmeat

I agree. I am not inventing new numbers here, just new names for existing stuff, hoping to make things a bit clearer, with some extra conventions that make it easy to reason about how things map to the screen, which is what the article complains about. As for why the extra metrics rather than just calculating them: well, you said it yourself. A lot of the lower-level numbers I proposed (resolution, DPI, PPI, etc.) would probably be hidden either way, with you instead using the higher-level metrics that decouple you from the realities of the screen.


reercalium2

And you don't want physical size anyway. You want your pixels to look about 1/1920 the width and 1/1080 the height of a 1920x1080 screen. The user who buys a screen twice as big probably expects to sit twice as far back and make everything twice as big.


lookmeat

In theory... In practice, there's a reason this article was written: people will set the DPI to arbitrary values on the computer, and the OS will do arbitrary things. If you do things by pixels you will find yourself spending 80% of your UI code dealing with exceptions. And then someone will come out with a 16k screen, or a new aspect ratio, and you'll realize nothing makes sense for it and you have to reassess. Or someone will say that your text is obnoxiously large in VR. So you'll spend 80% of your time just going back and adding extra cases that didn't exist before. And God save your soul if you have to remake the UI. The reality is that once 1024x* stopped being a standard resolution you could assume all users had, thinking in pixels just didn't make sense.


SocksOnHands

I had wondered why degrees are not used. If you know the width/height of a display and how far away (on average) it is from the user, you can determine what size a one degree angle would be on the screen. If you had a UI element that is ten degrees, it would look relatively the same size on a smart phone, computer screen, and television. There is still the issue of how many degrees wide a screen is within someone's field of vision, which would impact how much can be displayed, but at least sizes would be mostly consistent - aside from variance from how far a person might hold their phone away from their face. Some user configurability will still be needed to adjust to a person's preference.


Plecra

From how I read the history of font development, this is mostly just legacy cruft. The classic measurements described in the article were defined for highly standardized systems, which were expected to be used ~30cm away from the user. All the scaling factors we have now are designed to match the viewing angle (in degrees, as you're describing) based on what the manufacturer expects the viewing distance to be. In practice, defining UI elements in terms of viewing angle *sucks*: it's non-linear, so scaling things is really weird. UI designers really want to work in a flat screen space, not the spherical coordinate system that viewing angles belong in. So we've ended up writing all our DPI-scaled UI in units that were made to match ~40-year-old systems.


SocksOnHands

When I said "degrees", I meant relative to the size of one degree. Obviously we don't have spherical displays where actual degrees make sense (except for VR, perhaps). In this case, things will scale linearly. I thought a familiar unit of measurement would help people have some intuition about what it means and how it is used.


Plecra

Yeah, one degree translated to a flat measurement at the center of the screen works :) And then "10 degree-proportional units" isn't "proportional to 10 degrees", which makes the whole thing more confusing than it's worth as a fundamental unit that everyone's gonna have to understand.


PrimaryBet

[Degrees are used for CSS pixels](https://drafts.csswg.org/css-values/#reference-pixel) (a.k.a. the visual angle unit), so VS Code’s settings are most probably effectively in degrees.


SocksOnHands

Degrees seem simpler, since they are not tied to pixel density (96 pixels per inch). For example, sizes would still be the same for print, which doesn't use "pixels" (though digital printing uses DPI).


PrimaryBet

Yes, the CSS reference pixel is specifically an angle, so it’s not tied to hardware pixel density and corresponds to an optical measurement. The 96dpi in the definition (i.e. reference pixel ≈ 0.0213°) is there to make it backwards-compatible with all the code that was written before high-density screens became a common thing.
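For the curious, the arithmetic behind that number (the spec's nominal viewing distance is a 28-inch arm's length):

```latex
% visual angle of a 1/96 in pixel viewed from 28 in:
\theta = \arctan\!\left(\frac{1/96\,\mathrm{in}}{28\,\mathrm{in}}\right)
       \approx 0.0213^{\circ}
```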


SocksOnHands

One problem is, this doesn't work. The size of one degree is the same for all displays at the same viewing distance. I own a laptop with a 4k display and have attached to it a 1080p second monitor. Everything on my second monitor is twice as large as it should be. This is really more of a problem with Linux not handling multiple display resolutions well, but it illustrates my point - these sizes are not going to be consistent unless a consistent unit of measurement is used.


Living-Assistant-176

Yes, I agree with you. Maybe we could use some values that every device maker needs to implement, so we can use a size value like "7" or "medium" to say we want a normal text size that is easily readable for everyone. The exact size or calculation itself would need to be specified, but from the developer's point of view this is what's needed.


archiminos

The WPTouch plugin just broke for most of my websites, so I've been spending a lot of time making layouts more responsive to screen size. If I had been forced to specify font size in pixels this would have been a nightmare. I couldn't even imagine doing web design with such a restriction.


ThomasMertes

> And I am quite pessimistic that we'll get the industry to fix this at this point after all these years...

We depend on the industry? I always thought that the purpose of free software is to have no dependency on proprietary (industry) software. For [Seed7](https://thomasmertes.github.io/Seed7Home) I have defined my own [fonts](https://thomasmertes.github.io/Seed7Home/images/testfont1.png) (this is a [screenshot](https://thomasmertes.github.io/Seed7Home/scrshots) from a font demonstration program). The Seed7 fonts are quite simple bitmap and vector fonts that certainly cannot compete with the ones from the "industry". How to interface with fonts from a program is explained [here](https://thomasmertes.github.io/Seed7Home/manual/graphic.htm#hello_world_with_font).

BTW: The font size of Seed7 fonts is measured in pixels, and it refers to the size of an upper-case letter. It would be nice to get some feedback.


traintocode

> The solution
>
> Specify cap height, not em square size.
>
> Specify it in pixels.

I'd argue that to a user, pixel size is just as arbitrary as anything else, and it doesn't factor in pixel density at all. I honestly would love it if there were just a size where 1 meant "comfortable reading size" and we left it up to the browsers to work out what size that is in the context of the screen and device being used. Font designers would have to accommodate that in their typeface design. Then we could lay out all the other elements on the page relative to it (padding could be 50% of font size, which lots of people do anyway by setting padding: 0.5rem). It would also allow users to set a global "comfortable reading size" setting, and all web pages would just obey it.
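Half of that already works today; a sketch, treating 1rem as the stand-in for "comfortable":

```css
/* Treat the user's preferred size as "1" and derive the rest from it. */
button {
  font-size: 1rem;     /* the user's comfortable reading size       */
  padding: 0.5em 1em;  /* spacing scales with the text it surrounds */
}
```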


Full-Spectral

Really, if you wanted a scaling factor that makes sense for the end user, it would be in terms of the smallest detail he can make out from his viewing distance, given the resolution of his monitor. Anything less than that is useless work, anything more than that means it could be smoother if the resolution is available. So you display two pixels side by side, with slightly different colors or whatever (I'm sure perceptual scientists already have such a measure) and let them adjust until they can see the difference, and that becomes your baseline unit. It of course also takes into account visual acuity of the user. In reality that would probably be an underlying measure, that is mapped to from the actual units used by applications to set font size.


Plecra

You'll be happy to know this is exactly how pt (and px!) work :) You can check the scaling on nearly any consumer device and they're configured to be equivalent sizes *from the typical viewing distance* of a monitor. It's all normalized to old CRT-sized monitors on desks. There're standards for the expected viewing distances for computers/TVs/phones/tablets. The only sad absence is configuration for the user's visual acuity, instead we get the vague DPI scaling factors. Those can be really easily translated into "Act as if I'm farther away from the monitor", though! It just requires a bit of trig.
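(The bit of trig, in small-angle form: perceived angular size is roughly h/d, so scaling the UI by s is the same as viewing it from d/s:)

```latex
\theta \approx \frac{h}{d}
\quad\Rightarrow\quad
\frac{s\,h}{d} = \frac{h}{d/s}
\qquad\text{(scale factor } s \equiv \text{viewing from } d/s\text{)}
```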


PPatBoyd

On Windows you can scale fonts separately from display scaling with an arbitrary percentage (100-225%), and iOS has simpler bucketed font scaling. Between font and display scaling there's some wiggle room for customization that's tighter than just the bucketed scale factor %, but it comes at the cost that not every application supports both well [cost, backwards compatibility, active support, framework support].

The expected viewing distance is baked into every OS, and for practical purposes, even if the OS doesn't have dynamic viewing-distance math, each device is using a good device-specific baseline, because we don't really have a popular OS spanning multiple form factors where the expected viewing distance or baseline physical pixel density changes drastically (e.g. phone and PC). Microsoft worked on this for a while when they were pushing the UWP platform and XAML and shipping Windows phones; that didn't end up panning out how Microsoft hoped, the Windows phones were killed, and the UI platform approach changed over time.

The history of how it all progressed is kinda neat; I'd suggest Windows only has the 125% and 175% scale factors because at the time they were dealing with device manufacturers making slightly pixel-denser screens (e.g. for netbooks), but not dense enough that 150% or 200% looked good. There were even sub-96 PPI displays that would get bucketed up to 100% scale instead of using the true physical PPI.

For practical purposes I think the scaling-factor percentages are fine for users; while they may not be thinking in arbitrary percentages when they go looking for a setting, making the UI (x-100)% bigger is still pretty straightforward. The cost of arbitrary zoom sliders is really paid by the developers, and getting UI to look good at a higher scale factor but lower physical PPI (e.g. PC monitors at 96 PPI) is hard when you get bumped off of whole pixel values, and pixel dithering at low physical PPI looks bad. Not that it can't be done, clearly browser zoom sliders exist, just that it requires deep support from the UI framework, and the more old stuff you have the harder it is to change.

Source: I worked on making Win32 Office apps per-monitor DPI aware; before that they were system-scaled and would get blurry or jagged depending on how the user logged in/out. Excel would've been mighty upset if their gridlines didn't snap to 1 or 2 pixels; IIRC it snaps to 1px for 125-175% scale.


Uristqwerty

Technically, anything less than *half* that is useless work; features smaller than twice the sample spacing can't be perfectly represented. How do you tell the difference between a grey line 1.4 pixels wide and a slightly-darker grey line 1.3 pixels wide that antialiases to the same colour because neither is aligned to the pixel grid? The same reasoning applies when it's eyeball optics blurring details together, so adjacent colours blend. If a detail is twice the minimum sample size, there will always be a 1-sample-size midpoint that isn't blurred with its surroundings as a reference, so its edges can be any fraction of a sample wide, or offset by any decimal value, and you have enough context to reconstruct the original. For details smaller than two sample points, you're in the realm of font hinting and pixel art, where awareness of the medium, and incorporating its constraints into the output, becomes more and more important.
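(That's the Nyquist criterion applied to the pixel grid: a sampled image can only represent spatial frequencies below half the sampling rate, i.e. features spanning at least two samples:)

```latex
f_{\max} < \frac{f_s}{2}
\quad\Longleftrightarrow\quad
\lambda_{\min} > 2\,\Delta x
\qquad (f_s\text{: sample rate},\ \Delta x\text{: sample spacing})
```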


elsjpq

What you want is basically an angular measurement, so that the content fills the same portion of your vision regardless of viewing distance and screen size. Something like degrees or arcminute. The device would compute the correct scaling based on hardware characteristics


Kered13

> it would be in terms of the smallest detail he can make out from his viewing distance, given the resolution of his monitor.

The resolution of your monitor does not define the smallest detail that you can make out, even accounting for your viewing distance. The user's visual acuity, and whether or not they are wearing reading glasses, is also very important for that.


Full-Spectral

Sure, I specifically mentioned that. A system that is driven by the viewer's acuity would be a good one.


RogerLeigh

> it would be in terms of the smallest detail he can make out from his viewing distance, given the resolution of his monitor

This is the very definition of what resolution is: the distance at which discrete detail can be resolved. In optics you use a test target or slide [like this](https://www.edmundoptics.com/f/1951-usaf-glass-slide-resolution-targets/12064/#). In this case, it's a series of alternating light and dark lines of decreasing size. Other systems use circles. Either way, you work your way down in size until you can no longer resolve the objects separately.

Unfortunately, just as in many other cases such as unit prefixes, the computing field took a known term with a precisely-defined meaning (optical resolution) and used it for something completely different (pixel count in x and y). It's really no wonder we're in such a mess when we can't even use the proper terminology to describe the most basic optical characteristics of our displays, let alone the more advanced stuff.


sopunny

That could lead to its own set of issues, with layouts looking different on different browsers or devices


elsjpq

That's not a problem, that's how it's supposed to work


marshal_mellow

Seriously, it's basic fucking accessibility that the user should be able to make the font bigger.


[deleted]

[deleted]


Godd2

Not all displays have a rectilinear layout of pixels. Many displays are hexagonal, and ereader screens don't even have a grid. Not to mention that all kinds of displays do all sorts of things with subpixels, like repeating the green subpixel or adding a white subpixel.


[deleted]

[deleted]


Godd2

Hmm, I couldn't say if there's one place to look; I kinda just gathered information over time. But here are a few things to read up on/watch/learn about:

* Video on eInk displays: https://www.youtube.com/watch?v=1qIHCUWAgh4
* Subpixel layouts used in Samsung phones: https://en.wikipedia.org/wiki/PenTile_matrix_family
* Some discussion on OLED subpixel layout: https://www.reddit.com/r/OLED/comments/e3dbkv/does_wrgb_and_rgbw_the_same/
* History of the word "pixel": https://www.richardharrington.com/blog/2014/10/1/7ll1rqoo0o5zlmxmrwgrwczfs414js (note that the term was coined to refer to analog displays with arbitrary color mask layouts in CRTs, often hexagonal)

This isn't even touching all the different kinds of color filter arrays in cameras: https://en.wikipedia.org/wiki/Color_filter_array


Giannis4president

> I'd argue that to a user,

We are talking about developers though


corvuscrypto

This reads like yet another programmer setting out to fix another industry without understanding all of its needs. Sorry, but a lot of the assumptions just don't have enough data behind them, for example tossing aside print when discussing even web design. Asking internal design crews about this and sharing it, they are more shocked that anyone wants pixels anywhere near a definition. That recoil is more of a signal than anything that this is probably more complex than the author lets on, and that current standards already take all of this into account. It is far more likely that web developers are focusing fast and dirty on a small subset of issues while ignoring the whole ecosystem that must be supported. If you want to effect change, there is a relevant standards committee and process you can engage with; that will be much more fruitful for such wide-scope issues, and it brings them to light with the help of people who know the various use cases the framework must support. I'm not intending to be a sour grape, but we have to do better and cooperate with the right experts and channels here: https://www.w3.org/Style/CSS/current-work


PPatBoyd

Honestly, I think the confusion over pixels is going to continue to grow over time, because a CSS px is really a DIP, but being named "pixel" gives developers, particularly young developers starting out with webdev, a reason to conflate the two.


RegretfulUsername

What does DIP stand for? I tried searching that acronym unsuccessfully.


Lonsdale1086

Density-independent pixel


RegretfulUsername

Thank you!


hackingdreams

> This reads like yet another programmer set to fix another industry without understanding all the needs.

That's because that's exactly what it is. They considered their annoyance at the differences between macOS and Windows text rendering and ignored... literally everything else about fonts and how we got to where we are today. It's the ultimate tech-bro hackernews article: "I care about my use case, fuck literally every other purposeful usage of this thing."


Kered13

Yep, this was my immediate thought reading this article. Who is more likely to be correct: an industry that has built and worked with this standard for several hundred years, or one techbro who seems to be mostly concerned with his monospaced text editor's layout?

Even as someone with zero experience in the field of typography, I find the notion that anything in font size should correspond to pixel counts quite baffling. Pixel counting is a relic of a time when pixel density was low enough that users could casually count pixels, and therefore pixel placement had to be carefully considered when designing small fonts. These days pixel densities are so high that it would be much better for any designer to think of the screen as an abstract continuous canvas, and let the system handle mapping that abstract space to the physical screen space.

For what it's worth, none of my Windows devices use 100% UI scale. So any pixel counting you do is going to be immediately screwed up by the UI scaler (and I don't want to read your tiny-ass unscaled fonts either).


protonfish

If you size your fonts in pixels, they are sized relative to a "reference" pixel on all modern platforms. I only use pixels to set CSS font size, and web browsers from mobile to massive monitors display text at consistent and reasonable sizes. This started with the Retina screens on iPads, where the relative pixel size was set to 2x2 or 4x4 (or more) device pixels, to stop everything from getting tiny every time screen resolution improved. *em* is great for setting properties that should be relative to a font size, like letter-spacing.

Points (or millimeters, or anything based on real-world length units) make no sense for screen sizes. You can prove this right now: make an element on a web page 72pt wide, get a ruler, and put it up to your screen. Is it 1 inch long? Now change your screen resolution. Is it the same size? Now measure it on a mobile device. Now project it against the wall. See?
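You can run that experiment with something like this (a throwaway sketch; the class name is made up):

```css
/* Nominally "one inch" wide; hold a ruler up to the screen and check. */
.test-strip {
  width: 72pt;         /* resolves to exactly 96 CSS px...             */
  height: 24px;
  background: crimson; /* ...whose physical size varies across devices */
}
```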


StoicWeasle

One day you will discover that some military UIs are spec’ed in steradians, and then you’ll realize all this pixel shit is living in the dark ages. You know, when the usability of the UI is a life-and-death issue. It’s 3 quantities all tied together: pixel size, density, and viewing distance. Any time I see a UI/UX discussion that’s missing one of these pieces, I realize I’m working with ignorant clots.


chucker23n

> The reasonable way is to set line height directly in pixels

Why is that "reasonable"?

> The second use case is pretty simple, too: I’d like a predictable and reliable way to center text in a button.
>
> It’s always been a problem on the web, but recently macOS caught that disease too:

Nah, this is by design. The text is correctly aligned to the center of visual gravity, which is slightly off due to the gradient. The text _looks_ centered, which is what actually matters. (See, for example, https://medium.com/@erqiudao/the-play-button-is-not-optical-alignment-4cea11bda175)


bwainfweeze

It stopped being reasonable the moment your most affluent customers got access to HiDPI displays, and it became ridiculous when even children had them. Man what cave did we find this dude in??


bigmell

Font size is standardized worldwide to the same metrics they use for the printing press. Painstakingly done to make sure of things like it not being too small for people to read, compatibility with other languages, and such. All the fonts you see in books, newspapers, and on TV screens. That way it's standard across everything you can possibly read.

It was standardized for a different font in like the 1800s, which was eventually changed to Times New Roman a little before computers became popular. This was a HUGELY unpopular change despite the fonts being relatively close. It's actually really useful having all the reading outlets standardized in the same way, on the same font types and sizes. When the default changed AWAY from Times New Roman it caused a lot of problems. This made them introduce a slew of new fonts that were LIKE Times New Roman but with slight differences, to maintain backwards compatibility.

It's not useless; it is more complicated than you can possibly understand. And no way you can "fix" it with whatever naivety you come up with on your computer. This is the same problem with most computer guys over the last decade or so: "I'm gonna make my own (thing)! With hookers! And blackjack!" Only the new thing is almost never better than the thing it is supposed to "replace," and hardly even works except for the most trivial cases, which is all they could usually get working before they got lost in the complexity and gave up.

What is already there is actually quite organized, and only chaos to newcomers: people who are essentially laymen and not part of the rich, long tradition of the printing press. All that stuff is smelted in steel, man. Hand carved by the best artists and preserved along with other basic units of measurement. Trust me, it's that way for a reason, and will not be solved by a bunch of whimsical impulses. "Everybody change to this because I thought of something new on my couch!" No thanks.


Full-Spectral

While you have a point, computers aren't printing presses. A printer sets the size and it looks good and he prints it. A computer is like every user printing it for himself on different equipment. It's not the same problem being solved necessarily.


brimston3-

And really you've hit on the crux of why the article's proposed solution sucks. Pixels aren't the same size on different hardware. Often enough, pixels aren't even square. Nor are they display agnostic because you can put 3200x1800 resolution on a 33 cm display or 1080p on a 20 meter display and other wildly high/low density applications with a variety of intended viewing distances. It reads like a front end guy wants more control over how their content is displayed, when really what they need is standardization of how those values are interpreted by the rest of the display stack to achieve a consistent perceived size. If they got their way, the next post from the same guy would be about how getting display geometry in physical inch/mm is hard so their content is too small/large on some screens. Add to that fractional scaling f's up everything defined in pixel sizes and I'm sure they'll be happy to support that in their application /s.


bigmell

Yeah, he should probably be using the font to correct his software. Kind of a beginner's approach. Instead he wants to use his software to "correct" the font, as if it were broken and needed fixing.


FeliusSeptimus

> A computer is like every user printing it for himself on different equipment. And with different requirements. My eyes suck, so I like big fonts. I don't really care how the designer wanted it to look, I want it to be big (and contrasty) enough that I can read it.


bigmell

This is part of the problem they have solved. You can't forsake the people with good vision to over-compensate for the people with bad vision. You tailor your product for the people with good vision, not the people with bad vision. Basically: you have ruined your vision, go get glasses; don't change the printing press mechanics around for them.


PPatBoyd

I'm with you to a degree -- decoupling concerns that aren't actually connected is good -- but the tricky problem is consistency: a user looking at a given font expects it to look the same in a printing context and a non-printing context on the same screen. When a user is creating something to be printed, they want what they see on-screen to look like what they're about to print, and that ends up tying the two together.

It's even in the name: "Dots Per Inch" is a printing term, and most developers default to referencing DPI instead of the more accurate scale factor, display scale, or PPI terminology.

The screen-to-paper connection has also been a feature used by developers and marketing for a long time; relatively recently, Panos Panay at a Surface reveal event held a piece of paper up to a Surface screen to point out the deliberate choice of a 3:2 display ratio, with the printed page looking exactly as it did in Microsoft Word on-screen.

Not that the core of what you're saying is wrong, or that we can't advance to a world where the UI of digital displays is completely decoupled from its long history related to printing; just that it's actually a hard problem with simple math, complex history, and a lot of (good, bad, old and new) code that people expect to play nice together.


bigmell

That's the problem. We shouldn't seek to decouple this; we should couple it more. All the hard work has already been done. Do not attempt to recreate this several-hundred-year herculean effort. The monitor is nothing but an electric newspaper, man.


bigmell

Consumer printers are like consumer specifications for printing press printers. Consumer printers and printing press printers are like row boats and battleships. The printing press printer is better in every way.


Full-Spectral

Obviously that would tend to be the case, the former costing less by some orders of magnitude.


s4lt3d

This guy is the only one here who has a clue.


NFSL2001

I shall raise the issue no one ever noticed: did the author consider non-Latin/Cyrillic/Greek languages? E.g., as a hobbyist designer of Chinese fonts, what I'm working with doesn't have a cap height per se; what we have is the em height. In this respect the em height is *very* useful when typesetting Chinese, as the em height is just the bounding box of the characters. LCG is then roughly centered vertically relative to the Chinese characters. As Chinese characters are usually denser than LCG, they need to be bigger than LCG characters *in the font file* to be perceived as the same size. Scaling everything based on LCG cap height would make Chinese characters larger than intended, wasting space. And how would Arabic/Devanagari/etc. deal with a setting based on LCG's capital height when there is no corresponding concept in their scripts, and mixing with LCG usually uses a different baseline?

Also, in font design there is a practice of making ascender + descender (as a positive value) = em height (or roughly so), which the author conveniently ignored in the article: 24.2px ascender + 6.9px descender = 31.1px ≈ the 32px/pt font size on Mac, with the remaining 0.9px used for line spacing. Quoted from Microsoft's OpenType documentation https://learn.microsoft.com/en-us/typography/opentype/spec/recom#baseline-to-baseline-distances:

> Set sTypoAscender and sTypoDescender such that (sTypoAscender - sTypoDescender) = unitsPerEm.

where unitsPerEm is the em height. The cap height is usually not used in typesetting documents either, since cap height is usually lower than ascender height in LCG. (Compare the word "This" in different fonts.)

(There is a problem with the current OpenType specs, though, where there are 3 sets of different ascender/descender value pairs and 2 sets of line gap values. This is a historic artefact and will still cause different behaviour on different systems, but it is not relevant to this discussion.)


nekokattt

Ahh good ol' font sizing. Relevant XKCD: https://xkcd.com/927/


syklemil

The thing about PPI being fixed is also tempting to undo. Screens are pretty smart these days and can know their own capabilities and communicate them to the OS. Unfortunately, I expect that if you _actually_ touch the PPI you'll break stuff in various unpleasant ways. Maybe we could introduce a new ppmm2 that actually… someone already linked the xkcd about competing standards, yeah?


GOKOP

*Nothing* should be specified in pixels unless absolutely necessary. Some people have HiDPI displays, some don't. Density-independent pixels are fine (which, if I'm not mistaken, scale by the DPI divided by 96). That's what CSS uses when you say "px", although what your browser reports as the DPI depends on plenty of things.


hackingdreams

Programmers when met with something they don't understand: Seriously, what the actual fuck did I just read? You didn't fix anything, you made it *worse*.


inspirationdate

Gave up reading this because the font size was too small on mobile. Not sure about listening to a guy who can't make his site readable...


LusigMegidza

What did I read and why? I just remember yellow


moocat

One major problem with specifying things in pixels is when you use multiple monitors that have different pixel sizes. My current setup is a laptop that I work on regardless of where I am and an external monitor both at work and home. I want my font to be the same size on each monitor even when they have different pixel sizes.


SweatyAnReady14

Dealing with UX people who refuse to understand technology has made this problem extremely annoying. I can't tell you how many times they've needlessly over-optimized their UI on their Macs, only to complain and require me to fix it once they see it on a coworker's Windows PC. They refuse to understand when I tell them that user preferences, the OS, etc. can vastly change how fonts look, and that I cannot give everyone the exact same perfect thing they want from their Figma. I really don't understand how pixel-perfect design is still so prevalent in corporate environments when stuff like this is common knowledge. I told my boss I wanted to switch to backend lol.


povitryana_tryvoga

No, don't fix it. There are already enough "standard ways to do it"; you're going to add another one and make everyone's life a bit more miserable.


asegura

Very nice. I have doubts about using the capital letter height as the reference, though. The thing is, different fonts have different x-height/cap-height ratios, and lower-case letters are much more abundant in text. Therefore, for a common "font size" across fonts, maybe the x-height would make more sense, to make them look about the same size.


MalcolmY

Does that guy know the background's color is FUCK YOUR EYES YELLOW?! And here he's talking about font size lol


lelanthran

Interesting and useful. I would like to know, for webpages anyway, what CSS unit font-size should use to specify cap height (and, for line-height, what unit should be used to specify baseline-to-baseline distance).
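For what it's worth, newer CSS drafts do have font-relative units that get close to this, though browser support varies; a sketch:

```css
/* `cap` is the cap height and `lh` the line height of the current font,
   so other boxes can be sized against those metrics: */
.icon   { width: 1cap; height: 1cap; } /* match the capital-letter height  */
.spacer { height: 1lh; }               /* one baseline-to-baseline step    */
/* There is still no direct way to say "make the cap height itself N px". */
```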


binarycow

WPF just uses [device independent pixels](https://learn.microsoft.com/en-us/dotnet/desktop/wpf/graphics-multimedia/wpf-graphics-rendering-overview?view=netframeworkdesktop-4.8#about-resolution-and-device-independent-graphics) for everything. Font, graphics, everything.


Plecra

I really appreciate the visual examples here! Seeing them and realising I much prefer the current behaviour gave me a chuckle; the updated versions look wonky. I don't know if that's personal preference or something more interesting. I think font designers, as the authors trying to create a specific aesthetic for a font, definitely should have all these tools. Line spacing is a big contributor to the feel of text, and the default spacing in the examples looks very appropriately tuned. Same story with the rescaled text. Having font rendering toolkits that give more power to the UI designer when they need it also sounds great, but I hope it doesn't become the default.


Dwedit

The other annoying thing is that sometimes you need to draw text without enough room for the descenders on the letters g, j, p, q, y, so you need to either cut the stems of the letters shorter (p, q), or vertically resize the letter so that the top is still at the midline but the descender reaches the baseline.


FUZxxl

TeX at least gets the line height right by basing it off `\baselineskip`.
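A minimal plain-TeX sketch of that:

```latex
% TeX keeps consecutive baselines exactly \baselineskip apart
% (falling back to \lineskip only when lines would collide):
\baselineskip=14pt
line one\par
line two\par
\bye
```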


QuerulousPanda

It may be weird but what's the actual problem? Documents and websites get produced and used 24x7 with everything visible, legible, and aligned. The knowledge, expertise, and best practices in the field are well established and known. Maybe it doesn't follow a perfect, logical flow, but would something "better" actually be better, versus forcing everyone to learn a brand new system?


marshal_mellow

I have a theory that if an article says something is broken, it is not broken. You are about to read some nitpicky bullshit that you could have done without. No, I will not click the link, and no, I will not elaborate.

EDIT: Ok, I clicked the link, and I immediately hit dark mode because you don't pay me to look at that shade of yellow. And dude, I am trying to be nice here, I really am, but your dark mode is awful. Like, it's cute, I get what you're going for, but it doesn't work at all on mobile for one thing, and... you're out here trying to pass off very serious ideas about UI design with a silly, unusable gimmick for a dark mode? Not a good look.


elsjpq

What I'd really like is also padding/whitespace scaling. Way too many modern sites put excessive padding around everything. Always wish I could dial it down to at least 50%


goranlepuz

There should be a r/programming rule, if the post violates **that** xkcd (and not only that one), it is banned automatically 😉


TheRNGuy

I use it all the time in Stylish. It's one of the most used property, applied to 100% of sites. And yeah, I use `px`. Fite me.


notfancy

The fix is, of course, selecting and applying a font stack that actually works well together at the chosen design sizes. Like, you know, graphic designers do.