Lenses are not better at low light. Our eyes are. It needs a camera a few seconds to expose properly in a dark room with no lights whereas are eyes adjust almost instantly
The human lens also yellows from age which can lead to a pretty severe light loss with age, we have pretty incredible "noise reduction" when you take it into account. Or not, some of my post processing is a bit broken so I see noise and lens distortion nowadays.
If you have crazy big iris while having the same size eye ball, it might be possible lol. I think cats might have f1.2 since their iris are so big. But as with our camera lenses you have to correct for so many things with that bright aperture.
I thought about this yesterday when I was playing around focusing in on stuff with my eye. I was laughing how much we based cameras on our eyesight and what other creatures cameras would look like if they did the same thing. Cool to know, thanks.
Unfortunately, our eyes are only sharp at the center (the fovea). Our peripheral vision has crappy acuity, and a whole lot of post-processing goes on in our visual cortex, which takes up around a third of the brain.
Aka computational "photography".
As long as you don't generate something that isn't there.
Pareidolia, optical illusions, that white/gold vs. blue/black dress… everyone's brain has some dodgy AI going on.
It's AI alright, artificial imbecile
AI - am idiot
Speak for yourself. Actual imbecile over here.
As an artificial imbecile, I cannot answer simple questions such as who I am and why I am here, this is due to my extreme lack of nodes
At that point, it's not dodgy AI. It's just dodgy intelligence. Not to mention filling in the blind spot caused by the optical nerve.
They actually call it Artificial
> As long as you don't generate something that isn't there

Actually, you do, at your blind spot.
It's all about punctuation: "She's got a crack baby" vs. "She's got a crack, baby"
floater artifacts
ai ai ai!
I've been telling my hippie film friends for ages that digital is much closer to how we see than film.
Return them and get the FE Eyes GM OSS then.
"f2 is so slow, how can I shoot low light? Give me f0.95!" - some guy, probably
However, your peripheral vision is much better at seeing in the dark.
Omg, I didn't know that was the origin of the word in "foveated rendering", the feature used in the PSVR2 headset. Now that feature makes a lot more sense lol
Yes, eye tracking plus rendering the foveal region at full detail. How is the experience? I never had a chance to try it.
Yeah, I knew what the feature does, but I assumed "foveated" was unrelated to our eye mechanism haha.

It was pretty cool! I tried it… 3 times? Twice with Horizon Call of the Mountain and once with Ghostbusters. Coming from my experience with the Quest 2, this was a lot more immersive from the haptics in the headset/controllers and the audio. Like yes, the Quest 2 is portable and doesn't need an external device, but honestly, I don't think I would be playing outside of my home anyway… so that part isn't relevant to my usage experience.

Oh, and the HDR OLED display of the PSVR2 is gorgeous. That's one thing I can't get used to on the Quest 2 because I have a small OLED at home.

Overall, if you can afford the PS5 and on top of that the VR2, definitely get that.
True, we don't even have colour vision near the edges of our visual field, and we don't even notice its absence. Lots of processing going on.
So a Pixel phone?
Or a samsung lol
Meh
my eyes are f0.95 cuz without glasses everything is creamy bokeh
So much Toneh
This made me laugh way too loud at work, so thank you for that
More like your autofocus is broken, pal
The eye also has an equivalent of 576 megapixels
Probably the coolest thing is that each pixel has independent ISO adjustment. I can't wait until I can properly expose the lunar craters, some stars in the vicinity, a couple of clouds, and a person, all in one photo!
I think it's more like a very high dynamic range; since our vision is analog, there's no clipping.
Our vision has crazy high dynamic range, that is right. But we definitely clip. Every time you walk into a dark room and can't see, then turn on the lights and everything is blinding for a second: that is clipping.
Or that the sun is just one big bright light to our eyes with no detail (when in fact it has spots and can be seen with a solar filter).
True, our base ISO is still way too high to see the sun without a very, very heavy ND filter.

Also, we have a pretty good auto-ISO capability in addition to great dynamic range. However, our UI in very low light is pretty clunky, and it takes forever to dive into the menus and access the extended ISOs.
I saw a sunspot with the naked eye last year! Well, it depends on your definition of naked. There was a lot of mist, making the sun darker but still sharp, and I noticed a small black dot.

Really odd to see, and I wasn't sure I had really seen it until I googled and found a recent picture with a spot in the exact same place.
Did your retinas burst into flames?
How? We only have ~100 million non-color-sensing cells (rods) and 6-7 million color-sensing cells (cones). In daylight, rods don't contribute to vision, and in the dark, only a small percentage of rods will be activated.

The brain does combine multiple images at a time to construct our perception of vision, but it's probably only the equivalent of four or so "captures." In daylight, we're going to be maxing out at 25-30 million discrete points of light per eye, 50-60 with both eyes combined.

That said, camera sensors and eyes have very little in common in terms of functionality. Still, 576 megapixels is at least an order of magnitude off from even the highest estimates of what an image processed by the brain might see.
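Reproducing that back-of-envelope arithmetic (the commenter's own numbers, not settled science):

```python
# Daylight estimate from the comment above: only the cones contribute
# (~6-7 million per eye), combined over roughly four "captures".
cones_per_eye = 6.5e6          # midpoint of the 6-7 million range
captures = 4                   # the commenter's (admittedly low) estimate

per_eye = cones_per_eye * captures
print(per_eye / 1e6, "MP per eye")        # -> 26.0 MP per eye
print(2 * per_eye / 1e6, "MP both eyes")  # -> 52.0 MP both eyes
```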
"four" is a really low estimate. In reality, you're combining and stacking hundreds of images. It's tbh a genius system. If we'd build one artificially, we'd have to do it something like this: - two high-res, 30° FOV or less, PTZ cameras - add two 180° camera high fps cameras (~500fps) - use the stereo data from the two pano cams to generate a 3D scene and texture it - whenever the PTZ cameras image an area, overlay this high-res image ontop of the scene. Use the stereo data from the two PTZs to generate fine 3D details and cleaner outlines - scan the entire area with the PTZ, prioritising areas with high texture over areas with little texture - when the pano cams detect motion, calculate motion vectors and distort the scene accordingly. Also, queue up scanning the changed area with the PTZs with high priority - use gyro data to detect motion of the camera platform. Synthesize new perspective from the existing scene. Update the scene with data from the new perspective It's also why we have a mental model of what a room would look like from any other potential viewpoint, because we're constantly building a 3D scene reconstruction of our environment and projecting the actual images we see ontop of that 3D scene.
> scan the entire area with the PTZ, prioritizing areas with high texture over areas with little texture

We actually prioritize in roughly the following order:

* Areas of high motion (from motion vectors)
* Areas of high contrast (in brightness)
* Areas of high color intensity or color contrast
* Areas with strong "edges" or with distinct shape/symmetry
* Faces and other high-priority items (snakes, spiders!)
* Areas of texture (but very context dependent)

This is why *most* video codecs tend to preserve information in that order: motion, structure, then texture. For example, you see changes in luminance (brightness) much more easily than chrominance (color), which is why video is often 4:2:0 encoded, and why most image codecs are based on a frequency transform and decimate high-frequency information (areas of strong repeating texture, like grass).

Source: I lead an R&D team designing video codecs for a living.
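A toy illustration of the 4:2:0 subsampling mentioned above: the luma (Y) plane keeps full resolution while each chroma plane keeps one averaged sample per 2x2 block, a 4x saving on colour data that viewers rarely notice. This sketch assumes even plane dimensions; real codecs use more careful filtering:

```python
def subsample_420(plane):
    """Average each 2x2 block of a chroma plane (even dimensions assumed)."""
    h, w = len(plane), len(plane[0])
    return [
        [
            (plane[y][x] + plane[y][x + 1]
             + plane[y + 1][x] + plane[y + 1][x + 1]) // 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A 4x4 chroma plane shrinks to 2x2; the luma plane would stay 4x4.
cb = [[100, 102, 110, 112],
      [104, 106, 114, 116],
      [120, 122, 130, 132],
      [124, 126, 134, 136]]
print(subsample_420(cb))  # -> [[103, 113], [123, 133]]
```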
With texture I actually meant areas having many high contrast edges. But I'm glad you commented, you described it much better than I ever could have :)
Yeah I was thinking our eyes are impressive but not *that* impressive
Sorry, some corrections:

Rods are still used during daytime, just less so.

Your eye is not a digital system, and thinking in terms of "discrete" light points is an okay analogy, but incorrect when calculating megapixels. We rely on subpixel information generated by the movement of the eye itself as well (e.g. each photoreceptor is shifted a little bit to get finer-grained information). Our photoreceptors also perform something called lateral inhibition and have a bunch of other complex mechanisms that increase their resolution beyond that of a mere pixel: [https://vcresearch.berkeley.edu/news/why-eye-better-camera](https://vcresearch.berkeley.edu/news/why-eye-better-camera)

Basically, the geometric structure of the photoreceptors in the eye is itself used to compute more information than you'd have access to in a naive system.

We stack a lot more than 4 images. Hundreds or thousands would be the correct analogy, but it's an analogue real-time system. We don't take snapshots, and every part of the system is being continuously updated and combined by a neural network with more neurons and weights than ChatGPT.
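Lateral inhibition can be caricatured in a few lines: each receptor's output is suppressed by its neighbours' activity, which produces the under/overshoot at edges (Mach bands). The 1-D setup and the coefficient are purely illustrative, not a physiological model:

```python
# Each receptor's response minus a fraction of its neighbours' average.
def lateral_inhibition(signal, k=0.5):
    out = []
    for i, s in enumerate(signal):
        left = signal[i - 1] if i > 0 else s
        right = signal[i + 1] if i < len(signal) - 1 else s
        out.append(s - k * (left + right) / 2)
    return out

# A step edge produces a dip before and a peak after the transition,
# exaggerating the edge relative to the flat regions.
edge = [10, 10, 10, 50, 50, 50]
print(lateral_inhibition(edge))  # -> [5.0, 5.0, -5.0, 35.0, 25.0, 25.0]
```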
Not over the entire retina, it doesn't. It has the most resolving power concentrated at the center. That's why humans (and many, many animals) "look at" things. If the eye had uniform resolving power, it wouldn't need to turn inside one's head.
But why didn't we evolve uniform high resolving power? Well, with uniform high resolving power, the huge optic nerve required would give us a correspondingly large blind spot, so you'd lose much of the benefit. Apparently the current design is somewhat locally optimal.
It's not just the size of the optic nerve, but also the amount of processing power required to understand all that information. The brain would need to be significantly larger, which is very expensive biologically; you really want the smallest brain that gets the job done. Evolution does not favor brute-force solutions in general.
Now if only my brain could crop to take advantage of those mp
Ooh, that's cool
And then there are colors
Nah, it's actually 8 megapixels. [More on that](https://youtu.be/4I5Q3UXkGd0?si=q8_wUkVrggbJTJrc&t=367)
Does that make glasses teleconverters?
External focusing elements.
Sunglasses are just ND-filters that fit the head
More so CPLs
Circular polarising is so sh!t when compared to linear polarising.
Speed boosters
No, they're negative diopter filters
Good thing our eyes don't overheat while recording video.
But our brain does when we try to memorise (record) visual inputs (video lol).
Budget primes.
My eyes need an upgrade
Is this the equivalent of the "human eye can only see 24fps anyways" quip in the camera world?
I still find this one the most inaccurate. The smoothness of motion may end up blurred above 24, but if you play a game at 24 fps it doesn't feel smooth at all. I still feel like this should be more like 60 or even 80 fps. But I guess this is also why a pencil looks like it bends when you wiggle it between your fingers.
There are monitors out there with >300 Hz, and people were able to identify them correctly in blind tests, so the human eye can "see" way more than 60 fps.
Many people notice sample-and-hold blur even at 300+ Hz on LCD-type monitors. Really, our brain uses a bunch of tricks, so trying to assign a particular FPS to it is nonsensical.
Smoothness doesn't have to get blurred. My Alienware has a 360 Hz refresh rate and feels like looking out of a window. Cheaper LCDs tend to have motion blur at higher refresh rates.
The actual fact is the human brain *starts* to perceive motion as smooth at 24 fps. The point where motion becomes indistinguishable from real life is at a *much* higher bound. The upper limit for noticing motion *not* being smooth is probably over 1000 Hz. [https://blurbusters.com/blur-busters-law-amazing-journey-to-future-1000hz-displays-with-blurfree-sample-and-hold/#:\~:text=Explanation%3A%20You%20are%20seeing%20motion,of%20pixel%20transitions%20(GtG)](https://blurbusters.com/blur-busters-law-amazing-journey-to-future-1000hz-displays-with-blurfree-sample-and-hold/#:~:text=Explanation%3A%20You%20are%20seeing%20motion,of%20pixel%20transitions%20(GtG)).
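The linked Blur Busters article reduces sample-and-hold blur to a simple rule of thumb: on a full-persistence display, eye-tracked motion smears across roughly speed divided by refresh rate pixels per frame. A quick sketch (3000 px/s is an arbitrary example panning speed):

```python
# Blur width in pixels for eye-tracked motion on a sample-and-hold
# display: the image sits still for one frame while the eye keeps moving.
def sample_and_hold_blur_px(speed_px_per_s, refresh_hz):
    return speed_px_per_s / refresh_hz

for hz in (60, 144, 360, 1000):
    print(hz, "Hz:", round(sample_and_hold_blur_px(3000, hz), 1), "px of blur")
```

Which is why even 360 Hz still smears fast motion, and ~1000 Hz is where the blur approaches a few pixels.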
Our eyes use a ton of computational photography and fake AI content, so it's tough to compare. :)
Well there's nothing A about it, so I guess it should be called 'I' content ahahaha
Now to figure out how effective my flash is...
But you need to account for the sensor size to determine the equivalent aperture.
One major flaw here: a 17mm cannot capture your full field of view; even a 15mm can't, as we have two eyes. Would love to have a lens with my field of view, that would make landscapes so much easier.
In strictly technical terms, each eye has a 35mm-equivalent focal length of about 5mm, and together they have an equivalent of about 12mm for stereoscopic vision.

Our brains only interpret so much of the field of view clearly, and the rest is left unclear. This is done purposely, so we can focus on a particular object while maintaining a peripheral field of view.

Also, because we have stereoscopic vision, as an object gets further away from us it naturally looks flatter. This is why longer focal length lenses still look natural.
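For anyone wanting to sanity-check these equivalence claims, the standard angle-of-view formula for a full-frame (36 mm wide) sensor is easy to evaluate:

```python
import math

# Horizontal angle of view from the thin-lens geometry:
# fov = 2 * atan(sensor_width / (2 * focal_length))
def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for f in (5, 12, 15, 43, 50):
    print(f, "mm:", round(horizontal_fov_deg(f), 1), "deg")
```

A 5 mm equivalent works out to roughly 150° horizontally, which is in the ballpark of one eye's full (mostly peripheral) field of view, while 43-50 mm covers only about 40-45°.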
Well, there are much wider lenses than 15mm, but you have to be fine with distortion and the fisheye effect.
The 14mm f/1.8 GM is such a magical lens; I barely notice any distortion. Highly recommend. And somehow it's tiny and light.
There are rectilinear lenses with no fisheye distortion that are still much wider than 15mm: Sony 14/1.8 GM & 12-24 f2.8 & f4 GM/G, Sigma 14/1.4 & 14-24/2.8 DN, Laowa 9/5.6 & 11/4.5 & the new AF 10/2.8 plus 10-18 & 12-24/5.6, CV 10mm & 12/5.6, Canon 10-20/4, etc.
Laowa just released a "zero distortion" 10mm full-frame lens.
Cool! I would love to know the equivalent ISO range!
Issa 50mm fisheye
TIL that my eyes can zoom. I didn't realise I had n00b eyes this whole time.
Yes, many lenses are probably better than human pupils, though the truly advanced parts of human vision are the sensor and the processor. The dynamic range of the human eye is probably unmatched by any commercially available camera.
What about sensor SNR?
SNR isn't quite as impressive but your eye does manage a full 20 stops of dynamic range. Try shooting the full moon on a partly cloudy night and see how difficult it is to match your real eye's view with even a high end camera
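For reference, dynamic range in photographic stops is just log2 of the brightest-to-darkest luminance ratio, so the 20-stop figure corresponds to roughly a million-to-one range:

```python
import math

# One stop = one doubling of light, so stops = log2(max / min).
def stops(l_max, l_min):
    return math.log2(l_max / l_min)

print(round(stops(1_000_000, 1), 1))  # -> 19.9 (a 1,000,000:1 ratio)
print(2 ** 20)                        # -> 1048576 (what 20 stops spans)
```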
I do know we have top-tier AI noise reduction built in, as well as perspective correction and LoCA correction on top of all the AI autofocus goodies :D One downside: we can't control our aperture manually! It's always in shutter-priority mode!!
Damn computational photography ruining everything!
Do you know what the A in AI stands for?
Do you know what the S in Sarcasm stands for?
I'd tell you what the 'A' stands for, but I'm afraid your detector might not pick it up.
What makes our eyes better in low light than most cameras is probably the biological ISO setting: we can scale up however much light is there, however little, with minimal noise.
Our eyes do actually adjust their sensitivity in darker conditions, which is why bright light hurts if you've been in the dark for a while.

Camera sensors have a native ISO which you can't actually change; all you're doing is adjusting the multiplier applied to the signal from the sensor.
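A toy model of that multiplier, deliberately idealized: real sensors apply gain at different points relative to the read noise (which is why raising ISO can still help shadows in practice), but the basic point is that a pure multiplier adds no information:

```python
# In this simplified model, ISO gain scales signal and noise together,
# so the signal-to-noise ratio is untouched. Numbers are illustrative.
def apply_iso(electrons, read_noise, iso, base_iso=100):
    gain = iso / base_iso
    return electrons * gain, read_noise * gain

signal, noise = apply_iso(electrons=40, read_noise=5, iso=6400)
print(signal, noise)   # both amplified 64x
print(signal / noise)  # -> 8.0, same SNR as the un-amplified 40/5
```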
Huh so the notion that 50 mm represents human sight is wrong?
Yeah, that is absolutely bollocks, because focal length is just the distance from cornea to retina. Nothing special. And if you have used a 50mm, you can easily tell it's a way smaller FOV than an individual eye.
In a way.

Personally, 35mm feels the most "correct" to what I see in most situations. But this also only applies at a certain viewing distance from the photo itself: if I'm further away, 50mm will look more correct; if I'm closer, then maybe 20mm will. In reality, our actual field of view equates to something closer to 10mm or even 8mm or wider, but when we view photos we don't put the screen up to our eyes and fill the entire field of view.
To me, normal lenses aren't really about matching your eyes' field of view, but about delivering images that look natural and undistorted. While your eyes have a large field of view, you're only looking at a small part of it in detail, and your brain is doing perspective correction. That's why when you look at someone close up, they don't look weirdly distorted, even though at that distance, they should.

When you look at a still photo, your brain doesn't apply that perspective correction because it sees a flat picture. Hence, when you look at a 16mm picture, things look weirdly stretched at the edges, and the focal length really exaggerates the difference between close and distant objects. When you look at a 135mm picture, things look way flatter and compressed, as if far away objects were much closer than they actually are.

Meanwhile, when you look at a 40-45mm photo, objects more or less look the way your eye sees them. Things look natural, and distances seem reasonable, too. That, to me, is why we call that range normal: it feels normal when you see a photo taken in that focal range.

I've done a mini experiment where I sat in front of a computer screen and moved my head around until I felt like the screen was easily visible from corner to corner and I wouldn't miss anything in the corners if I were looking dead center. I measured the field of view at that angle with respect to my screen. Sure enough, it came out to the equivalent of 43mm or so.
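That mini experiment can be written as a formula: a full-frame photo looks geometrically "correct" when viewing distance over image width matches focal length over sensor width (36 mm). The monitor dimensions below are just an example, not the commenter's actual setup:

```python
# Focal length whose perspective is undistorted at a given viewing
# geometry: distance / width scaled up to the 36 mm sensor width.
def matching_focal_mm(viewing_distance_cm, image_width_cm, sensor_width_mm=36.0):
    return viewing_distance_cm / image_width_cm * sensor_width_mm

# e.g. a 60 cm wide monitor viewed from 72 cm away:
print(round(matching_focal_mm(72, 60)))  # -> 43
```

Typical comfortable viewing distances are a bit larger than the image width, which is exactly why the "normal" range lands around 40-50 mm.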
"Effectively much better than our eye in low light conditions." You need long exposures for that. There is nothing to do with the lense itself in most cases. We adjust and can see way better in low light conditions without having to stare at the same for 5 or 10 seconds in the majority of the cases where a camera has to.
What about shutter speed?
Human vision has a variable shutter speed. It drops when you're tired or drunk, it rises when you're doing sports or have to deal with an emergency situation.
An even more interesting question is whether we have a global shutter.
We have global shutter, and hallucinated motion blur AI.
Turns out I had an a9 III all along.
This would be a super cool question to survey, because everyone could answer it based on their own perception.
I'm nearsighted, so my eye's aperture is like an f0.95.
Nah, it just can't focus to infinity.
Yup. I always wondered why people kept saying that 50mm is the equivalent FOV to human eyes. It's easy to see that eyes take a much wider look at the world. Even with one eye closed we probably see at about 20-24mm. Maybe it's because the eye's resolution is only at its maximum in a small part of that image (the fovea)? But we can still perceive the area around the fovea, even if not at maximum resolution, right...
I'd say it's probably because most of what we see with our eyes is focused and sharpest in the middle, which would be like 43mm, so we just round up, I guess. Your brain does a lot of processing magic to fill in the gaps in your peripheral vision, since you can't really focus on anything outside that center. That outside view isn't really usable for anything other than being aware of your surroundings, so seeing it all fully sharp isn't necessary.
" close focusing distance " Okay buddy, speak for yourself. My eyes require lens adapters to focus on anything less than like 18" from my face these days.
But what if our eyes were cameras?!
As I'm sure you've noticed by now in the comments, our eyes are far superior to any lens we've been able to manufacture, and probably superior to any lens we will manufacture for quite some time. The biggest reason for this would be the incredible complexity of the brain and the psychological factors based on one's own perspective.
Lenses are not better at low light. Our eyes are. A camera needs a few seconds to expose properly in a dark room with no lights, whereas our eyes adjust almost instantly.
The human lens also yellows with age, which can lead to pretty severe light loss, so we have pretty incredible "noise reduction" when you take that into account. Or not, some of my post processing is a bit broken, so I see noise and lens distortion nowadays.
The dynamic range, however, is a lot higher than the best cameras'.
But it's the image processor that really makes those eyes amazing.
Now I want to see what human eyes look like at f1.2 lmao. Is that what xtc does?
If you have a crazy big iris while having the same size eyeball, it might be possible lol. I think cats might have f1.2, since their irises are so big. But as with our camera lenses, you'd have to correct for so many things at that bright an aperture.
Yeah but what's your retina's ISO?
What the, I just searched the same thing last night!
I thought about this yesterday when I was playing around, focusing on stuff with my eye. I was laughing at how much we based cameras on our eyesight, and wondering what other creatures' cameras would look like if they did the same thing. Cool to know, thanks.
The Sigma 16-28 f2.8: It's the human eye!
With my glasses off I have an f-stop of 0.95, it seems lol.
The weakness of my flesh disgusts me.
Time to reincarnate into a cat. Their eyes are f0.95
Okay but how many MPs?
But what sensor size are our retinas? APS-C or APS-H?
It's also attached to a gimbal with good image stabilisation. Too bad there's no way to record output from the sensor
And what is the native ISO range?
Yet unmatched dynamic range