[rando] What is behind that pixel?
I have no idea why this occurred to me or why I'm ...wait... yes, still motivated to type it out.
From a computer's point of view, pixels get "bigger" as they get "farther away".
So, just like those piles of junk that cast hi-res, low-entropy shadows, so too can the computer build something like that, if there were ever a reason to.
There's no objective here, but...
Given a computer image (made from pixels), there are infinitely many things that could be constructed in 3D that, when viewed from the right "place", look identical to the photo.
You can imagine each pixel to be a sphere of color. If there's a red pixel, it could be represented by a nearby but small red ball, or a distant but big red ball.
If you cheated and you had perfect knowledge of a 3D scene and knew the location and direction that corresponded to the computer image, you could then fix the pixel-balls in all three dimensions.
If you have a red pixel that lands where a red barn is, and that barn is 100 meters from the location of the computer image's "camera" (and you know the resolution, so you know the angle each pixel covers), you can calculate that the ball is ...3cm in diameter.
If you have a white pixel that lands where a snowcapped peak is, 2km away, the same camera makes that white ball about 60cm in diameter: twenty times the distance, twenty times the ball.
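The arithmetic above is just "diameter grows linearly with distance." A minimal sketch, assuming a hypothetical camera where each pixel subtends about 0.3 milliradians (roughly a 2000-pixel-wide sensor with a ~35 degree field of view — numbers picked so the barn example works out to ~3cm):

```python
import math

# Assumed, hypothetical camera: each pixel covers ~0.3 milliradians.
PIXEL_ANGLE_RAD = 0.3e-3

def ball_diameter(distance_m: float) -> float:
    """Diameter a 'pixel ball' must have at this distance to exactly
    fill one pixel (small-angle approximation: angle * distance)."""
    return distance_m * PIXEL_ANGLE_RAD

print(ball_diameter(100))   # barn at 100 m  -> ~0.03 m
print(ball_diameter(2000))  # peak at 2 km   -> ~0.6 m
```

Change `PIXEL_ANGLE_RAD` and every ball scales with it, which is why "you know the resolution" matters.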
If you have a few more pics from other locations, you correspondingly have millions more balls.
Obviously, just for fun, the computer can now give you an image derived from this model at a new location. It can even have algorithms to fill in the gaps that appear as you move the perspective, letting you do some light reality-sparkling that can always be optimized yet more.
IT'S REAL FUCKING HANDY THAT THE COMPUTER HAD A 3D MODEL TO REFER TO IN THE FIRST PLACE!
Yes. Yes, a little too handy if you ask me.
How do we do all this without the reference model? nVidia knows.
Which leads your author off on another wild tangent.
I wonder if a tiny chunk of material (anything) can have identifiable light-related characteristics. I mean, yes. It totally can and even does, or maybe must. That's how we taste distant stars. But what is easy and cheap? We can use light from a whole spectrum of wavelengths.
Maybe, if you're mostly dealing with carbon and silicon compounds, you can hand-pick some techniques and put it all on a PCB.
If we point a light sensor at a 1cm square patch of a brick wall, for example, and run through our sequence of light probings, we could say "this square has this signature" — one we might expect to vary from square to square enough to uniquely identify each.
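One way to picture a "signature": the sensor's response at a handful of probe wavelengths, treated as a vector, with two squares called "the same" when their vectors point in nearly the same direction. A sketch with entirely made-up numbers:

```python
import numpy as np

# Hypothetical signatures: measured response at five probe wavelengths.
brick_sig  = np.array([0.62, 0.55, 0.40, 0.18, 0.09])
mortar_sig = np.array([0.70, 0.68, 0.66, 0.60, 0.55])

def same_square(sig_a, sig_b, threshold=0.99):
    """Cosine similarity of two signatures; 1.0 means identical
    direction in 'spectrum space'."""
    cos = sig_a @ sig_b / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))
    return cos >= threshold

print(same_square(brick_sig, brick_sig))   # the same square, seen again
print(same_square(brick_sig, mortar_sig))  # a different square
```

Whether real brick varies enough, square to square, to make this unique is exactly the open question in the paragraph above.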
So when we take another picture from another location that includes this same 1cm square and we manage to identify it again, we now have a triangle!
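The "triangle" can be cashed out directly: each picture gives a ray (camera position plus direction toward the identified square), and the square sits where the two rays come closest. A sketch, with two made-up cameras sighting a patch at (0, 0, 10):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest approach between rays p + t*d.
    Solves for t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|."""
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

point = triangulate(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                    np.array([5.0, 0.0, 0.0]), np.array([-0.5, 0.0, 1.0]))
print(point)  # the two rays cross at (0, 0, 10)
```

With real measurements the rays never quite cross, which is why this takes the midpoint of the closest approach rather than demanding an exact intersection.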
All interesting and useful, I'm sure you will agree. This ends the presentation.