Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Wednesday 22 August 2018

Designing cameras based on the algorithms, not the people, that will use them

Some friends and I used to joke that we should found a company, "Hughes Taylor Smith : Makers of Crap Telescope Optics". Precision mirrors are difficult, so we'd just make big, crappy mirrors but scan their surfaces reeeeeally accurately, so the resulting lousy images could be corrected in software.

Apparently, this is not quite as bonkers as we thought it was.

Originally shared by Event Horizon

University of Utah electrical and computer engineering associate professor Rajesh Menon argues that all cameras were developed with the idea that humans look at and decipher the pictures. But what if, he asked, you could develop a camera whose pictures are interpreted by a computer running an algorithm?

"Why don't we think from the ground up to design cameras that are optimized for machines and not humans. That's my philosophical point," he says.

A salient philosophical point, at that. In the march towards an almost entirely detached perceptual matrix, abstracted away from the vicissitudes of human wetware and into the pure linear reason of the machine, should we worry about the primary sensations and their categorisation and analysis? There is a lot of trust at play here, ultimately, but reducing vision to a task made autonomously simple for machine-level data collection and analysis is a sensible step to take.

Questioning the core methods and purposes of image acquisition is an example of innovation in which the axioms and primary assumptions of a system or a process are reconfigured in useful ways.

If a normal digital camera sensor, such as one in a mobile phone or an SLR, is pointed at an object without a lens, the result is an image that looks like a pixelated blob. But that blob still contains enough digital information to detect the object, provided a computer program is properly trained to identify it: you simply train an algorithm to decode the image.
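The article doesn't give any details of the decoding, so here's a toy sketch of the idea, entirely my own illustration: simulate the lensless sensor as a fixed (unknown but repeatable) scattering matrix, then train a simple classifier to recognise objects directly from the raw "pixelated blob" measurements, never reconstructing a human-viewable image at all.

```python
# Toy sketch (my own invention, not Menon's pipeline): classify objects
# straight from lensless sensor readings y = A @ x, where A is a fixed
# random scattering matrix standing in for the optics-free sensor.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_sensor = 16 * 16, 64           # scene size, sensor size
A = rng.normal(size=(n_sensor, n_pixels))  # unknown-but-fixed scatter

def make_scene(label):
    """Toy scenes: a bright square (label 0) or a cross (label 1)."""
    img = np.zeros((16, 16))
    if label == 0:
        img[4:12, 4:12] = 1.0
    else:
        img[7:9, :] = 1.0
        img[:, 7:9] = 1.0
    return img.ravel()

# Build a training set of noisy sensor "blobs".
X, y = [], []
for _ in range(500):
    label = int(rng.integers(2))
    X.append(A @ make_scene(label) + rng.normal(scale=0.1, size=n_sensor))
    y.append(label)
X, y = np.array(X), np.array(y, dtype=float)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise features

# Train logistic regression by gradient descent on the raw blobs.
w, b = np.zeros(n_sensor), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.01 * (X.T @ (p - y)) / len(y)
    b -= 0.01 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print("training accuracy on raw sensor blobs:", (pred == y).mean())
```

The point of the toy is that the classifier never needs the blob to look like anything; as long as the scattering is fixed and informative, the labels are recoverable.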

Through a series of experiments, Menon and his team of researchers took a picture of the University of Utah's "U" logo, as well as video of an animated stick figure, both displayed on an LED light board. An inexpensive, off-the-shelf camera sensor was attached to the edge of a plexiglass window, pointing into the pane, while the light board was positioned in front of the window at 90 degrees to the sensor. The resulting image from the camera sensor, with help from a computer processor running the algorithm, is low-resolution but definitely recognizable. The method can also produce full-motion video as well as color images, Menon says.

The process involves wrapping reflective tape around the edge of the window. Most of the light coming from the object in the picture passes through the glass, but just enough—about 1 percent—scatters through the window and into the camera sensor for the computer algorithm to decode the image.
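The article doesn't spell out the reconstruction algorithm, but lensless decoders of this kind are often posed as a linear inverse problem: calibrate the window-plus-sensor as a transfer matrix, then invert it with regularisation. A minimal sketch under that assumption (the matrix, dimensions and noise levels here are all made up):

```python
# Minimal sketch of a plausible decoding scheme (an assumption on my
# part, not the paper's actual method): model the window + sensor as a
# fixed linear system y = A x, "calibrate" A, then recover the scene by
# regularised least squares (Tikhonov / ridge inversion).
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_sensor = 8 * 8, 256

# Calibration: in a real setup you'd display known basis patterns on
# the LED board and record the sensor's response to each. Here A is
# simply simulated, with the ~1% scattering efficiency in its scale.
A = 0.01 * rng.normal(size=(n_sensor, n_pixels))

# Display a "U"-shaped test pattern and record a noisy measurement.
scene = np.zeros((8, 8))
scene[1:7, 1] = scene[1:7, 6] = 1.0   # the two uprights
scene[6, 1:7] = 1.0                   # the base
y = A @ scene.ravel() + rng.normal(scale=1e-4, size=n_sensor)

# Ridge inversion: x_hat = (A^T A + lam*I)^(-1) A^T y.
lam = 1e-6
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ y)

print(np.round(x_hat.reshape(8, 8), 1))  # low-res but recognisable "U"
```

The regularisation matters because only ~1% of the light reaches the sensor, so the measurement is noisy and the transfer matrix can be badly conditioned.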


https://techxplore.com/news/2018-08-computerized-camera-optics-ordinary-window.html

7 comments:

  1. They fixed the hardware problem of 'Lens missing' in software.

  2. The insect eye used the pixel for millions of years before the lensed eye we know and love evolved.

  3. Using a properly focused lens will be more effective given a particular aperture area, but this type of sensor can be thinner and might also be suitable for applications where a lens just doesn't work right.

    In particular, a lens just can't solve the "webcam problem": the problem where eyes are looking at the screen rather than at the camera. You might integrate the camera into the screen, but that'll only work if the other person happens to be centered on the place you chose to put the camera.

    Getting rid of the lens means this type of sensor might be more easily integrated with a display, without creating a blank dot for a lens somewhere. Instead, the sensor elements are invisibly spread across the screen, or around its border. The total area is much larger than a lens's, but the way it can be spread around gives it a lower design impact and a lower marginal cost.

  4. Why is machine reasoning linear? What is 'nonlinear' reasoning?

  5. Chris Greene : In this sort of context, 'linear reasoning' means roughly 'optimises well on a highly pipelined vector processor'.

  6. Andres Soolo : oh, so out-of-order execution and speculative execution still count as linear? Thanks; I was a bit confused there, but it makes sense now. :)

    I was worried it was some sort of deep philosophical terminology.

  7. Chris Greene : Modern vector processors are better at dealing with branching than classic Crays, so the boundary has been shifting over time.

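To make the "linear reasoning" gloss in the thread above concrete, here's a toy contrast (my own illustration, not from the commenters): the same rectifier computed with a data-dependent branch per element, versus a single branch-free vector operation of the kind a pipelined vector unit is built to stream through.

```python
# Branchy vs "linear" (vectorisable) styles of the same computation.
import numpy as np

x = np.random.default_rng(2).normal(size=10_000)

# Branchy, element-at-a-time style: each iteration takes a
# data-dependent branch, which keeps stalling a deep pipeline.
def relu_branchy(xs):
    out = np.empty_like(xs)
    for i, v in enumerate(xs):
        out[i] = v if v > 0 else 0.0
    return out

# "Linear" style: one branch-free select over the whole array,
# exactly the shape of work a vector processor handles well.
def relu_vectorised(xs):
    return np.maximum(xs, 0.0)

assert np.allclose(relu_branchy(x), relu_vectorised(x))
```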

