Sunday, September 11, 2016

The iPhone 7 Plus camera is 'the best from Apple': what does that mean? – CNET en Español

Apple has unveiled the first really major update to the iPhone camera since the original model, one that promises improved picture quality beyond the annual incremental changes (and beyond auxiliary updates such as flash or image stabilization). The iPhone 7 Plus brings new capabilities, some described more accurately than others, and they're worth understanding. Here's what they mean.



Dual camera

Dual-camera implementations are a form of computational photography, which uses algorithms on the device to do what cost and size constraints prevent a single sensor and lens from doing. It has been around since the introduction of automatic panorama stitching, and it includes capabilities such as the popular multi-shot automatic HDR. Dual-camera systems are just one step on a long road of changes that points toward approaches like Light's 16-module camera.
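To give a sense of what "computational" means here, below is a toy sketch of one such multi-shot technique: merging a bracketed burst into a single HDR-style frame by favoring well-exposed pixels. The weighting scheme and the random stand-in data are generic simplifications for illustration, not Apple's actual algorithm.

```python
import numpy as np

def merge_exposures(frames):
    """Blend a burst of differently exposed frames (HxWx3 floats in [0, 1]).

    Each pixel is weighted by how far it is from pure black or pure white,
    so detail comes from whichever frame exposed that region best.
    """
    frames = np.stack(frames)                      # shape: (N, H, W, 3)
    # Well-exposedness weight: highest for mid-gray, near zero at the extremes.
    weights = 1.0 - np.abs(frames - 0.5) * 2.0
    weights = np.clip(weights, 1e-6, None)
    return (frames * weights).sum(axis=0) / weights.sum(axis=0)

# Hypothetical burst: the same scene shot dark, normal, and bright.
rng = np.random.default_rng(1)
scene = rng.random((240, 320, 3))
burst = [np.clip(scene * gain, 0, 1) for gain in (0.5, 1.0, 2.0)]
merged = merge_exposures(burst)
```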

Apple's dual-camera implementation isn't a stroke of genius, but it looks pretty good; the only computational aspect of the system is the defocused-background effect, which will arrive once the software is updated (see depth of field below). The phone has a 12-megapixel camera with a 28mm f/1.8 lens and a 12-megapixel camera with a 56mm f/2.8 lens.



That 'telephoto' lens

Apple refers to the camera with the 56mm lens as "telephoto." That's not a telephoto lens; it's only twice the magnification. It's a "normal" angle-of-view lens; roughly 70mm or longer is what's considered telephoto. Still, 56mm is a good focal length for portraits and other scenes where you don't want the distortion and shrinking of the subject you get with the typical wide-angle lens on a phone.
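To put those numbers in context, the angle of view follows directly from the 35mm-equivalent focal length. This small sketch uses the standard 43.3mm full-frame diagonal (a general photography constant, not a figure from Apple) to show why 28mm counts as wide, 56mm as roughly normal, and 70mm and up as telephoto.

```python
import math

# Diagonal of a full-frame (35mm-format) sensor, in millimeters.
FRAME_DIAGONAL_MM = 43.3

def angle_of_view(focal_length_mm):
    """Diagonal angle of view, in degrees, for a 35mm-equivalent focal length."""
    return math.degrees(2 * math.atan(FRAME_DIAGONAL_MM / (2 * focal_length_mm)))

for f in (28, 56, 70):
    print(f"{f}mm equivalent -> {angle_of_view(f):.1f} degree diagonal angle of view")
# 28mm is clearly wide, 56mm sits near the 40-50 degree "normal" range,
# and 70mm and longer start to qualify as telephoto.
```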



Optical Zoom

Apple is a bit slippery on this point; it was careful not to shout "2x optical zoom," and instead the specifications say something along the lines of "optical zoom at 2x," a subtlety that many people won't catch. I suppose that, technically, the system could be interpreted as optical zoom: you have a 28mm lens and a 56mm lens, so you're getting two different magnifications by using lenses. (And the LG G5 did that first.) However, "zoom" implies going from one focal length to the other with stops in between; the only reason "zoom" makes any sense in this context is that 56mm is the next stop after 28mm.

If the second camera had a 70mm lens, for example, the jump from 28mm to 70mm would hardly qualify as optical zoom. In practice, this is a dual-focal-length setup. Multi-camera systems can sometimes computationally interpolate between the two focal lengths, but Apple simply switches from one camera to the other and labels them 1x and 2x. The Hasselblad True Zoom Moto Mod module for the Moto Z is a real optical-zoom solution, for example. Beyond 2x you get digital zoom up to 10x, and the results may be slightly better than what you get now with the wide-angle lens, since you're starting from the optical magnification of the 56mm camera.
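Digital zoom past 2x amounts to cropping the center of the 56mm camera's frame and scaling it back up. Here is a minimal sketch of that idea using Pillow; the file names and the 5x factor are made-up examples, not Apple's implementation.

```python
from PIL import Image

def digital_zoom(image, zoom):
    """Crop the center 1/zoom of the frame and scale it back to the original size."""
    width, height = image.size
    crop_w, crop_h = int(width / zoom), int(height / zoom)
    left = (width - crop_w) // 2
    top = (height - crop_h) // 2
    cropped = image.crop((left, top, left + crop_w, top + crop_h))
    # Pillow's default resampling upscales the crop back to full resolution.
    return cropped.resize((width, height))

# Hypothetical usage: a 10x request on the phone is 2x optical (the 56mm camera)
# followed by a 5x digital crop of that camera's frame.
telephoto_frame = Image.open("telephoto_shot.jpg")  # placeholder file name
zoomed = digital_zoom(telephoto_frame, zoom=5.0)
zoomed.save("zoom_10x.jpg")
```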



Six-element lenses

The number of lens elements, in the absence of other technical information, tells us nothing about the quality or performance of a lens. It's a meaningless specification in this context (though not in others).

 iphone-7-lightroom.jpg

Raw support means that Adobe Photoshop Lightroom on iOS can now work on par with the Android version.

James Martin / CNET

Support for RAW files

The JPEG photos you've been getting from an iPhone are compressed and processed, automatically reducing the number of colors and compressing the bright and dark areas. That makes them difficult to tweak without exacerbating imperfections (called artifacts).

Raw image data comes straight from the sensor, or is at least minimally processed, so you can edit it yourself without making the artifacts worse. In theory. The reality is that with photos from such a small sensor, or even a pair of small sensors, there isn't much to gain from editing to improve exposure or reduce noise to your taste instead of the company's. You get access to the uncompressed colors, but even then the sensors aren't capturing the full range, because they're tiny.

There's too much sensor noise and not enough tonal range to beat the in-camera processing except in a limited number of situations. However, access to raw files means that third-party photo-app developers can get at that data, so they can produce better JPEGs and give you control over the settings.
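As a rough illustration of why that access matters, the numpy sketch below compares a simulated 12-bit raw highlight with the same highlight after a JPEG-style white-point clip; the bit depths and white point are generic assumptions, not Apple's actual pipeline. Once the JPEG has mapped the whole range to pure white, no editor can bring the detail back, while the raw values still differ and can be pulled down.

```python
import numpy as np

# Simulated sensor readings for a bright sky, as 12-bit raw values (0-4095).
raw_sky = np.array([3100, 3400, 3700, 4000], dtype=np.uint16)

# A typical in-camera JPEG maps everything above its chosen white point to 255,
# so all of these values land on pure white in the 8-bit file.
JPEG_WHITE_POINT = 3000
jpeg_sky = np.clip(raw_sky.astype(float) / JPEG_WHITE_POINT * 255, 0, 255).astype(np.uint8)
print("jpeg values:", jpeg_sky)        # [255 255 255 255] -> highlight detail is gone

# The raw file still holds distinct values, so a raw editor can pull the
# exposure down and recover the gradation in the sky.
recovered = (raw_sky.astype(float) * 0.5 / 4095 * 255).astype(np.uint8)
print("raw pulled -1 EV:", recovered)  # distinct values -> tonal separation survives
```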

Apple demonstrated raw (RAW) files with Adobe Photoshop Lightroom on the new phone, which can now have feature parity with the Android version. And since the raw files use the semi-standard DNG format, they can be read by hundreds of applications on the desktop and on other mobile platforms.



‘Wide color capture’

I'm not sure what this means in practice. Apple has a programming interface so that app developers can perform "wide color capture," so I'd guess they'll have access to more data and the color range won't be compressed as much.



Shallow depth of field

This is a variation of a feature some mirrorless cameras offer, and it simulates a defocused background behind an in-focus subject by using the second camera (or a second shot, in the case of real cameras) to capture information that lets the phone understand where things in the scene sit relative to the subject (a depth map). The device then algorithmically isolates the subject from the rest of the image and blurs everything that isn't the subject. And because the blur is algorithmic rather than optical, it's easier to produce smooth, pleasing out-of-focus areas (generically known as bokeh).

Computational depth of field looks different from the optically produced kind, because optical defocus happens when elements of the scene don't share the same focal plane as the subject; that means elements that do share that plane can be sharp when you don't want them to be (among other things). Sometimes you can get better results computationally. Apple's initial implementation seems limited, though: it depends on a dedicated Portrait mode and can only produce the effect in scenes with people, because it relies on graphics algorithms that detect faces and bodies.
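In broad strokes, the effect boils down to masking the subject with the depth map and blurring everything else. Here is a minimal sketch of that idea, assuming you already have an image and an aligned depth map as numpy arrays; the tolerance and blur radius are arbitrary illustration values, not Apple's algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, depth, subject_depth, tolerance=0.1, blur_sigma=8.0):
    """Blur everything whose depth differs from the subject's by more than `tolerance`.

    image: HxWx3 float array in [0, 1]; depth: HxW float array of the same spatial size.
    """
    # Mask of pixels considered part of the subject (close to its depth).
    subject_mask = np.abs(depth - subject_depth) <= tolerance

    # Blur each color channel of the whole frame.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=blur_sigma) for c in range(3)], axis=-1
    )

    # Keep the original pixels for the subject, blurred pixels everywhere else.
    return np.where(subject_mask[..., None], image, blurred)

# Hypothetical usage with random data standing in for a real photo and depth map.
rng = np.random.default_rng(0)
photo = rng.random((480, 640, 3))
depth_map = rng.random((480, 640))
result = fake_bokeh(photo, depth_map, subject_depth=0.3)
```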

A new image signal processor (ISP)

Every new sensor built into a camera requires a new image signal processor (ISP), because each combination of sensor and lens (and flash and stabilization and so on) has different characteristics. The fact of a new ISP by itself doesn't tell us anything; the list of features it enables is much more important. In this case, it sounds like Apple refined some of its processing algorithms for better results.



One more thing …

There are a lot of important things we still don't know about the iPhone 7 Plus camera. For example, what size are the sensors in the two modules? It's possible that the 56mm module uses a smaller sensor, because that's an easy way to get more magnification. But that wouldn't be good news.

Despite its enthusiasm for the new camera, Apple was fairly careful with its claims, something I really appreciate: "What we are saying is that this is the best camera we've ever made on a smartphone." That is indisputable.

We're waiting for our experts' comparisons to see whether Apple has finally made up ground against its competitors.

