Learn Photography Terms for Beginners
- 1 Digital Camera
- 2 Analog camera
- 3 Light Pixels
- 4 Megapixels
- 5 Bayer Demosaicing / Interpolation
- 6 Full-frame
- 7 APS-C
- 8 Vignetting
- 9 Crop Factor
- 10 Crop sensor
- 11 Focal Length
- 12 Dynamic Range
- 13 Incident Light
- 14 Reflected Light
- 15 Light Meter
- 16 Hand-held Light Meters
- 17 Built-in light Meters
- 18 Matrix metering
- 19 Center-weighted Metering
- 20 Spot metering
- 21 Partial Metering
- 22 Exposure Value
- 23 Exposure Compensation
- 24 Aperture
- 25 Shutter Speed
- 26 Image Stabilization
- 27 ISO
- 28 Noise
- 29 Noise Suppression
- 30 Depth of field
- 31 Auto-focus
- 32 Bokeh
- 33 Selective Focus
- 34 JPEG vs. RAW
- 35 Continuous shooting Drive
A digital camera has an imaging sensor at its heart. The imaging sensor is where light rays travelling through the lens barrel are focused to form a sharp image. This sensor is what separates a digital camera from an analog one.
An analog camera focuses light onto either film or a photo plate. There is no electronic circuitry to transfer the captured light to an image processor; in fact there is no image processor at all, unlike in digital cameras. Such cameras capture light by sensitizing the film medium at the back of the camera. That film is then processed in a lab to produce negatives, which are used to make prints or are scanned by a negative scanner. There is also no built-in Color Filter Array (CFA). The camera's mechanical controls operate the shutter and advance the film after each shot.
The digital sensor is actually composed of millions of tiny light-sensitive photodiodes. On top of a typical photodiode sits a tiny color filter, capped by a micro-lens, that blocks all but one color of the visible spectrum. These filters are laid out in a particular arrangement, or array.
This arrangement is referred to as a Color Filter Array (CFA), and the most popular CFA is the Bayer array. In other words, the pixels are capped with tiny filters that block every wavelength of light except one band. A standard pixel comes with either a red, green or blue filter, and on a typical sensor there are twice as many green filters as red or blue ones.
We just finished reading about pixels, and we now have a pretty good idea of what a pixel is. But even more than pixels, you will hear the term megapixels. As a matter of fact, megapixels drive the sales figures not only of DSLRs but of all digital cameras!
It is often argued that the larger the number of megapixels, the better the image quality. We will soon learn that this argument is only partly correct. Manufacturers and camera stores seem bent on pushing cameras on the sole USP of megapixels, as if nothing else matters. But what are megapixels, and why would a larger number of them produce a better image?
Megapixels, simply explained, equal one million pixels: 1 megapixel = 1,000,000 pixels. When you hear that a particular camera has 20 megapixels, it means the camera has 20 million pixels. That’s a lot of pixels! Remember, we have just learnt that each pixel is a photodiode.
Now, the question is: do more megapixels mean better image quality? The answer is both yes and no. Your digital images are formed by pixels, and one tiny dot in a digital image roughly corresponds to one pixel. The more dots, the more detail. This can be easily demonstrated by taking two images of the exact same scene, shot with two similar cameras using the same lens under the same lighting conditions, except that one camera has a 24.1 megapixel sensor and the other a 50.6 megapixel sensor.
You will notice that the image shot with the 50.6 megapixel camera has noticeably more detail than the one shot with the 24.1 megapixel sensor. Why? Because the larger number of megapixels allowed the sensor to capture even the tiniest aspects of the scene in greater detail.
Another advantage of a higher megapixel count is that you can print big. The printing industry uses a standard of 300 DPI for printing images. A 50.6 megapixel sensor can produce an image of 8688 x 5792 pixels, while a 24.1 megapixel sensor produces an image of 6000 x 4000 pixels.
Now let’s say that you want to print your vacation images. At 300 DPI the 50.6 megapixel camera yields a print of roughly 19” x 29”, close to a large 20” x 30” print, while the 24.1 megapixel camera tops out around 13” x 20”. Needless to say, with a larger resolution (a larger megapixel count) you can print bigger without pixelating the prints.
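The arithmetic behind these print sizes is easy to check. A quick sketch in Python, using the pixel dimensions quoted above:

```python
def megapixels(width_px, height_px):
    """Total pixel count, in millions."""
    return width_px * height_px / 1_000_000

def max_print_inches(width_px, height_px, dpi=300):
    """Largest print, in inches, at a given DPI without upscaling."""
    return (width_px / dpi, height_px / dpi)

# The two sensors discussed above:
print(round(megapixels(8688, 5792), 1))  # 50.3
print(max_print_inches(8688, 5792))      # roughly 29" x 19"
print(max_print_inches(6000, 4000))      # roughly 20" x 13"
```

Divide each pixel dimension by the print resolution and you have the largest print that needs no upscaling.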
One major disadvantage of higher megapixel counts is larger file sizes. Larger files mean your memory cards need a higher capacity, and smaller cards fill up much quicker. You will also need a much larger backup drive to store all your RAW and JPEG images.
Plus, a larger file size means additional pressure on the buffer. Buffer performance determines how suitable your camera is for sports and other continuous-shooting requirements: larger files take more time to process and to clear from the buffer, which has a direct effect on the camera's continuous shooting speed.
So, more megapixels means better images, right? No! More megapixels certainly means a greater amount of detail, but it in no way means that your images are going to be ‘better’. The term better is a bit slippery and has much wider ramifications, so we are not going to go into the details.
If you are mainly going to share your images online, there is no reason to go for a larger megapixel camera. If you are going to view your images on your 4K TV you only need 3840 x 2160, about 8.3 megapixels. For sharing on social media you don’t even need that many.
A pro photographer who knows what s/he is doing can easily make stunning imagery with a 10 megapixel camera. On the other hand, someone who does not know how to use manual mode will probably end up with snapshot-style imagery even if handed a 50 megapixel top-of-the-line DSLR.
Bayer Demosaicing / Interpolation
This is a complex processing mechanism that takes incomplete color information, in the form of red, green and blue values, and converts it into a complete image with full color information. Most common sensors use the Bayer array, which has twice as many green filters as red or blue filters. The process of converting that information into a proper image is known as demosaicing; it is done by the camera's built-in image processor, or off-camera by third-party image processing software. This process is also referred to as interpolation.
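To make the filter layout concrete, here is a small sketch. RGGB is one common orientation of the Bayer tile; actual sensors may start the pattern on a different corner:

```python
from collections import Counter

def bayer_color(row, col):
    """Filter color at one pixel of an RGGB Bayer mosaic.
    Even rows alternate red/green; odd rows alternate green/blue."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Count the filter colors across an 8x8 tile: green sites outnumber
# red and blue two to one, exactly as described above.
counts = Counter(bayer_color(r, c) for r in range(8) for c in range(8))
print(counts['R'], counts['G'], counts['B'])  # 16 32 16
```

Demosaicing then estimates the two missing color values at each pixel from its neighbors of the other colors.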
Full-frame sensors basically mimic the size of the older 35mm film. They are typically 36mm x 24mm, so the image is roughly the size of a 35mm film frame. Though the size of the sensor means nothing in itself, there are critical advantages to using a larger one.
The term APS-C means Advanced Photo System type-C (Classic). Put simply, it denotes a sensor that is smaller than full-frame. How much smaller, exactly? There is no standardized size; different manufacturers use their own sensor dimensions. Roughly, they mimic the size of the older film negatives of the same name, which measured 25.1 x 16.7mm.
For example, Nikon’s D7200, an APS-C camera, has a sensor of 23.5mm x 15.6mm, whereas Canon’s 7D Mark II, another APS-C camera, has a sensor of 22.4mm x 15mm. The size can vary even between two cameras of the same make and segment: the Canon Rebel T6i also has an APS-C sensor, but it measures 22.3mm x 14.9mm.
The image circle of a lens, as the name suggests, is round, while the sensor at the back of the camera is rectangular. It is a classic case of fitting a rectangular peg into a round hole. A lens designed for a particular sensor format will cover almost the whole frame with the exception of the four corners. These corners will show some amount of darkening depending on the sensor design and the lens being used. This phenomenon is known as vignetting.
Vignetting is present on most lenses to some degree. However, it becomes extreme when shooting with a lens designed for a crop-sensor format on a full-frame camera, because the image circle of the lens will not cover the whole sensor.
Vignetting can be easily corrected, at least when the lens and the sensor are compatible, by using the lens profile correction option in Photoshop or Lightroom.
Crop factor denotes how much smaller a particular sensor is than a 35mm (full-frame) sensor. It is expressed as a ratio, such as 1.5x or 1.6x. Crop factor affects the focal length of a lens: not the actual focal length, but the effective one. Let us understand this in more detail.
If you multiply the crop factor by the focal length of the lens, you get the new effective focal length. Let’s say a sensor has a crop factor of 1.5x (Nikon APS-C cameras, for example). Now mount a 50mm lens, designed for a full-frame sensor, on it. The smaller sensor uses only a portion of the image coming through the lens, the center part, so in effect it appears that you are using a longer lens. The framing is identical to what a 75mm lens would give you on full-frame.
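The calculation is a single multiplication. A quick sketch, using the common 1.5x (Nikon-style) and 1.6x (Canon-style) APS-C crop factors:

```python
def effective_focal_length(focal_mm, crop_factor):
    """Full-frame-equivalent framing of a lens on a crop-sensor body."""
    return focal_mm * crop_factor

print(effective_focal_length(50, 1.5))  # 75.0, the example above
print(effective_focal_length(50, 1.6))  # 80.0, on a Canon-style APS-C body
```

The lens itself is unchanged; only the portion of its image circle that the sensor records is smaller, which is why the field of view narrows.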
Any sensor smaller than full-frame is referred to as a crop sensor. A crop sensor crops out the edges of the scene compared with what a full-frame camera would capture. As a result, the lens appears zoomed in; in other words, the lens’ effective focal length is extended when used on a crop-sensor camera.
Focal length has nothing to do with the size of the lens barrel. It denotes the distance, measured in mm, between the point where light rays converge and the digital sensor at the back of the camera where the image is finally formed. Roughly, it is the distance from the optical center of the lens to the sensor.
The longer that distance, or focal length, the farther the lens can see, which also means the narrower its viewing angle. The shorter the distance, the wider the viewing angle and the shorter the reach of the lens.
In other words, a long lens, such as a 70-200mm telephoto, sees a much thinner slice of the scene than a wide-angle lens in the 14-24mm range.
Dynamic range denotes how many stops of light intensity a digital sensor can distinguish between the whitest white and the blackest black in a scene. The term is also applied to all types of cameras as well as other imaging devices, with slightly different connotations.
A camera with a higher dynamic range will produce images that preserve detail across a wider span of tones. It is also likely to have fewer problems with the color banding, or aliasing, that appears when photographing certain scenes. This happened frequently with older digital cameras.
This happens because most outdoor scenes have a greater dynamic range than an average camera can record. The camera tries to ‘even out’ the scene, and in the process banding appears in the images.
Modern digital cameras can record 14-bit RAW, which gives them a lot of latitude. An incredible amount of data can thus be recorded in a lossless format, and that data is available when you post-process your images.
Incident light denotes the light that falls on a subject. The term is relevant because of metering: incident-light metering is the best way to meter a scene, because when you measure incident light it does not matter whether the subject or the scene has bright colors or dark ones. Incident-light metering is also the faster of the two methods of measuring light in a scene.
Reflected light denotes light that is reflected off a surface. Just like incident light, it is relevant because of metering. Reflected-light metering is a poorer way to meter a scene because it can skew the reading depending on how bright or dark the scene is. If the scene has predominantly bright colors, it tells the light meter that the scene is bright; if it has an overbearing amount of dark colors, it tells the meter that the scene is too dark. Reflected-light metering is also the slower of the two methods.
A Light Meter is a device that measures the amount of light falling on / reflected off of a scene. It basically helps you to set the right exposure values for an ideal exposure of the scene / subject. There are two kinds of light meters – handheld and built-in.
Hand-held Light Meters
A hand-held light meter is an optical device that measures the light falling on a scene. It generally uses a translucent dome with a light sensor built inside. The dome is aimed in the general direction of the camera when you need to meter a scene.
The device can also be used to measure reflected light. Hand-held meters are far more accurate at measuring the correct light value of a scene because they read the actual incident light and not the light reflected off the scene.
Built-in light Meters
These are light meters built into the camera; they depend on the light reflected off a scene to measure its light value. These meters are easily skewed because the measured brightness varies with the average colors in the scene. So, for the same amount of light, a scene with darker colors will be metered as too dark (suggesting you push the exposure) and a scene with brighter colors will be metered as too bright (suggesting you underexpose the shot).
Matrix metering is a built-in metering mode on all digital cameras. The whole scene is divided into zones, each of which is measured for brightness and shadow, and the exposure settings are chosen to give an average exposure across the whole scene. Because it averages the scene, this mode is ideally suited to landscapes and scenes with fairly even lighting. It takes almost the whole frame into account, though some preference is given to the active AF point, which is why the mode also works well when you focus on an off-center subject.
Center-weighted metering concentrates on the center of the frame; areas toward the edges carry far less weight in the calculation. Thus, if you have a subject right about in the middle of the frame, this metering mode is perfect. It works well in backlit situations, where the key light is behind the subject and you need to meter only the face. It also works when you intentionally place your subject toward the middle of the frame, e.g., flower photography or macro photography.
Spot metering, as the name suggests, takes into account only a small area of the frame, usually just a few percent. In some cameras you can select an AF point and the metering mechanism meters the area around it; in other systems the spot is not selectable and sits at the center of the frame. Spot metering is so useful because of its inherent ability to meter a very small area: you can aim at something neutral (middle grey) and make a very tight estimate of an accurate exposure value for the whole of the scene. This is essentially what hand-held light meters allow you to do, and a reason they are considered so accurate.
Partial metering is a mode exclusive to Canon systems; Nikon systems don’t have it. It works exactly like the spot metering mode above, with one exception: the camera meters a slightly larger area of the frame, about 12-15%. Partial metering suits subjects a bit larger than a spot, but not as large as the area covered by center-weighted metering. Things like flowers shot from a close distance, or a house in the middle of a landscape, are ideal subjects for this mode.
Exposure value (EV) is a concept in photography that measures the amount of light the camera collects with a particular combination of shutter speed and aperture. EV is a function of those two settings and is generally denoted as a number. Each number on the scale corresponds to a specific amount of light, and moving to the next higher or lower number changes the exposure by one stop, halving or doubling the light. It is possible to arrive at the same EV using different combinations of shutter speed and aperture.
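One standard definition is EV = log2(N² / t), where N is the f-number and t the shutter time in seconds. A small sketch; note that marked f-numbers like 5.6 are rounded, so equivalent combinations match only approximately:

```python
import math

def exposure_value(f_number, shutter_s):
    """EV = log2(N^2 / t). Higher EV means less light reaches the sensor."""
    return math.log2(f_number ** 2 / shutter_s)

print(exposure_value(1.0, 1.0))            # 0.0, the scale's reference point
# Two different settings landing on (nearly) the same EV:
print(round(exposure_value(4, 1 / 30), 1))   # 8.9
print(round(exposure_value(5.6, 1 / 15), 1)) # 8.9
```

Opening the aperture one stop while halving the shutter time leaves the EV, and the collected light, unchanged.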
The term exposure compensation is used in conjunction with exposure value. Exposure compensation means using an exposure value over or under what the light meter suggests is correct for a scene, specifically to push or pull the exposure toward what you think is right. It is frequently used with the camera's built-in meter, because that is the meter most likely to get the exposure value wrong. To apply exposure compensation, press the (+/-) button and turn the command wheel, depending on whether you need to over- or underexpose the scene.
Aperture is a small hole in the lens that lets light pass through into the camera. It can be enlarged or reduced depending on the requirements of the scene: a large aperture to collect a lot of light, a small aperture to reduce the amount of light. Which you choose obviously depends on what you envision and attempt to capture.
Aperture is always expressed in f-stops, written like this: f/2, f/2.8, f/4, f/5.6 and so on. The smaller the number on that scale, the larger the actual size of the aperture, and vice versa. That means as you go down the scale the aperture keeps getting smaller, and vice versa as you go up. Aperture also controls depth of field.
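The full-stop scale looks arbitrary but is simply successive powers of the square root of 2; each step halves the area of the opening and therefore the light admitted. A quick sketch:

```python
import math

# Full stops on the aperture scale are powers of sqrt(2); each step
# halves the area of the opening (area scales with 1/N^2).
stops = [math.sqrt(2) ** i for i in range(8)]
print([round(s, 1) for s in stops])
# [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3]
# Lenses mark these with the familiar rounded values:
# f/1, f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11.
```

That is why the sequence doubles every second entry: two steps of sqrt(2) multiply the f-number by exactly 2, i.e. two stops less light.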
Shutter speed, on the other hand, denotes the length of time the shutter curtains remain open to collect light. The longer they remain open, the more light the camera receives, and vice versa. Shutter speed is usually given as a fraction of a second: the faster the speed, the smaller the fraction. You will see fractions like 1/100, 1/200, 1/400, 1/800, 1/1600 and so on; as you go down the scale, shutter speed gets faster and faster, and slower as you go up. Speeds of 1 second and slower are not expressed as fractions but in the usual way: 1”, 2” and so on.
A term closely related to shutter speed, and for good reason, is image stabilization. Image stabilization is a mechanism that compensates for inadvertent movement of the camera at the precise moment an image is being made. Its task is to ensure that the resulting image does not turn out blurry.
You use image stabilization every time you press the shutter release without even realizing it: stabilization is engaged each time you half-press the shutter release button. That means acquiring focus and engaging image stabilization both work from the same button.
Image stabilization mechanisms come in two basic types: lens based and camera-body based. Nikon and Canon are two major makers whose cameras use lens-based stabilization, while Pentax, Sony, Olympus and Samsung use body-based stabilization. Each manufacturer refers to its brand of stabilization by a different name, but they all essentially do the same thing.
Body-based stabilization actually moves the sensor, whereas lens-based stabilization moves tiny optics inside the lens. The result is the same: every time the camera moves while making an image, as a result of your hand moving, tiny elements inside the lens, or the sensor itself, move to compensate. Think of them as the counterweights that balance a tall building swaying in the wind.
There are advantages and disadvantages to both types of stabilization. Body-based stabilization makes every compatible lens stabilized by default. Manufacturers with lens-based stabilization, on the other hand, produce two versions of their popular lenses, one with stabilization and one without, and the stabilized lenses cost more.
The argument in favor of body-based stabilization is that it is the better approach: to get the full benefit you don’t have to buy the special stabilized version of a lens. The argument against is that lens-based stabilization lets you pay only for what you choose to use: if you don’t need stabilization you don’t pay extra for it, and you can happily shoot with the non-stabilized version of the lens.
A major advantage of lens-based stabilization is that when it kicks in you can see the effect through the viewfinder. That way you don’t ‘hunt’ around for the subject, and composing is a lot easier.
When you look through the viewfinder of a body-stabilized camera, on the other hand, the effect of stabilization is not immediately visible. It will still do what it is supposed to do, but composing through the viewfinder isn’t as easy.
Image stabilization comes in different modes. Let’s look at them. The most important is the one we use on an everyday basis, the plain vanilla mode, which simply ‘stabilizes’ any inadvertent movement of the camera while an image is being made.
To activate image stabilization, gently half-press the shutter release button. Stabilization kicks in immediately and continues to stabilize the image until you press the shutter release fully to complete the exposure.
The second mode engages in exactly the same way, but it counters only vertical movement. If your hands move up and down while shooting, stabilization kicks in; if they move horizontally, it does not.
This mode is designed for the specific purpose of panning. Panning means moving your camera in a tracking motion, following a moving subject. The mode is ideal for photographing sprinters, soccer players, or wildlife moving in a straight line from left to right of the frame (or right to left, as the case may be). By not engaging stabilization for horizontal movement, which is exactly the movement you make while panning, this mode opens up a new dimension for shooting with a stabilized lens.
There is yet another mode, a third one designed for a different type of photography. Here stabilization does not engage until you press the shutter release fully. By holding off until the very last moment, you see an actual depiction of the action in front of you. This mode is ideal for subjects that move erratically, such as a bunch of kids playing in the yard, a hummingbird, or a squirrel perched on a tree.
The letters ISO, pronounced together, are not an acronym, though the term is sometimes used as shorthand for the International Organization for Standardization. The closest analogue of ISO is ASA, an older term used for exactly the same purpose: to denote the sensitivity of the imaging medium. Back in the days of analog photography that medium was film; these days it is the digital sensor.
Why is ISO important? ISO is closely linked with making images in low light. Low light is a situation in which you would otherwise be forced to slow down the shutter or open up the aperture in order to capture enough light.
You might use a flash in such situations, but a flash isn’t always appropriate. There are times when it can do more harm than good: a flash is a powerful beam of light that is very difficult to control. You don’t even have to use it on-camera to ruin a photo; even an off-camera flash can be difficult to work with.
Plus, there are instances when tinkering with either aperture or shutter speed is not feasible, e.g., when you are shooting a portrait and need to blur out the background while still capturing a sharp image with a fast shutter speed.
Basically, you need a way to keep shooting with the same aperture and shutter speed, producing the depth of field you set out for and the effect of the desired shutter speed. These are the times when you tinker with ISO.
ISO is expressed in numbers: 50, 100, 200, 400, 800, 1600 and so on. These are also expressed as stops; note how the number doubles at each stop. Please note that the ISO setting does not affect the amount of light collected by the sensor, only what happens after that light is collected. That is why ISO is never considered part of exposure value.
ISO simply amplifies the light signal. When the signal is amplified, it accomplishes the same thing, in terms of image brightness, as opening up the aperture or dragging the shutter would have done.
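The stop relationship makes ISO easy to reason about numerically. A quick sketch:

```python
import math

def stops_between(iso_a, iso_b):
    """Each doubling of ISO adds one stop of amplification."""
    return math.log2(iso_b / iso_a)

print(stops_between(100, 1600))  # 4.0: ISO 1600 sits four stops above ISO 100

def equivalent_shutter(shutter_s, iso_from, iso_to):
    """Shutter time giving the same apparent brightness after an ISO change."""
    return shutter_s * iso_from / iso_to

# Raising ISO from 100 to 400 (two stops) lets you go from 1/30 s
# to 1/120 s, at the cost of more visible noise.
print(equivalent_shutter(1 / 30, 100, 400))
```

This trade is exactly what you rely on in the portrait example above: hold the aperture and shutter you want and let ISO make up the difference.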
Noise here refers to digital noise. You might wonder what noise has to do with digital photography. Well, noise is closely connected with ISO: as you increase ISO you will probably notice its effects. You have probably seen noise in your own images without ever realizing why it occurs.
Noise refers to the black and white specks that appear in a photo, especially when you shoot at a higher ISO. If you shoot a lot of low-light images, you will have come across it. Look closely at the shadow areas of your images: can you see the white and black specks? That is noise, more specifically digital noise.
Digital noise is best compared with static in sound transmission. Just as some static is inevitable when you transmit sound between two points, digital noise occurs when light signals are captured by the sensor.
Digital noise is most visible when you shoot in low light at a higher ISO number. The speckles or grain are seen most easily in the shadow areas, because those are the areas where the sensor has to work hardest to ‘bring up’ the exposure.
The major effects of noise are loss of detail and loss of dynamic range. Images shot at low ISO have a lot of detail and good dynamic range. When you choose a higher ISO number, grain reduces detail and introduces artefacts.
A common misconception about digital noise is that it appears only in images shot at a higher ISO. This is wrong: digital noise is present in every image at every ISO, even ISO 100. At ISO 100, however, the amount of noise is negligible compared to an image shot in low light at ISO 3200.
The process of cancelling the noise present in an image is known as noise cancellation or noise suppression. There are many ways to do it: some are applied in-camera using built-in software, others after the image has been downloaded to a computer.
All contemporary cameras come with a built-in noise-suppression function. The technique involves identifying noise and then applying an algorithm that introduces a slight blur to reduce it. Applied aggressively, this results in the image looking plasticky.
The same noise suppression can be done in a dedicated image-editing application. We would recommend using dedicated software rather than the camera's built-in image processor (i.e., shoot in RAW instead of JPEG), and turning off the default noise-suppression feature of your camera.
Shoot in RAW and import your images to your computer so you can do your own processing, including noise suppression, manually. Photoshop and Lightroom are two very popular tools for noise suppression, but there are others, such as Topaz Labs DeNoise, a dedicated noise-reduction suite.
Depth of field
When you look at an image you can tell where the photographer focused: it is the area of the image that is very sharp, where the focus point was placed. Beyond that sharpest area the image appears increasingly out of focus; the further you look from the point of focus, the more out of focus it seems.
Having said that, even though a large part of the image is out of focus, some of it is still acceptably in focus. This zone of acceptable focus is known as depth of field. The smaller the aperture, the larger that zone, in other words the larger the depth of field; the larger the aperture, the smaller the zone and the shallower the depth of field.
Other parameters also affect depth of field, including the size of the sensor, the focal length of the lens, the distance from the camera to the subject and so on.
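One way to see how aperture widens or narrows that zone is the standard hyperfocal-distance approximation. The 0.03mm circle of confusion below is a common full-frame assumption, not a figure from the text:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Approximate hyperfocal distance: focusing here keeps everything
    from half this distance out to infinity acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# A 50mm lens: stopping down from f/2 to f/16 pulls the hyperfocal
# distance from ~42m down to ~5m, a far deeper zone of acceptable focus.
print(round(hyperfocal_mm(50, 2) / 1000, 1))   # 41.7 (meters)
print(round(hyperfocal_mm(50, 16) / 1000, 1))  # 5.3
```

The same formula also shows the focal-length effect: halve the focal length and the hyperfocal distance drops by roughly a factor of four, which is why wide-angle lenses seem to keep everything sharp.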
The technology of auto-focusing wasn’t around forever; lenses were long focused manually using various technologies. To describe auto-focusing, one could say it is the process whereby the camera / lens uses a built-in motor and sensor to align the focusing elements automatically and bring the subject into sharp focus, producing a sharp image on the image plane.
Auto-focus is basically a time-saving technology. It saves the critical moments you would spend focusing on something at a distance, producing an image in a shorter time frame by leaving the focusing task to the camera.
Auto-focus technology has evolved over the years, from systems that emitted sound waves (in Polaroid cameras) to lock focus, to the modern Dual Pixel CMOS AF mechanism found in the latest Canon systems, which performs phase detection on the sensor itself. The purpose of every one of these technologies is the same: to bring the subject into sharp focus on the image plane.
The Japanese word bokeh is used in photography to denote the quality of the out-of-focus areas of your image, not just the out-of-focus effect itself. It is sometimes incorrectly used to refer to the blur that comes from a wide-open aperture. Put simply: blur is not the same as bokeh; bokeh is the character of that blur.
Here, quality denotes the smoothness of the out-of-focus rendering. Certain aspects determine this quality, the primary one being the number of aperture diaphragm blades. The more blades the aperture diaphragm has, the rounder and smoother the blur or out-of-focus effect will be.
Another factor that weighs in on bokeh is how wide the aperture can open. For lenses with a small maximum aperture (for example a kit lens at f/3.5), the bokeh won’t be particularly pleasing. For an 85mm f/1.8, on the other hand, the bokeh will be soft, rounded and very smooth.
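One way to see why the fast prime renders stronger blur is to compare the physical size of the aperture opening, which is roughly the focal length divided by the f-number. A quick sketch with the two lenses mentioned above (assumed values, for illustration only):

```python
# The entrance pupil (physical aperture opening) is approximately
# focal length / f-number. A wider opening produces stronger blur.

def aperture_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

# Kit zoom at its long end vs. a fast short-telephoto prime:
kit = aperture_diameter_mm(55, 3.5)    # ~15.7 mm opening
prime = aperture_diameter_mm(85, 1.8)  # ~47.2 mm opening
print(f"kit lens: {kit:.1f} mm, 85mm f/1.8: {prime:.1f} mm")
```

The prime’s opening is roughly three times wider, which goes a long way toward explaining the difference in the out-of-focus rendering.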
There are many practical applications of bokeh. The most common is to isolate a subject from its foreground and background. Portrait photography is one area where this is used quite often; macro and product photography are others. This is by no means an exhaustive list.
Selective focusing is the art of focusing on a specific area of the image. The purpose is to draw the viewer’s attention to a specific part or aspect of the frame. Selective focusing uses several techniques, but its principal element is a wide aperture and the shallow depth of field it creates.
A good example of selective focusing would be when you use a lens such as a 50mm f/1.4 or an 85mm f/1.8 or a 100mm f/2.8 macro to shoot something wide open. Only a small area of the frame, coinciding with the active AF point, is in sharp focus. Everything else is at varying degrees of sharpness.
This technique is often used in wedding photography, portraiture and macro photography.
JPEG vs. RAW
Very simply, RAW is an untouched, un-manipulated format in which all the details of an image are safely preserved, waiting to be adjusted manually by you. JPEG, on the other hand, is a processed format: the original RAW data has been processed, either by the camera or by an external application. The result is a file with little to no scope for further alteration without significantly degrading the quality of the image.
RAW basically retains the original information as captured by the sensor. JPEG, on the other hand, compresses that information to reduce file size and discard non-essential data. It is worth knowing that all digital cameras actually capture in RAW by default; it is only after the image is shot that the camera processes it into a JPEG. On many cameras you can prevent this conversion by switching to RAW mode.
Why would you shoot in RAW? To start with, RAW is a lossless format. But is that all there is to it? As a matter of fact there is a lot more. RAW retains a larger amount of detail in both the shadow and the highlight regions, spanning a wider dynamic range. RAW images far outperform JPEG images in this respect.
How much more, to be precise? A JPEG can retain 256 levels of brightness between the brightest and darkest regions of an image; this is referred to as 8-bit. RAW, on the other hand, can retain from 4096 to 16384 levels, that is 12- to 14-bit, depending on the camera make and model. The most immediate advantage of this extra information is that the images are far less prone to banding.
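The brightness-level figures above are simply powers of two: an n-bit file can distinguish 2^n tonal levels per channel. A quick check:

```python
# An n-bit image file distinguishes 2**n brightness levels per channel.
for bits in (8, 12, 14):
    print(f"{bits}-bit: {2 ** bits} levels")
# 8-bit (JPEG) -> 256, 12-bit (RAW) -> 4096, 14-bit (RAW) -> 16384
```

Going from 8 to 14 bits is a 64-fold increase in tonal resolution, which is why smooth gradients such as skies survive heavy editing much better in RAW.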
Since the original information is retained, you also have the opportunity to adjust things like white balance and exposure without destroying the image. This is why professionals choose the RAW format: they can process their images exactly as their requirements demand.
Also, and this one is relevant for printing, you can better control the color space of the image by choosing the appropriate one depending on whether the images are destined for online publication or for print output.
Continuous shooting Drive
Usually, when you press the shutter button, the shutter curtain opens to make the exposure and then resets for the next shot; this happens only once per press. Continuous shooting drive denotes the ability to fire a number of frames one after the other while the button is held down. Nikon systems, for example, have three release modes: S (Single), CL (Continuous Low) and CH (Continuous High).
How many frames per second you get depends on a number of factors. One of them is how fast the image processor is: its speed determines how quickly it can finish processing one frame and transfer it to storage in time for the next. Modern high-speed image processors can handle anywhere between 7 and 14 frames per second.
Another determining factor is whether you are shooting with continuous auto-exposure and auto-focus, or have locked both focus and exposure before the first frame is fired. In the first scenario the camera re-acquires focus and adjusts exposure before each frame, which slows down the overall speed. In the second scenario focus and exposure are locked before the first frame, so the camera saves that time between frames and can fire more of them.
Buffer capacity, meaning the camera’s ability to store photos as they are shot and before they are written out, is also a determining factor. Think of the buffer as the camera’s internal pipeline: the greater its capacity, the longer the camera can keep shooting at full speed. When the buffer fills up the camera slows down, and you may drop from 7 or 8 frames per second to perhaps one frame per second if you keep shooting.
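This buffer behaviour can be sketched with a toy model, not taken from any real camera spec: the sensor fills the buffer at its burst rate while the memory card drains it at its (slower) write rate, and once the buffer is full the sustained rate falls to whatever the card can clear.

```python
# Toy burst-shooting model (all numbers assumed for illustration):
# the buffer fills at the sensor's frame rate and drains at the
# card's write rate; a full buffer caps shooting at the card's speed.

def burst_timeline(fps, buffer_frames, card_fps, seconds):
    """Return the number of frames captured in each second of shooting."""
    buffered = 0
    captured = []
    for _ in range(seconds):
        if buffered < buffer_frames:
            shot = fps       # buffer has room: shoot at full speed
        else:
            shot = card_fps  # buffer full: limited by the card
        buffered = min(max(0, buffered + shot - card_fps), buffer_frames)
        captured.append(shot)
    return captured

# 8 fps camera, 24-frame buffer, card clearing 2 frames per second:
print(burst_timeline(8, 24, 2, 8))  # → [8, 8, 8, 8, 2, 2, 2, 2]
```

The hypothetical camera here sustains full speed for only a few seconds before the buffer overruns and the card becomes the bottleneck, which mirrors the slowdown described above.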
Another factor that determines the continuous shooting speed is the memory card you have installed. A high speed memory card will mean faster continuous shooting speed compared to a slower memory card.
Continuous shooting speed also depends on file size. RAW files are considerably larger than JPEGs, so shooting RAW will be slower than shooting JPEG.
By Rajib - June 16, 2017