Law Enforcement And Decoding Of Pictures: How And Why It Is Done?


We live in a world where almost everyone carries a camera-equipped smartphone.

Photographs are now considered one of the most powerful mediums of expression. For years, they have been a part of law enforcement and forensic work: images of criminals, crime scenes, scene thumbnails, and more.

But with the rapid advancements in technology, it is often impossible to tell whether a picture is genuine camera output or a manipulated version of it. This puts the authenticity of an image in question, especially when it relates to a crime scene.

Here is how and why the decoding of pictures is actually done.

Why Do Law Enforcement Officials Decode A Photograph?

A photograph reflects the codes, details, data, and time of its capture. It is a representation of something real, and at the same time, something created by a photographer.

Furthermore, photographs are a primary source of information and give the police significant insight into the events in question.

How Law Enforcement And Forensic Officials Decode A Photograph

  • Through File Format Forensics

When a picture is saved, the image file carries data about the image itself, known as metadata.

According to the 2010 Exchangeable Image File Format (EXIF) specification published by the Camera and Imaging Products Association, there are more than 460 metadata tags in an EXIF block.

The tags include location data, image size, a smaller embedded thumbnail, and the camera model. They also help verify the file format, for example by checking whether a photo taken on an Android device displays correctly on an iPhone or another operating system.
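To make the file-format idea concrete, here is a minimal sketch that walks the marker structure of a JPEG file and lists its segments; EXIF metadata (with its tags and embedded thumbnail) lives in an APP1 segment near the start of the file. This is an illustrative toy, not a full EXIF parser, and the byte string in the usage note is invented.

```python
import struct

def list_jpeg_segments(data: bytes):
    """List the marker segments of a JPEG file.

    EXIF metadata (location, camera model, embedded thumbnail, ...)
    sits in an APP1 (0xFFE1) segment. This sketch stops at the SOS
    marker and does not decode the EXIF tags themselves.
    """
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    segments, i = [], 2
    while i + 2 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                  # SOS: entropy-coded image data follows
            segments.append("SOS")
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        name = f"APP{marker - 0xE0}" if 0xE0 <= marker <= 0xEF else f"0x{marker:02X}"
        segments.append(name)
        i += 2 + length                     # segment length includes its own 2 bytes
    return segments
```

On a real photo, `list_jpeg_segments(open("photo.jpg", "rb").read())` typically begins with `APP0` (JFIF) or `APP1` (EXIF).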

  • Using Image Pipeline

To decode, inspect, and validate images, analysts rely on the concept of the image pipeline. The pipeline is broken down into six sub-categories:

  1. Physics: This level covers reflections, lighting, and shadows across the full scene.
  2. Geometry: This level checks 3D models, distances between objects, and vanishing points of the scene.
  3. Image Sensor: This level examines color filter defects and fixed-pattern noise.
  4. Optical: This level looks at lens aberrations and distortions.
  5. File Format: This level covers metadata, compression data, markers, and thumbnails.
  6. Pixelation: This level includes cropping, cloning, scaling, and resaving of an image.
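To make the last, pixel-level category concrete, here is a toy sketch of clone detection: it flags identical pixel blocks that appear in more than one place, which is how crude clone-stamp edits betray themselves. Real tools compare robust block features rather than exact pixel values, and the grid in the test is invented for illustration.

```python
from collections import defaultdict

def find_cloned_blocks(image, block=2):
    """Find exact duplicate pixel blocks in a 2-D grayscale grid.

    Returns {patch: [(row, col), ...]} for every block-sized patch
    that occurs at more than one position. Exact matching is a toy
    stand-in for the robust features real clone detectors use.
    """
    seen = defaultdict(list)
    rows, cols = len(image), len(image[0])
    for y in range(rows - block + 1):
        for x in range(cols - block + 1):
            patch = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                          for dy in range(block))
            seen[patch].append((y, x))
    return {p: locs for p, locs in seen.items() if len(locs) > 1}
```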
  • By Determining Which Camera Was Used To Take The Photo

Law enforcement officials use various statistical methods and sensor pattern noise to determine the device that captured a picture. A unique fingerprint is extracted from each image, and the fingerprints are then compared for similarity.

Comparing four or five fingerprints yields more than 2,000 values. Correlating them reveals whether the images were taken with the same camera.
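The correlation step can be sketched as a Pearson correlation between fingerprint vectors: a value near 1 suggests the same sensor, a value near 0 a different one. This is a minimal sketch with toy vectors; real sensor-noise matching correlates full-resolution noise residuals, not short lists.

```python
import math

def fingerprint_correlation(a, b):
    """Pearson correlation between two sensor-noise fingerprint vectors.

    Values near 1.0 suggest the images share a camera sensor; values
    near 0.0 suggest different devices. Toy-sized vectors stand in
    for the full noise-residual images real systems compare.
    """
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (norm_a * norm_b)
```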

Furthermore, you can also match the image ID field, the time at which the photo was taken, and other parameters of the image.

  • By Finding The Exact Location Of The Image

Apart from identifying the camera, law enforcement officials also decode an image by its location. GPS coordinates are stored in the image metadata, so entering them into Google Maps reveals the exact location where the image was taken. Once the location is clear, the crime scene starts to fall into place.
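EXIF stores GPS positions as degrees, minutes, and seconds plus a hemisphere letter (the GPSLatitude/GPSLatitudeRef and GPSLongitude/GPSLongitudeRef tags), so a small conversion turns them into the decimal degrees Google Maps expects. A minimal sketch, with example coordinates chosen purely for illustration:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF degrees/minutes/seconds plus hemisphere to decimal degrees.

    `ref` is the letter from GPSLatitudeRef ("N"/"S") or
    GPSLongitudeRef ("E"/"W"); south and west come out negative.
    """
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value
```

A latitude of 48° 51' 29.6" N converts to roughly 48.8582, which can be pasted straight into the Google Maps search box alongside the longitude.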

With these techniques, law enforcement officials decode pictures and decipher the mystery behind a crime scene.