RAW or JPEG

 PaulW 19 Apr 2024

I have loved the rabbit hole journey that the wedding photo thread took us down. UKC at its opinionated best.

I haven't played around with photos for years but did a little photoshopping previously. Is there much you can do with a RAW file that you can't do with an unedited JPEG? As I remember the editing software was very powerful and I'm sure has got more so over the years. 

 Jon Read 19 Apr 2024
In reply to PaulW:

Yes, lots, particularly at the very dark and very bright ends of the image.

 Robert Durran 19 Apr 2024
In reply to PaulW:

I've heard that it is possible to make a bride (or even a bridesmaid) look a lot more beautiful than they really are.

 Patrick1 19 Apr 2024
In reply to PaulW:

The key is to think about what a RAW file represents, in other words the "raw" output from the sensor. To turn that into a JPEG some decisions need to be made about things like contrast, sharpening, noise reduction etc., and many of those decisions are effectively irreversible once the image has been turned into a JPEG. For example, as is mentioned above, the contrast is often increased to lead to a "punchier" image, but this will be at the expense of some of the brightest and darkest areas of the picture. Cameras these days do a pretty good job of making those decisions, but they can't actually know what you intended - so if you think you might make significantly different decisions about these parameters, that's when it's worth going back to the RAW file.
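
A toy sketch in Python of that irreversibility (all numbers invented, just to show the effect of a clipped contrast boost):

```python
import numpy as np

# Hypothetical 12-bit sensor values from a bright sky (range 0..4095).
raw = np.array([3600, 3800, 3990, 4050], dtype=np.float64)

# A "punchier" in-camera conversion: scale to 8 bits with a little extra
# gain, clipping anything pushed past white (the 1.08 gain is made up).
jpeg = np.clip(raw / 4095.0 * 1.08 * 255.0, 0, 255).astype(np.uint8)
print(jpeg)                    # [242 255 255 255] - three sky tones become pure white

# Undoing the gain can't separate the clipped values again:
recovered = jpeg / 255.0 / 1.08 * 4095.0
print(recovered.astype(int))   # 3800, 3990 and 4050 all come back as ~3791
```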

 wintertree 19 Apr 2024
In reply to PaulW:

Pointless Bugbear Alert - a modern “raw” is nowhere near the raw sensor output.  Lots of pixel and column non-uniformity corrections etc.  That’s fit for purpose but it’s not really raw.  A genuine raw is not pretty.

If your camera has more than 8 bits of dynamic range, using JPEG throws that away, limiting what you can recover in post-processing in a less-than-ideally exposed scene.  There’s HEIF for lossy compression of high dynamic range data.
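
A rough numpy sketch of what the 8-bit bottleneck does to shadows (values invented; a real JPEG encode applies a gamma curve first, which softens but doesn’t remove the effect):

```python
import numpy as np

# Hypothetical deep-shadow values from a 12-bit raw file (range 0..4095).
raw12 = np.array([4, 7, 11, 15], dtype=np.float64)

# Naive 8-bit encode: 4095/255 means ~16 raw counts per JPEG level.
jpeg8 = np.round(raw12 / 16.0).astype(np.uint8)
print(jpeg8)                      # [0 0 1 1] - four shadow tones collapse to two

# A ~4-stop shadow boost in post:
print((raw12 * 16).astype(int))   # [ 64 112 176 240] - real tonal separation
print(jpeg8 * 16)                 # [ 0  0 16 16]     - banding; the detail is gone
```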

The only times I find myself using RAWs are for moon and sunspot photos and for professional scientific work where I’m looking at really small, sharp features where the JPEG/HEIF compression is noticeable.  If I was worried about getting the perfect exposure I’d rather take exposure-bracketed HDR images and select the best than muck about extensively with RAWs.  But other than scientific work I’m not a professional photographer!

Edit: I agree with Patrick’s point about how irreversible decisions are made in-camera for making JPEGs, although modern cameras are very good at this.

Post edited at 17:56
 Dan Arkle 19 Apr 2024
In reply to PaulW:

This photo from Adobe illustrates the difference well. 

To the left is a JPEG: there is little detail in the shadows. This could be brightened in software, but not by much.

In contrast, the one on the right was taken in raw and edited. A fairly quick edit can make those shadows brighter, bringing out all that detail and colour. 


 wintertree 19 Apr 2024
In reply to Dan Arkle:

A very quick exposure boost in “iOS Photos” suggests there’s plenty of detail in the shadows in the JPEG!  You should post it full res and see what good photo tuning types can do with it.  Not as good as the RAW but better than the unedited example!

Comparing an unedited JPEG with an edited RAW is mixing two different things.

Post edited at 18:19
 Marek 19 Apr 2024
In reply to Jon Read:

> Yes, lots, particularly at the very dark and very bright ends of the image.

And colour balance. A JPEG has had the camera WB setting, camera profile and sRGB profile applied and there's no going back (with any accuracy).
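
A minimal sketch of why, assuming a simple gamma-only encode (gains and pixel values invented):

```python
import numpy as np

# White balance on linear raw data is just a per-channel gain - exactly
# invertible. (Gains and the pixel value below are invented.)
rgb_linear = np.array([0.30, 0.45, 0.60])   # one pixel, linear sensor RGB
wb_gain    = np.array([2.0, 1.0, 1.5])      # hypothetical daylight multipliers

balanced = rgb_linear * wb_gain             # [0.60, 0.45, 0.90]
undone   = balanced / wb_gain               # back to the original, exactly

# A JPEG stores the *balanced* values after gamma, clipping and 8-bit
# rounding, so any later "re-balance" works on mangled numbers:
jpeg = np.clip(np.round(balanced ** (1 / 2.2) * 255), 0, 255) / 255
rebalanced = (jpeg ** 2.2) / wb_gain
print(rebalanced)                # close-ish to rgb_linear, but the rounding
print(rgb_linear)                # (and any clipped channel) can never be undone
```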

 wert 19 Apr 2024
In reply to Robert Durran:

I’ve heard that it is possible to make a groom (or even a best man) look a lot more beautiful than they really are.

 craig h 19 Apr 2024
In reply to PaulW:

There is so much more you can do. I'm glad that when I started digital photography I more often than not took both. It did take many years until I stopped just trying to edit and post jpegs and learnt how to edit raw files.

Still have a back catalogue I go back to and very rarely find that the edited jpeg from the time comes close to what an edit on the raw file results in.

For a quick click-and-post image, though, I'm sure most folk won't notice, as they only look at it briefly. That's one reason sports and news photographers submit jpegs to their agencies: with limited editing they are quicker to upload. But if it was for a magazine or an article they would take the time over the raw file.

 timparkin 20 Apr 2024
In reply to PaulW:

Yes, there is a huge amount extra in a RAW file. A jpeg is optimised by removing all the data you don't need to present the image as is; it purposefully throws away data.

Also, if you ever want to enter a photo competition, most will want to see a raw as proof of a 'natural' image. 

If you like the look of an in-camera jpg - take jpg and raw.

 Alkis 20 Apr 2024
In reply to PaulW:

Yes, the RAW file contains the full dynamic range captured by the sensor.

 Gordon Stainforth 20 Apr 2024
In reply to PaulW:

Talking of RAW files, who here has experience of shooting RAW files on an iPhone, using the ProCamera or Halide apps (which seem to be the main contenders)?  I have an iPhone 13. Recommendations/comments would be welcome, before I start shelling out any money ...

 AllanMac 20 Apr 2024
In reply to PaulW:

> I haven't played around with photos for years but did a little photoshopping previously. Is there much you can do with a RAW file that you can't do with an unedited JPEG? As I remember the editing software was very powerful and I'm sure has got more so over the years.

Very much so. The 'floor to ceiling' density range in a RAW file is significantly deeper than that from a compressed jpeg. It's similar to the difference between old-school negative and transparency films in that respect. There was leeway for dodging and burning when printing on an enlarger from a negative - not as much, if at all, when printing from a transparency.

I sometimes use jpegs (Fujifilm jpegs are among the best, with excellent film simulations), but more often than not I'm not that keen on having in-camera tech deciding what the editing parameters are in an image, especially in complex lighting conditions. A photo should be as accurate as possible to how I saw the scene, rather than how the camera saw it.

Controlling colour balance, sharpening and density range myself from a RAW file, means I can more accurately replicate what I actually perceived at the time of taking the shot.

 Robert Durran 20 Apr 2024
In reply to AllanMac:

> I sometimes use jpegs (Fujifilm jpegs are among the best, with excellent film simulations), but more often than not I'm not that keen on having in-camera tech deciding what the editing parameters are in an image, especially in complex lighting conditions.  

One of the good things about Fuji film simulations is that you can get them from the RAWs with one click in processing. I usually use the standard slide one as my starting point.

 SouthernSteve 20 Apr 2024
In reply to Gordon Stainforth:

I have a 13 Pro Max - definitely worth shooting RAW - use the settings so that you don't have to choose it every time. Large files and not a patch on a proper camera in low light – but it's in your pocket and always available which is so good.

 Gordon Stainforth 20 Apr 2024
In reply to SouthernSteve:

Mine is the iPhone 13 and not the 13 Pro, unfortunately ...

 Frank R. 21 Apr 2024
In reply to Gordon Stainforth:

Smartphones are an interesting departure from the old adage that raw is better than jpeg.

Because of computational photography. A modern smartphone sensor can take lots of partial photos in just a few or tens of milliseconds – something no larger camera sensor can do, because of the slower readout speeds imposed by physics and engineering constraints, even if the much larger sensor is otherwise much better – and algorithmically combine them in some really clever ways to get a much higher dynamic range mapped into the final photo, with detail in both the deepest shadows and the brightest highlights. Something that wouldn't be possible from just a single raw capture off the very same sensor.
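
A crude numpy sketch of the stacking idea (electron counts and noise figures invented; real pipelines also align frames and weight them cleverly):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented figures: a dim patch delivering ~20 photo-electrons per short
# exposure, with 5 e- of read noise added to every frame.
signal, read_noise, n_frames = 20.0, 5.0, 16

frames = rng.poisson(signal, n_frames) + rng.normal(0, read_noise, n_frames)

single  = frames[0]        # one short exposure: signal buried in noise
stacked = frames.mean()    # burst average: noise shrinks ~sqrt(16) = 4x

print(f"single: {single:.1f} e- (true value 20), stack of {n_frames}: {stacked:.1f} e-")
# The stack behaves like a cleaner, deeper exposure - usable shadow range
# that no single raw frame from this sensor contains.
```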

Think of even the oldest iPhones' HDR mode, which was superior to their raw capture, at least in terms of HDR mapping of high-contrast scenes. The HDR jpegs simply captured much more of the highlights (skies) and shadows than was ever possible for the physical sensor in a single exposure, given its very small size.

That obviously won't likely ever make a phone superior to a camera with a physically much larger sensor, as basic physics still pretty much applies (bigger is better), but it can definitely make a difference on the very same device – on the oldest iPhone that offered both jpeg HDR mapping and raw capture (5s I think, at least using 3rd-party apps for the raw files?), the jpegs were often superior even to carefully edited raw files, at least in terms of exposure (white balance is another thing).

Of course, some super-expensive phones now offer the same computational HDR modes in some kind of raw file format as well, blurring the boundaries even more, but that's still not always the norm.

Post edited at 00:33
 Gordon Stainforth 21 Apr 2024
In reply to Frank R.:

Thanks for your detailed and fascinating account, Frank. I didn’t know a fraction of that.

 Marek 21 Apr 2024
In reply to Frank R.:

Sorry, but there's a lot of 'marketing and pseudo-engineering' in that.

(a) The issues with JPEGs have little to do with the details of the sensor - it's the limitations in bit depth and compression encoding, which are the same whether it's a phone or Hubble. Converting the RAW info into a JPEG is basically a decision about what data you can get away with throwing away (~90%) and what you keep (~10%). With in-camera JPEG conversion that decision is taken by the camera designer based on 'average' usage of that camera in the expected population. If you download the RAW, the decision is yours (the photographer's).

(b) Sensor readout speed isn't particularly dependent on the well-depth of the sensor, so a 50Mpx camera sensor can be read in about the same time as a 50Mpx phone sensor. In fact the ultimate limitation is probably heat generated by the readout process, which will cause a higher temperature rise in a physically smaller (phone) sensor (and hence more noise). In practice there's little point in too fast a speed, since it's exposure time (time to collect the photons) that determines how fast you can take pictures. Yes, there are sensors that have variable well-depth pixels, but the benefits are doubtful (it was tried in cameras a while ago). Also phones only have an electronic shutter (rather than a mechanical one like cameras), which limits how much flexibility they have in readout speed (as far as I'm aware).

(c) Just about every modern camera does in-camera HDR generation (not just phones) from multiple exposures, but as in (a) above, that just means that the camera designer decides which 99% of your data to throw away when creating a JPEG. I'd rather be in control of that decision (as the photographer).

(d) You are correct that 'computational photography' can make new phones (and cameras) output visually more impressive JPEGs than old ones. However a lot of the 'value' of CP (I use the word with trepidation) relates to other functions such as AI denoising/sharpening (aka inventing detail that wasn't captured), removing stuff you don't want, and generally changing the image from 'what is there' to 'what the designer thinks should be there' or 'what you would like there' (aka fiction). Nowt wrong with that in the right context, but that's not what most 'photographers' want*.

(e) You are correct in that some phones'/cameras' RAW data is pre-processed at the pixel level (Nikon and Sony have been particularly guilty of that) and the result has generally been derision and lost sales. Photographers who benefit from RAW generally want pure RAW (i.e., count how many photons hit that site), not some ambiguously modified data.

Just for background, I used to be a physicist and silicon chip designer, and I have written RAW and JPEG software as well as post-processing software. I know about silicon chip limitations and data processing. And (unfortunately) about marketing.

* One of my cameras actually has really good AI-based target recognition (e.g., birds) and focussing. I've been in situations where I knew there was a bird in a tree but couldn't see where. The camera found it the moment I pointed it at the tree!

Post edited at 08:35
 wintertree 21 Apr 2024
In reply to Marek:

> Sensor readout speed isn't particularly dependent on the well-depth of the sensor,

Analog-to-digital conversion speed absolutely depends upon the bit depth of the conversion.  This doesn’t have to be linked to well depth, but if you want your 10- or 12-bit image to have anything but noise in the bits above 8, it’s going to need more well depth.  (I’ve worked with digital imaging as a tool in labs for 20 years and have built and sold a very high speed camera in the past.)

> (c) Just about every modern camera does in-camera HDR generation (not just phones) from multiple exposures

dSLRs and their mirrorless replacements don’t.  You can turn on exposure bracketing but it’s not automatic, and I don’t know how many composite in camera (mine doesn’t) vs leaving it to post.

>  In practice there's little point in too fast a speed, since it's exposure time (time to collect the photons) that determines how fast you can take pictures. Yes, there are sensors that have variable well-depth pixels, but the benefits are doubtful (it was tried in cameras a while ago). Also phones only have an electronic shutter (rather than a mechanical one like cameras)

With modern sensor SNRs and a wealth of F/1.2 and F/1.4 lenses out there, for many applications photons are not the limit any more.  Staggering changes in the last 15 years in dSLRs etc.

Is not the slow readout speed what defines the limit of an electronic shutter?  Global reset, exposure, rolling readout being the electronic shutter pattern?  Once the readout time becomes an appreciable fraction of the exposure time, you have a non-uniform exposure.

For my EOS R6, the electronic shutter isn’t fast enough with 10-bit readout, so HDR images are only available with the mechanical shutter.  This says to me that speed of readout (really of the ADC part) and accessible well depth are coupled - at least when you go to big sensors with big pixels and meaningfully large well depths…

I’ve used CCDs with an instantaneous electronic shutter where charge is shunted into interlaced dark rows in one fell swoop and then they’re read out during the next exposure, with microlenses putting all the light into the other half of the pixels.  There is no similarly elegant solution for CMOS sensors that I’m aware of.

> And (unfortunately) about marketing.

50 Mpix on a smartphone!  Good way to sell more storage I suppose…

Post edited at 09:37
 Marek 21 Apr 2024
In reply to wintertree:

> dSLRs and their mirrorless replacements don’t.  You can turn on exposure bracketing but it’s not automatic, and I don’t know how many composite in camera (mine doesn’t) vs leaving it to post.

All my Panasonic (mirrorless) cameras have in-camera HDR as well as bracketing.

> With modern sensor SNRs and a wealth of F/1.2 and F/1.4 lenses out there, for many applications photons are not the limit any more.  Staggering changes in the last 15 years in dSLRs etc.

Yes and no. Photon count via an F1.2 FF lens is not the same as for a small sensor for a given FoV. So yes for big sensors, less so for terminally small ones (phones).

> For my EOS R6, the electronic shutter isn’t fast enough with 10-bit readout, so HDR images are only available with the mechanical shutter.

Must be camera-design dependent. In my Panasonics, the electronic shutter is faster than the mechanical one.

> I’ve used CCDs with an instantaneous electronic shutter where charge is shunted into interlaced dark rows in one fell swoop and then they’re read out during the next exposure, with microlenses putting all the light into the other half of the pixels.  There is no similarly elegant solution for CMOS sensors that I’m aware of.

Indeed. But there's usually a trade-off associated with serious (as above) microlensing (in the optical lens design).

Post edited at 10:05
 Marek 21 Apr 2024
In reply to Marek:

> Yes and no. Photon count via an F1.2 FF lens is not the same as for a small sensor for a given FoV. So yes for big sensors, less so for terminally small ones (phones).

Which reminds me of another thing about small sensors I don't like: very little control of DoF. Yes, you could argue that DoF is an 'aberration' of old-fashioned cameras, but I like to control it my way. And no, artificially blurring bits of the resulting image is not the same!

 Frank R. 21 Apr 2024
In reply to Marek:

Not wanting to go into the technicalities much (you seemed to be hung up on specific technicalities a bit, perhaps due to your specific background, so you might have missed my overall point), but please still correct me if wrong.

The point – contrary to general (and generally valid!) photography opinion, an HDR jpeg/heif from e.g. an iPhone SE2 can indeed have a higher apparent dynamic range than a single-shot raw from the same iPhone SE2. Both will get tonemapped to SDR anyway (the former by software, the latter by the user), but even if you are relinquishing control over tonemapping to the "smart" HDR of the phone, you can't make up details that are blown out in the raw anyway.

Readout speed matters for HDR integration because, for whatever reason, modern phones do seem to have much faster line-by-line readout speeds than most big cameras, even if the cameras have much better sensors in the first place (any full-frame sensor will still run circles around a smartphone sensor).

And a faster readout speed helps there, because your subjects won't move that much during a quick short burst – especially with tricks like taking only the shortest exposures, throwing out the lowest, noisiest bits and integrating the rest, unlike the traditional HDR on your Panasonic that goes from e.g. 1/125s to 1/2000s. That's 5 frames, but if I can take 50 1/2000s frames in the same amount of time and integrate all those, that's a bit different, even if I throw bits of them away.
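
Back-of-envelope numbers (shutter speeds from above; the one-stop steps in between are my assumption):

```python
# Total exposure time: classic 5-frame HDR bracket vs a 50-frame burst.
bracket = [1/125, 1/250, 1/500, 1/1000, 1/2000]   # assumed one-stop steps
burst   = 50 * (1/2000)                           # 50 identical short frames

print(f"bracket: {sum(bracket)*1000:.1f} ms, burst: {burst*1000:.1f} ms")
# bracket: 15.5 ms, burst: 25.0 ms - comparable wall-clock time, but each
# burst frame is only 1/2000 s, so motion within any one frame is far smaller.
```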

As in: the latest iPhone takes around 5 ms* to read out a frame, IIRC, while a mid-price full-frame digital will take around 5-10x longer. Why is that? I can only speculate, but it might be anything from the overall cost ratios (a smartphone sensor's bare silicon is much cheaper than a 35mm sensor's bare silicon, so maybe more margin to put more circuitry in?) to much more technical reasons (lower bit depths, with the lowest bits thrown out in an HDR burst, so faster pipelines compared to full 16-bit or whatever; more expensive ADCs required for the bigger ones; whatever?). Any ideas?

*: going by their commonly quoted rolling shutter figures, not the actual 100% correct readout time, so just an approximation, but comparable.

Anyway, that's technicalities better suited to SIGGRAPH or some technical forum, even if intriguing. And please correct me for any mistakes I might have made.

TL;DR: The point was simply that raw is not always better in some smartphones, as the "smart" jpeg/heif made by processing an invisible burst could still show detail in the shadows and highlights simply unobtainable from a single raw from the same smartphone.

But for bigger cameras, raw is almost universally better if you want to tinker with the exposure or developing yourself, obviously.

Which gets more complicated with marketing, as you rightly note (there is a special level of hell reserved for those people), as in e.g. Apple offering "ProRAW" on only some phones: it has some of their SmartHDR benefits but offers more latitude in processing manually yourself, while the non-"Pro" phones are kept from using the same, even though they are likely perfectly capable of it technically.

Obviously, there is still a lot of marketing BS around CP and some total blunders, like Samsung's infamous "You have a moon in your picture? Well, let's paste a NASA telescope picture over your own moon picture!"

Post edited at 20:35
 Frank R. 21 Apr 2024
In reply to Marek:

> Which reminds me of another thing about small sensors I don't like: very little control of DoF. Yes, you could argue that DoF is an 'aberration' of old-fashioned cameras, but I like to control it my way. And no, artificially blurring bits of the resulting image is not the same!

I'd guess 1" sensors with a fast lens are around the sweet spot for smartphones. Yes, still nowhere near the control over DoF of an m4/3 or a full-frame sensor, but at least in the range where a physically adjustable aperture starts to make some sense. At least it offers a modicum of shallow DoF compared to the smaller smartphone sensors, while still being small enough to physically fit in a phone. It will still get diffraction-limited at the smaller openings, but that's not such a problem anyway, as who cares about diffraction apart from pixel-peepers and archival technicians.

 Gordon Stainforth 21 Apr 2024
In reply to Frank R.:

There’s a fun, mock shallow depth of field ‘portrait’ setting on the iPhone which throws the background out of focus rather effectively. Overall, the convenience, the high quality of the results, and the startlingly good low-light capabilities make it a joy to use - and to concentrate, unobtrusively, on the subject matter. By FAR the biggest downside is not having a proper viewfinder - very difficult to see the screen properly in bright daylight/sunny conditions. Yet there’s something about point and shoot with it that works remarkably well. The video (and the way it smooths out hand movements) is also spectacularly good, though very memory-intensive.

I say all this as one who spent years with traditional cameras (35mm, 120 and 5x4 plate cameras), Manfrotto tripods and multiple expensive/heavy lenses. Spotmeters, etc.

Post edited at 21:12
 wintertree 21 Apr 2024
In reply to Frank R.:

> As in the latest iPhone takes around 5ms* to read it, IIRC, while a mid‑price full‑frame digital will take around 5‑10x longer. Why is that?

Physics.  The larger a sensor is, the longer it takes for information from a pixel (electrical charge) to get to the readout amplifier, because the average distance from pixel to amplifier grows with sensor size.  Main reasons:

  • Capacitance of readout bus lines increasing with their length - i.e. with sensor size.  The rise time on the bus is slowed with increasing capacitance.
  • Speed of electricity meaning it takes charge longer to cross larger sensors.

Capacitance is the limiting issue, I think, making up about 80% of the time on a 24 mm x 36 mm sensor, but both get worse with sensor size.

The obvious solution is to have many readout amplifiers, but this adds complexity that cuts into the light-collecting ability.  Notably, the EOS R3 has an 8.3 ms frame period with electronic shutter on a 24 x 36 mm sensor.  They’ve back-thinned the sensor wafer and then bonded a separate readout wafer to the back of it; this presumably is being used to give multiple readout amplifiers without the limits from having all those switching busses in the imaging layer.
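
A deliberately crude scaling sketch of that argument in Python (constants invented; real sensors already read many columns in parallel, so treat it as a cartoon):

```python
# Toy model (all constants invented): if both bus resistance R and bus
# capacitance C grow roughly linearly with sensor edge length L, the RC
# settling time grows like L^2 unless you add more readout amplifiers.
def relative_readout_time(edge_mm: float, n_amplifiers: int = 1) -> float:
    r = edge_mm                      # R ~ bus length
    c = edge_mm                      # C ~ bus length
    return r * c / n_amplifiers      # parallel amplifiers split the job

phone = relative_readout_time(7.6)   # roughly a 1/1.3"-class sensor edge
ff    = relative_readout_time(36.0)  # full-frame long edge

print(ff / phone)                           # ~22x slower at equal amplifier count
print(ff / relative_readout_time(36.0, 8))  # a stacked readout wafer claws it back
```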

 Graeme G 22 Apr 2024
In reply to PaulW:

Apologies if already stated, but what I love about editing RAW is the ability to alter different sections or components of a photo. Which I don’t believe is available with JPEG?

 FactorXXX 22 Apr 2024
In reply to Graeme G:

> Apologies if already stated, but what I love about editing RAW is the ability to alter different sections or components of a photo. Which I don’t believe is available with JPEG?

You can use layers in Photoshop, via the lasso tool etc., to edit different sections of the photo.
In Lightroom, or Camera Raw in Photoshop, you can also adjust pretty much as per RAW, but with the limitations of the data retained in the JPEG. E.g. you can adjust Highlights and Shadows etc. with sliders, but you probably won't have as much leeway with regards to actual detail at the far ends of the histogram.

