An interesting article, but it seems like quite an oversight to not even mention dithering techniques, which in my opinion give much better results.
I've done some development work in Pico-8, and some time ago I wrote a plugin for the Aseprite pixel art editor to convert an arbitrary image into the Pico-8 palette using Floyd-Steinberg dithering [0].
I ran their example image through it, and personally I think the results it gave were the best of the bunch: https://imgur.com/a/O6YN8S2
[0] https://github.com/aquova/aseprite-scripts/blob/master/pico-...
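The plugin itself is a Lua script for Aseprite, but the core idea fits in a few lines of Python (Pillow + NumPy). Roughly something like this sketch, which just matches each pixel to its nearest PICO-8 default color in plain RGB and diffuses the quantization error with the standard Floyd-Steinberg weights:

import numpy as np
from PIL import Image

# PICO-8 default 16-color palette, as RGB triples.
PICO8 = np.array([
    (0x00, 0x00, 0x00), (0x1D, 0x2B, 0x53), (0x7E, 0x25, 0x53), (0x00, 0x87, 0x51),
    (0xAB, 0x52, 0x36), (0x5F, 0x57, 0x4F), (0xC2, 0xC3, 0xC7), (0xFF, 0xF1, 0xE8),
    (0xFF, 0x00, 0x4D), (0xFF, 0xA3, 0x00), (0xFF, 0xEC, 0x27), (0x00, 0xE4, 0x36),
    (0x29, 0xAD, 0xFF), (0x83, 0x76, 0x9C), (0xFF, 0x77, 0xA8), (0xFF, 0xCC, 0xAA),
], dtype=float)

def nearest(color):
    # Palette entry with the smallest squared RGB distance to this color.
    return PICO8[((PICO8 - color) ** 2).sum(axis=1).argmin()]

def dither_to_pico8(path, out_path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    h, w, _ = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x].copy()
            new = nearest(old)
            img[y, x] = new
            err = old - new
            # Push the quantization error onto not-yet-processed neighbours.
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    Image.fromarray(img.clip(0, 255).astype(np.uint8)).save(out_path)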
They don't explicitly state it in the article that I can see, but the PICO-8 is 128x128, and it appears that their output images were constrained to that. Your dithered images appear to be much higher resolution. I'd be curious what dithering would look like at 128x128!
Dithering is used quite frequently in PICO-8 projects at the “native” (128x128) resolution. Here’s an example from a few years ago: https://www.lexaloffle.com/bbs/?pid=110273#p
Direct link to the image, though you may have to fetch it through a non-browser to avoid it redirecting back to the stupid HTML: https://i.imgur.com/y93naNw.png
(I don’t know how it works for others, but it has always been atrocious for me. Their server is over 200ms away, and even with uBlock Origin blocking ten different trackers it takes fully 35 seconds before it even begins to load the actual image, and the experience once it’s finished is significantly worse than just navigating directly to the image anyway. Tried it in Chromium a couple of times, 55 and 45 seconds. Seriously, imgur is so bad. Maybe it ain’t so bad in the USA, I don’t know, but in Australia and in India it’s appallingly bad. You used to be able to open the image URLs directly, but some years ago they started redirecting to the HTML in general if not loading as a subresource or maybe something about accept headers; curl will still get it directly.)
https://addons.mozilla.org/firefox/addon/fucking-jpeg/
I too thought about dithering while reading the article, but couldn't have imagined the result would be this much better. Thanks for sharing!
Dithering is sort of like having the ability to "blend" any two colors of your palette (possibly even more than any two, if you use it well), so instead of working with a 16-color palette, it's like working with a 16+15+14+...+1 = 136-color palette. It's a drastic difference (at the cost of graininess, of course).
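Quick sanity check of that count, if you like Python: it's the 16 solid colors plus one 50/50 blend per unordered pair of colors.

from math import comb
print(16 + comb(16, 2))   # 16 solids + 120 pairwise blends = 136
print(sum(range(1, 17)))  # the same number written as 16 + 15 + ... + 1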
Tried this online tool https://onlinetools.com/image/apply-dithering-to-image and both Floyd-Steinberg and Atkinson look great, Atkinson a bit better.
Dithering is still more important than is commonly known, even with 24-bit "true color". For example, imagine that you had a gradient that goes from white to black across a 1920x1080 monitor. 24-bit color only gives you 256 levels per channel (so 256 shades of gray), meaning a naive gradient implementation will result in 256 discrete bands of different grays, each about 8 pixels wide (about as wide as this "w" character).
You might not think that you'd notice that, but it looks surprisingly bad. Your eyes would immediately notice that there are "stripes" of solid gray instead of a smooth continuum. But if you apply dithering, your eyes won't be able to notice (at least not easily). It will all look smooth again.
In a situation like this, I like to use "blue noise" dithering, but there are scores of dithering methods to choose from.
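If you want to see the effect for yourself, here's a rough sketch (with plain white noise standing in for blue noise; a real blue-noise dither would threshold against a precomputed blue-noise texture instead):

import numpy as np
from PIL import Image

w, h = 1920, 1080
# Ideal, continuous black-to-white ramp across the full width of the screen.
gradient = np.tile(np.linspace(0.0, 255.0, w), (h, 1))

banded = np.round(gradient)            # naive 8-bit quantization: ~8 px wide bands
noise = np.random.rand(h, w) - 0.5     # +/- half a quantization step of white noise
dithered = np.round(gradient + noise)  # the bands break up into fine grain

Image.fromarray(banded.clip(0, 255).astype(np.uint8), "L").save("banded.png")
Image.fromarray(dithered.clip(0, 255).astype(np.uint8), "L").save("dithered.png")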
Yeah I was counting on dithering too
Something I've noticed from automatic palette mappings is that they tend to produce large blocks of gray that a human artist would never consider. You can see it in the water for most mappings in this sample, and even some grayish-brown grass for sRGB. It makes sense mathematically, since gray is the "average" color, and pixel art palettes are typically much more saturated than the average colors in a 24-bit RGB image. It looks ugly regardless.
CAM16-UCS looks the best because it avoids this. It gives us peach-and-pink water that matches the "feel" of the original image better. I wonder if it's designed to saturate the image to match the palette?
I notice that many palettes tend to follow the "traditional" color wheel strictly, without defining pink as a separate color on the main wheel.
The PICO-8 actually has 32 colors from which we can choose any 16. I understand that making use of the default palette is the article's intent; I'm just thinking aloud.
If one were wanting to render an image on the PICO-8 itself, the ideal algorithm would select the best 16 colors from the full 32-color palette which, when dithered, produce the most perceptually accurate version of the original image in 128x128 pixels. Were I a smarter man I would create this, but alas.
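The truly optimal choice would need a search over the dithered results, but even a naive heuristic might get reasonably close: map every pixel of the source to its nearest of the 32 colors, keep the 16 that get picked most often, and dither against that subset. A rough Python sketch, where PICO8_32 is assumed to be a (32, 3) array of all the RGB triples (the 16 defaults plus the 16 "secret" colors):

import numpy as np
from collections import Counter

def pick_subpalette(img, pico8_32, k=16):
    # img: (H, W, 3) uint8 array of the source image.
    pixels = img.reshape(-1, 3).astype(float)
    # Squared RGB distance from every pixel to each of the 32 palette entries.
    d = ((pixels[:, None, :] - pico8_32[None, :, :]) ** 2).sum(axis=2)
    nearest_idx = d.argmin(axis=1)
    # Keep the k entries most often chosen as some pixel's nearest color.
    keep = [i for i, _ in Counter(nearest_idx).most_common(k)]
    return pico8_32[keep]

You'd then feed the returned subset into the dithering step in place of the default 16.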
I think this is how the original Myst image compression worked. Every image used an 8-bit palette, but each palette was custom for each image.
I was looking forward to seeing a dithered [0] version but it was missing. In addition, shouldn't OKLAB already be perceptually uniform and not require luma weighting?
[0]: https://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg_dither...
Kinda off-topic but for a while I’ve had an idea for a photography app where you’d take a picture, and then you could select a color in the picture and adjust it till it matched the color you see in reality. You could do that for a few colors and then eventually just map all the colors in the picture to be much closer to the perceived colors without having to do coarser post-processing.
Even if you got something very posterized like in the article, I think it could at least be a great reference for a more traditional processing step afterwards. I always wonder why that doesn't seem to exist yet.
I wonder if this isn't like including a Macbeth Chart [1] in the photo and then trying to color match your image so that the swatches on the chart look the same digitally as they do in real life.
One bottleneck of course is that the display you are on, where you are viewing the image, is likely not to have a gamut rich enough to even display all the colors of the Macbeth chart. No amount of fiddling with knobs will get you a green as rich as reality if there is an intense green outside the display's capabilities.
But of course you can try to get close.
[1] https://en.wikipedia.org/wiki/Color_chart
(I seem to recall, BTW, that these GretagMacbeth color charts are so consistent because they represent each color chemically. I mean, I suppose all dyes are chemical, but I understood that there was little to no mixing of pigments to get the Macbeth colors. I could be wrong about that, though. My first thought when I heard it was of sulfur: for example, how pure sulfur, in one of its states, must be the same color every time. Make a sulfur swatch and you should be able to consistently reproduce it.)
Sounds like a lot of work for something which wouldn't produce that good of a result. If you've ever tried to pick a colour from a picture with the eyedropper tool, you quickly realise that what you see as one colour is in fact a disparate set of pixel values, and it can be quite hard to get the exact thing you want. So right there you hit the initial hurdle of finding and mapping the colour to change. Finding the edges would also be a problem.
Not to mention every screen is different, so whatever changes you’re doing, even if they looked right to you in the moment, would be useless when you sent your image to your computer for further processing.
Oh, and our eyes can perceive it differently too. So now you’re doing a ton of work to badly change the colours of an image so they look maybe a bit closer to reality for a single person on a single device.
So this would be a subjective alternative to matching to color cards? What would the benefit be over a precise/objective match?
This is essentially what you do as step 1 when color correcting in DaVinci Resolve, but only for white (or anything that's grayscale). Select a spot that's white/gray, click on the white balance picker, and the white balance is set.
It's not perfect of course, but it gets a surprisingly good result for close to zero effort.
I don't know anything about the PICO-8, but that is an interesting palette. It reminds me of a more saturated version of the C64.
Other systems of the time used either a simple RGBI formula with modifications (IBM, with its "CGA brown") or a palette evenly spaced around the NTSC hue wheel (Apple II, or again the CGA in composite output mode).
Result of granddaddy https://web.archive.org/web/2000/http://fordy.planetunreal.g...
→ https://files.catbox.moe/2uuqka.png
It's bad. :-o
The main issue with any pixel-to-pixel colour mapping approach is that we don't perceive individual pixels, so you will not get a good overall effect from pixel-to-pixel mapping (the article touches on this by talking about structure, but you don't have to go that far to see massively improved results).
Any serious attempt would involve higher level dithering to better reproduce the colours of the original image and dithering is one of those topics that goes unexpectedly crazy deep if you are not familiar with the literature.