Hello FITC #WebU17

Thank you again for having me and sticking around for the last slot of the day. Here are my slides.

ADDENDUM

I simply wanted to add some extra notes and clarifications where I may not have had the time to do so during the talk.

UX is a very big part of web performance, and it should be treated as a top priority. This is the very reason why Facebook will remind you of how news appears in their feed. They go on to remind you of some best practices, which essentially come down to compression overall, and image compression specifically.

I showed you a shot of a world map of mobile connectivity. This was according to Facebook's data on their 1B+ users accessing the social network. Something I like to remind people of is that it's never so much about bandwidth when thinking of access to the web: it's all about latency. That point comes from a well-referenced document released some time back, and it still proves correct to this present day.

The MotionMark test I did is one you can also run yourself. It gives you a quick and painless overview of your device's graphics capabilities. You should try it as well; it's eye-opening: MotionMark benchmark.

Loosely related, image decoding can be a big deal, as it's part of the many run-time optimizations browsers make to keep things as smooth as possible for users: again, back to user experience. So it's important that we size images properly and don't send wasteful image data. The chart I posted was there to show how image processing becomes exponentially more taxing as the size grows. You can see what happens after 500px per side in this quick test comparing normal x1 and x2 Retina™ images. Remember that on paper, for every pixel in an x1 image, you need 4x as many pixels for an x2 Retina™ image, and 9x for an x3 screen (iPhone 8 anyone?). And the more pixels there are, the greater the decoding work, which also means the greater the memory use. #thinkAboutThat.
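If you want to put rough numbers on that, here's a minimal sketch. It assumes a typical 4 bytes per pixel for an RGBA decode (actual browser internals vary), and decodedBytes is just an illustrative helper name:

```typescript
// Rough estimate of the memory a browser needs once an image is decoded.
// Assumes 4 bytes per pixel (RGBA); real browsers may use different formats.
function decodedBytes(cssWidth: number, cssHeight: number, dpr: number): number {
  const deviceWidth = cssWidth * dpr;
  const deviceHeight = cssHeight * dpr;
  return deviceWidth * deviceHeight * 4;
}

// A 500x500 CSS-pixel image at x1, x2 and x3:
for (const dpr of [1, 2, 3]) {
  const mb = decodedBytes(500, 500, dpr) / (1024 * 1024);
  console.log(`x${dpr}: ~${mb.toFixed(2)} MB decoded`); // ~0.95, ~3.81, ~8.58 MB
}
```

Same 500px square on screen, roughly nine times the decode memory by the time you reach x3.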

I made a mention of average page weight in the talk, and you can certainly take a look at some of the data and trends yourself. They're located at the HTTP Archive repository.

Now let's quickly chat about chroma subsampling and Photoshop. I'd like to be as clear as possible - since it was such a well-behaved crowd!
I mentioned that anything greater than a 50% quality setting using Save for Web, or a setting > 6 in Save As JPEG, will result in a YCbCr pattern of 4:4:4, whereas the target is 4:2:0. The goal should not be to save at a low quality setting. The mention was mostly to remind you that you still have some optimizations left when saving from Photoshop. You are still recommended to save your JPEG at a high quality setting, but then send it through something like ImageOptim to a) strip whatever metadata was added or remains, and b) get that 4:2:0 subsampling we desire.
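If you want to verify what you actually shipped, the sampling factors live in the JPEG's SOF (start of frame) marker. Here's a minimal Node/TypeScript sketch that reads them - a rough check rather than a full JPEG parser, and photo.jpg is just a placeholder path:

```typescript
import { readFileSync } from "fs";

// Reads the per-component sampling factors from a JPEG's SOF0/SOF2 marker.
// 4:4:4 shows up as "1x1,1x1,1x1"; 4:2:0 as "2x2,1x1,1x1".
function samplingFactors(path: string): string {
  const buf = readFileSync(path);
  let i = 2; // skip the SOI marker (FF D8)
  while (i + 3 < buf.length) {
    if (buf[i] !== 0xff) { i++; continue; }
    const marker = buf[i + 1];
    if (marker === 0xff) { i++; continue; } // fill byte before a marker
    // SOF0 (baseline) or SOF2 (progressive) hold the component info.
    if (marker === 0xc0 || marker === 0xc2) {
      const components = buf[i + 9];
      const factors: string[] = [];
      for (let c = 0; c < components; c++) {
        const byte = buf[i + 11 + c * 3]; // high nibble: horizontal, low: vertical
        factors.push(`${byte >> 4}x${byte & 0x0f}`);
      }
      return factors.join(",");
    }
    if (marker === 0xda) break; // reached SOS without seeing a SOF
    if (marker === 0xd8 || marker === 0xd9 || (marker >= 0xd0 && marker <= 0xd7)) {
      i += 2; // markers with no payload
    } else {
      i += 2 + buf.readUInt16BE(i + 2); // skip segment by its 2-byte length
    }
  }
  return "no SOF marker found";
}

console.log(samplingFactors("photo.jpg")); // e.g. "2x2,1x1,1x1" means 4:2:0
```

If you'd rather not script it, ImageMagick's identify -verbose reports a jpeg:sampling-factor property with the same information.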

Following up on chroma subsampling, my mention of art direction was to remind you of the following: colour detail is less perceptible than lighting detail. That's what chroma subsampling is all about: playing around with, and ultimately removing, colour data (the CbCr of YCbCr). Removing light or luma info (Y) is more noticeable, but imagery with less luma variation - lower contrast and smoother changes in colour - is in fact easier to encode as a JPEG. Which is why you see the size difference in the rudimentary example I presented - to make the point clear. The Netflix article highlights this very concept: an animated series with a dark theme could be delivered as a very low-data stream.

When compressing Bojack Horseman, the brilliantly dark Netflix original series about a washed up former sitcom star, Katsavounidis noticed the company could achieve very high image quality with very little data. This is thanks to the simple colour palettes and large, low contrast images typical of a cartoon. - Reed Hastings, Netflix
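To put rough numbers on the chroma half of that story: with 4:2:0, Cb and Cr are each stored at a quarter of the luma resolution, which cuts the raw sample count in half before the encoder even gets to work. The actual byte savings in a finished JPEG are usually smaller than that, since chroma planes compress well anyway; rawSamples below is just an illustrative sketch:

```typescript
// Raw samples per image before entropy coding (not final file size).
// 4:4:4 keeps full-resolution Y, Cb and Cr; 4:2:0 halves Cb and Cr in both dimensions.
function rawSamples(width: number, height: number, subsampling: "4:4:4" | "4:2:0"): number {
  const luma = width * height;
  const chroma = subsampling === "4:4:4" ? luma : luma / 4; // per chroma plane
  return luma + 2 * chroma;
}

const full = rawSamples(1920, 1080, "4:4:4"); // 6,220,800 samples
const sub = rawSamples(1920, 1080, "4:2:0");  // 3,110,400 samples
console.log(`4:2:0 keeps ${(100 * sub / full).toFixed(0)}% of the raw samples`); // 50%
```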

Have some more questions? Hit me up on Twitter and I'll do my best to clarify it all. Thx!