Since time immemorial, iOS developers have been perplexed by a singular question:
“How do you resize an image?”
It’s a question of beguiling clarity, spurred on by a mutual mistrust of developer and platform. Myriad code samples litter Stack Overflow, each claiming to be the One True Solution™ — all others, mere pretenders.
In this week’s article,
we’ll look at five distinct techniques for resizing images on iOS
(and macOS, making the appropriate UIImage
→ NSImage
conversions).
But rather than prescribe a single approach for every situation,
we’ll weigh ergonomics against performance benchmarks
to better understand when to use one approach over another.
When and Why to Scale Images
Before we get too far ahead of ourselves,
let’s establish why you’d need to resize images in the first place.
After all,
UIImageView
automatically scales and crops images
according to the behavior specified by its
contentMode property.
And in the vast majority of cases,
.scaleAspectFit, .scaleAspectFill, or .scaleToFill
provides exactly the behavior you need.
```swift
imageView.contentMode = .scaleAspectFit
imageView.image = image
```
So when does it make sense to resize an image?
When it’s significantly larger than the image view that’s displaying it.
Consider this stunning image of the Earth, from NASA’s Visible Earth image catalog:
At its full resolution,
this image measures 12,000 px square
and weighs in at a whopping 20 MB of JPEG data.
20 MB is nothing on today’s hardware,
but that’s just its compressed size.
To display it,
the UIImage
needs to decode that JPEG into a bitmap.
Set that full-sized image on an image view as-is,
and your app’s memory usage will balloon to
hundreds of megabytes,
with no appreciable benefit to the user
(a screen can only display so many pixels, after all).
Not only that,
but because decoding happens on the main thread,
it can cause your app to freeze for a couple of seconds.
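To get a feel for the scale of the problem, here’s a back-of-the-envelope calculation (a sketch; actual memory use depends on the rendering pipeline):

```swift
// Rough upper bound for the decoded bitmap, assuming 4 bytes per pixel (RGBA):
let bytesPerPixel = 4
let decodedBytes = 12_000 * 12_000 * bytesPerPixel
// = 576,000,000 bytes — more than half a gigabyte, versus 20 MB on disk
```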
By simply resizing that image to the size of the image view
before setting its image
property,
you can use an order of magnitude less RAM and CPU time:
| | Memory Usage (MB) |
|---|---|
| Without Downsampling | 220.2 |
| With Downsampling | 23.7 |
This technique is known as downsampling and can significantly improve the performance of your app in these kinds of situations. If you’re interested in more information about downsampling and other image and graphics best practices, refer to this excellent session from WWDC 2018.
Now, few apps would ever try to load an image this large… but it’s not too far off from some of the assets I’ve gotten back from design. (Seriously, a 10 MB PNG of a color gradient?) So with that in mind, let’s take a look at the various ways that you can go about resizing and downsampling images.
Image Resizing Techniques
There are a number of different approaches to resizing an image, each with different capabilities and performance characteristics. And the examples we’re looking at in this article span frameworks both low- and high-level, from Core Graphics, vImage, and Image I/O to Core Image and UIKit:
- Drawing to a UIGraphicsImageRenderer
- Drawing to a Core Graphics Context
- Creating a Thumbnail with Image I/O
- Lanczos Resampling with Core Image
- Image Scaling with vImage
For consistency, each of the following techniques share a common interface:
```swift
func resizedImage(at url: URL, for size: CGSize) -> UIImage? { ... }

imageView.image = resizedImage(at: url, for: size)
```
Here, size
is a measure of point size,
rather than pixel size.
To calculate the equivalent pixel size for your resized image,
scale the size of your image view bounds by the scale
of your main UIScreen:
```swift
let scaleFactor = UIScreen.main.scale
let scale = CGAffineTransform(scaleX: scaleFactor, y: scaleFactor)
let size = imageView.bounds.size.applying(scale)
```
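And because decoding and scaling a large image can take a noticeable amount of time, a reasonable pattern (a usage sketch, not part of the original interface) is to call any of these functions off the main thread and hop back to the main queue to set the result:

```swift
import UIKit

// Usage sketch: resize on a background queue to avoid blocking the main thread,
// then assign the result back on the main queue.
DispatchQueue.global(qos: .userInitiated).async {
    let image = resizedImage(at: url, for: size)
    DispatchQueue.main.async {
        imageView.image = image
    }
}
```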
Technique #1: Drawing to a UIGraphicsImageRenderer
The highest-level APIs for image resizing are found in the UIKit framework.
Given a UIImage,
you can draw into a UIGraphicsImageRenderer context
to render a scaled-down version of that image:
```swift
import UIKit

// Technique #1
func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
    guard let image = UIImage(contentsOfFile: url.path) else {
        return nil
    }

    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { (context) in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
```
UIGraphicsImageRenderer is a relatively new API,
introduced in iOS 10 to replace the older
UIGraphicsBeginImageContextWithOptions / UIGraphicsEndImageContext APIs.
You construct a UIGraphicsImageRenderer
by specifying a point size.
The image(actions:) method takes a closure argument
and returns a bitmap that results from executing the passed closure.
In this case,
the result is the original image scaled down to draw within the specified bounds.
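If you need more control over the output, UIGraphicsImageRendererFormat lets you override the renderer’s defaults. Here’s a short configuration sketch (assuming an image with no meaningful alpha channel, and a hypothetical target size):

```swift
import UIKit

// Configuration sketch: render an opaque bitmap at an explicit 1× scale
// instead of the device's screen scale.
let size = CGSize(width: 600, height: 600) // hypothetical target size
let format = UIGraphicsImageRendererFormat()
format.scale = 1    // 1 point == 1 pixel
format.opaque = true
let renderer = UIGraphicsImageRenderer(size: size, format: format)
```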
Technique #2: Drawing to a Core Graphics Context
Core Graphics / Quartz 2D offers a lower-level set of APIs that allow for more advanced configuration.
Given a CGImage,
a temporary bitmap context is used to render the scaled image
using the draw(_:in:) method:
```swift
import UIKit
import CoreGraphics
import ImageIO

// Technique #2
func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
    guard let imageSource = CGImageSourceCreateWithURL(url as NSURL, nil),
          let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil)
    else {
        return nil
    }

    let context = CGContext(data: nil,
                            width: Int(size.width),
                            height: Int(size.height),
                            bitsPerComponent: image.bitsPerComponent,
                            bytesPerRow: image.bytesPerRow,
                            space: image.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!,
                            bitmapInfo: image.bitmapInfo.rawValue)
    context?.interpolationQuality = .high
    context?.draw(image, in: CGRect(origin: .zero, size: size))

    guard let scaledImage = context?.makeImage() else {
        return nil
    }

    return UIImage(cgImage: scaledImage)
}
```
This CGContext
initializer takes several arguments to construct a context,
including the desired dimensions and
the amount of memory for each channel within a given color space.
In this example,
these parameters are fetched from the CGImage
object.
Next,
setting the interpolationQuality property to .high
instructs the context to interpolate pixels at a 👌 level of fidelity.
The draw(_:in:)
method
draws the image at a given size and position,
allowing for the image to be cropped on a particular edge
or to fit a set of image features, such as faces.
Finally,
the makeImage() method captures the information from the context
and renders it to a CGImage
value
(which is then used to construct a UIImage
object).
Technique #3: Creating a Thumbnail with Image I/O
Image I/O is a powerful (albeit lesser-known) framework for working with images. Independent of Core Graphics, it can read and write between many different formats, access photo metadata, and perform common image processing operations. The framework offers the fastest image encoders and decoders on the platform, with advanced caching mechanisms — and even the ability to load images incrementally.
In the context of resizing images,
CGImageSourceCreateThumbnailAtIndex
offers a concise API with different options than found in equivalent Core Graphics calls:
```swift
import UIKit
import ImageIO

// Technique #3
func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageIfAbsent: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceThumbnailMaxPixelSize: max(size.width, size.height)
    ]

    guard let imageSource = CGImageSourceCreateWithURL(url as NSURL, nil),
          let image = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options as CFDictionary)
    else {
        return nil
    }

    return UIImage(cgImage: image)
}
```
Given a CGImageSource and a set of options,
the CGImageSourceCreateThumbnailAtIndex function
creates a thumbnail of an image.
Resizing is accomplished by the kCGImageSourceThumbnailMaxPixelSize option,
which specifies the maximum dimension
used to scale the image at its original aspect ratio.
By setting either the
kCGImageSourceCreateThumbnailFromImageIfAbsent or
kCGImageSourceCreateThumbnailFromImageAlways option,
Image I/O automatically caches the scaled result for subsequent calls.
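If a file’s embedded thumbnail is smaller than what you need, a variant of the options dictionary (a sketch along the same lines as above) forces Image I/O to regenerate the thumbnail from the full-size image:

```swift
import ImageIO

// Variant sketch: always create the thumbnail from the full-size image,
// ignoring any (possibly too-small) thumbnail embedded in the file.
let options: [CFString: Any] = [
    kCGImageSourceCreateThumbnailFromImageAlways: true,
    kCGImageSourceCreateThumbnailWithTransform: true,
    kCGImageSourceThumbnailMaxPixelSize: 600 // hypothetical max dimension, in pixels
]
```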
Technique #4: Lanczos Resampling with Core Image
Core Image provides built-in
Lanczos resampling functionality
by way of the eponymous CILanczosScaleTransform filter.
Although arguably a higher-level API than UIKit,
the pervasive use of key-value coding in Core Image makes it unwieldy.
That said, at least the pattern is consistent.
The process of creating a transform filter, configuring it, and rendering an output image is no different from any other Core Image workflow:
```swift
import UIKit
import CoreImage

let sharedContext = CIContext(options: [.useSoftwareRenderer: false])

// Technique #4
func resizedImage(at url: URL, scale: CGFloat, aspectRatio: CGFloat) -> UIImage? {
    guard let image = CIImage(contentsOf: url) else {
        return nil
    }

    let filter = CIFilter(name: "CILanczosScaleTransform")
    filter?.setValue(image, forKey: kCIInputImageKey)
    filter?.setValue(scale, forKey: kCIInputScaleKey)
    filter?.setValue(aspectRatio, forKey: kCIInputAspectRatioKey)

    guard let outputCIImage = filter?.outputImage,
          let outputCGImage = sharedContext.createCGImage(outputCIImage, from: outputCIImage.extent)
    else {
        return nil
    }

    return UIImage(cgImage: outputCGImage)
}
```
The CILanczosScaleTransform filter
accepts an inputImage, an inputScale, and an inputAspectRatio parameter,
each of which is pretty self-explanatory.
More interestingly,
a CIContext
is used here to create a UIImage
(by way of a CGImage
intermediary representation),
since UIImage(ciImage:)
doesn’t often work as expected.
Creating a CIContext
is an expensive operation,
so a cached context is used for repeated resizing.
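To fit the common resizedImage(at:for:) interface used by the other techniques, you can derive scale and aspectRatio from a target size. A sketch, based on how the filter applies its parameters (scale sets the height factor; aspectRatio additionally stretches the width):

```swift
import UIKit
import CoreImage

// Adapter sketch: derive Lanczos parameters from a target point size.
func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
    guard let image = CIImage(contentsOf: url) else {
        return nil
    }

    let scale = size.height / image.extent.height
    let aspectRatio = size.width / (image.extent.width * scale)
    return resizedImage(at: url, scale: scale, aspectRatio: aspectRatio)
}
```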
Technique #5: Image Scaling with vImage
Last up,
it’s the venerable Accelerate framework,
or more specifically,
the vImage image-processing sub-framework.
vImage comes with a bevy of different functions for scaling an image buffer. These lower-level APIs promise high performance with low power consumption, but at the cost of managing the buffers yourself (not to mention, significantly more code to write):
```swift
import UIKit
import ImageIO
import Accelerate.vImage

// Technique #5
func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
    // Decode the source image
    guard let imageSource = CGImageSourceCreateWithURL(url as NSURL, nil),
          let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as? [CFString: Any],
          let imageWidth = properties[kCGImagePropertyPixelWidth] as? vImagePixelCount,
          let imageHeight = properties[kCGImagePropertyPixelHeight] as? vImagePixelCount
    else {
        return nil
    }

    // Define the image format
    var format = vImage_CGImageFormat(bitsPerComponent: 8,
                                      bitsPerPixel: 32,
                                      colorSpace: nil,
                                      bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue),
                                      version: 0,
                                      decode: nil,
                                      renderingIntent: .defaultIntent)

    var error: vImage_Error

    // Create and initialize the source buffer
    var sourceBuffer = vImage_Buffer()
    defer { sourceBuffer.data.deallocate() }
    error = vImageBuffer_InitWithCGImage(&sourceBuffer,
                                         &format,
                                         nil,
                                         image,
                                         vImage_Flags(kvImageNoFlags))
    guard error == kvImageNoError else { return nil }

    // Create and initialize the destination buffer
    var destinationBuffer = vImage_Buffer()
    error = vImageBuffer_Init(&destinationBuffer,
                              vImagePixelCount(size.height),
                              vImagePixelCount(size.width),
                              format.bitsPerPixel,
                              vImage_Flags(kvImageNoFlags))
    guard error == kvImageNoError else { return nil }

    // Scale the image
    error = vImageScale_ARGB8888(&sourceBuffer,
                                 &destinationBuffer,
                                 nil,
                                 vImage_Flags(kvImageHighQualityResampling))
    guard error == kvImageNoError else { return nil }

    // Create a CGImage from the destination buffer
    guard let resizedImage =
        vImageCreateCGImageFromBuffer(&destinationBuffer,
                                      &format,
                                      nil,
                                      nil,
                                      vImage_Flags(kvImageNoAllocate),
                                      &error)?.takeRetainedValue(),
        error == kvImageNoError
    else {
        return nil
    }

    return UIImage(cgImage: resizedImage)
}
```
The Accelerate APIs used here clearly operate at a much lower level than any of the other resizing methods discussed so far. But get past the unfriendly-looking type and function names, and you’ll find that this approach is rather straightforward.
- First, create a source buffer from your input image.
- Then, create a destination buffer to hold the scaled image.
- Next, scale the image data from the source buffer into the destination buffer.
- Finally, create an image from the resulting image data in the destination buffer.
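One design note on the hard-coded pixel format: on iOS 13 and later (an assumption beyond what this article targets), the Accelerate Swift overlay can infer the format from the decoded image instead:

```swift
import Accelerate.vImage

// Alternative sketch (iOS 13+): derive the vImage format from the CGImage,
// rather than assuming 8 bits per component / 32 bits per pixel ARGB.
func imageFormat(for image: CGImage) -> vImage_CGImageFormat? {
    return vImage_CGImageFormat(cgImage: image)
}
```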
Performance Benchmarks
So how do these various approaches stack up against one another?
Here are the results of some performance benchmarks performed on an iPhone 7 running iOS 12.2, in this project.
The following numbers show the average runtime across multiple iterations for loading, scaling, and displaying that jumbo-sized picture of the Earth from before:
| | Time (seconds) |
|---|---|
| Technique #1: UIKit | 0.1420 |
| Technique #2: Core Graphics ¹ | 0.1722 |
| Technique #3: Image I/O | 0.1616 |
| Technique #4: Core Image ² | 2.4983 |
| Technique #5: vImage | 2.3126 |
¹ Results were consistent across different values of CGInterpolationQuality, with negligible differences in performance benchmarks.

² Setting kCIContextUseSoftwareRenderer to true on the options passed on CIContext creation yielded results an order of magnitude slower than base results.
Conclusions
- UIKit, Core Graphics, and Image I/O all perform well for scaling operations on most images. If you had to choose one (on iOS, at least), UIGraphicsImageRenderer is typically your best bet.
- Core Image is outperformed for image scaling operations. In fact, according to the Performance Best Practices section of Apple’s Core Image Programming Guide, you should use Core Graphics or Image I/O functions to crop and downsample images instead of Core Image.
- Unless you’re already working with vImage, the extra work necessary to use the low-level Accelerate APIs probably isn’t justified in most circumstances.