7 signs of a healthy image processing pipeline

#1: Your processing pipeline is less complex – and processing happens on the fly.
On-the-fly image processing simplifies your pipeline by eliminating the need to pre-generate and store multiple versions of each image. Instead of managing countless asset variants, you generate exactly what’s needed, when it’s needed. And if design changes or new website blocks call for new variants, consider it quickly done.
This approach reduces operational complexity, minimizes storage overhead, and keeps things lean. Serving pre-rendered static images will always be faster in absolute terms, but with a proper pipeline, end users are unlikely to perceive any significant difference. (By the way, our own imgproxy has benchmarks to back this up, and our users agree, too.)
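To make “generate exactly what’s needed, when it’s needed” concrete, here is a minimal sketch in Go of how an application might assemble a processing URL on demand instead of pre-rendering the variant. It follows imgproxy’s /{signature}/{processing options}/plain/{source URL}@{extension} layout; the base URL and source image are made-up placeholders, and “insecure” stands in for a real signature (more on signing under sign #3).

```go
package main

import (
	"fmt"
	"net/url"
)

// buildVariantURL assembles an on-the-fly processing URL following
// imgproxy's /{signature}/{options}/plain/{source}@{extension} layout.
// "insecure" stands in for a real signature; the base URL and source
// image are placeholders.
func buildVariantURL(base, source string, width, height int, ext string) string {
	options := fmt.Sprintf("rs:fill:%d:%d", width, height)
	// The source URL is percent-encoded because it is embedded in a path.
	return fmt.Sprintf("%s/insecure/%s/plain/%s@%s",
		base, options, url.QueryEscape(source), ext)
}

func main() {
	// A 300x400 WebP thumbnail, generated only when this URL is first hit.
	fmt.Println(buildVariantURL(
		"https://img.example.com",
		"https://example.com/photos/cat.jpg",
		300, 400, "webp"))
}
```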
#2: Your storage needs are simpler – and smaller.
Apps rarely use just the original image. They require thumbnails, responsive variants, and tailored formats — quickly multiplying total image counts. And managing all of these manually means more storage, more pre-processing, and more room for error.
On-the-fly processing cuts through that complexity. You don’t need to pre-generate and store every possible image version — only the ones users actually request.
And when this pipeline is paired with managed CDNs like CloudFront or Cloudflare, you can cache processed images close to users without managing infrastructure or worrying about where those images live.
CDNs handle caching and delivery, keeping latency low and load off your servers. Cached images can even be discarded and regenerated when needed — no manual cleanup required.
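As a rough illustration of “only the ones users actually request”: a responsive srcset can be nothing more than a handful of on-the-fly URLs, each of which gets rendered and cached by the CDN the first time a browser asks for it. Here is a sketch in Go, reusing the same imgproxy-style URL layout as above (the hostnames, widths, and the unsigned “insecure” placeholder are illustrative):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildSrcset emits a srcset string whose entries are on-the-fly processing
// URLs for a single source image. Nothing is pre-generated: the CDN caches
// each width the first time it is requested.
func buildSrcset(base, source string, widths []int) string {
	entries := make([]string, 0, len(widths))
	for _, w := range widths {
		// rs:fit:{width}:0 constrains the width only; 0 leaves height free.
		u := fmt.Sprintf("%s/insecure/rs:fit:%d:0/plain/%s@avif",
			base, w, url.QueryEscape(source))
		entries = append(entries, fmt.Sprintf("%s %dw", u, w))
	}
	return strings.Join(entries, ", ")
}

func main() {
	fmt.Println(buildSrcset(
		"https://img.example.com",
		"https://example.com/products/chair.jpg",
		[]int{320, 640, 1280}))
}
```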
A good example comes from Trendyol, one of Turkey’s largest e-commerce platforms. With dozens of image variants per product, they found it impractical to store every variant. Instead, they built a full CDN-backed pipeline powered by imgproxy to serve only what’s needed – when it’s needed.
#3: Solid safety mechanisms are in place
imgproxy itself came about because we weren’t satisfied with an existing solution’s level of security – so yes, the imgproxy team has a thing for security. Whether it’s malicious attacks, image/zip bombs, or vulnerable links to images, you need to have all possible mechanisms in place to protect your infrastructure.
For instance, imgproxy can sign URLs with a secret key/salt pair: you configure it to require a signature, and your application generates a valid signature for every URL it builds.
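For the curious, here is a minimal Go sketch of that signing scheme as imgproxy documents it: an HMAC-SHA256 keyed with the hex-decoded key, computed over the salt followed by the path, then base64url-encoded without padding. The key, salt, and path below are placeholders, not real secrets.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// signPath produces the signature segment for a processing path, using the
// hex-encoded key/salt pair you'd also give to imgproxy (IMGPROXY_KEY and
// IMGPROXY_SALT): HMAC-SHA256 over salt + path, base64url-encoded.
func signPath(keyHex, saltHex, path string) (string, error) {
	key, err := hex.DecodeString(keyHex)
	if err != nil {
		return "", err
	}
	salt, err := hex.DecodeString(saltHex)
	if err != nil {
		return "", err
	}
	mac := hmac.New(sha256.New, key)
	mac.Write(salt)
	mac.Write([]byte(path))
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil)), nil
}

func main() {
	// Placeholder hex values: use long random strings in production.
	keyHex, saltHex := "6b6579", "73616c74"
	path := "/rs:fill:300:400/plain/https%3A%2F%2Fexample.com%2Fcat.jpg@webp"

	sig, err := signPath(keyHex, saltHex, path)
	if err != nil {
		panic(err)
	}
	fmt.Printf("https://img.example.com/%s%s\n", sig, path)
}
```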
Glass, a community platform for photographers, was concerned about having limited control over who could access the images on their platform. By moving image processing from a SaaS to their own infrastructure, they regained that control (and switched to imgproxy in the process!)
#4: Your deployment is flexible.
It’s important not only that you can deploy your image processing in a convenient and affordable environment, but also that you can easily migrate to another one as needed (and then to yet another, if needed again).
Here, self-hosted solutions provide much greater freedom of choice and full control over scaling the service. For instance, some of our customers recently migrated from a default Heroku setup with auto-scaling to AWS Lambda, which you can read about here: (Almost) free image processing with imgproxy and AWS Lambda.
Is your processing pipeline on-the-fly and ready to fly (to another environment)?
#5: It doesn’t cost a fortune.
Affordability here means you can keep your bills under control while you’re scaling. If your image optimization solution’s pricing depends on post-paid plans and metrics whose growth you can’t predict (e.g., the number of images or transformations), then you’ll be in trouble sooner or later.
Recently, we described the unpredictable pricing models that SaaS solutions offer (and the pitfalls you can run into if you follow these approaches), as well as the alternative pricing approach that imgproxy implements. Here is the blog post.
#6: You don’t spend much on maintenance.
If any of the following are true:
- Your solution requires a more complex deployment configuration
- You have to slow down your update frequency because updating is a clunky and complex process
- Your engineers spend a lot of effort just keeping things running
…then it’s not looking so good for your image processing solution.
Meanwhile, an effective, well-maintained solution smooths things out at every step of building an image processing pipeline, from installation onward. It should also receive regular updates and be easy to deploy and manage in production.
Let’s look at Wowa’s use case as an example, taken from this post: they avoided the complexity of implementing future changes themselves (and removed legacy code from their backend), so their developers could manage responsive images and quickly ship multiple responsive image formats.
#7: In a well-established pipeline, you have support for all the formats you need.
It’s kind of funny that some people still call WebP (released in 2010!) and AVIF (released in 2019) “new” or “next-gen” formats. Either way, your image processing should be able to include them in the process whenever required. To illustrate, JPEG XL is gaining some hype, and imgproxy supports it right out of the box, which has played in our favor!
In fact, it’s critical that format conversion is never a blocker: you need full compatibility both for users on older browsers and for those on brand-new Apple devices.
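If you handle format negotiation on the application side, the logic can be as small as the sketch below: pick a target extension from the browser’s Accept header and append it to the processing URL. (imgproxy also has options to detect the preferred format server-side.) The header strings and the JPEG fallback here are illustrative assumptions, not a prescription.

```go
package main

import (
	"fmt"
	"strings"
)

// pickExtension chooses a target format from the browser's Accept header,
// preferring AVIF, then WebP, and falling back to JPEG for older clients.
// The chosen extension is then appended to the processing URL (e.g. "@avif").
func pickExtension(accept string) string {
	switch {
	case strings.Contains(accept, "image/avif"):
		return "avif"
	case strings.Contains(accept, "image/webp"):
		return "webp"
	default:
		return "jpg"
	}
}

func main() {
	fmt.Println(pickExtension("image/avif,image/webp,image/*,*/*;q=0.8")) // avif
	fmt.Println(pickExtension("image/webp,image/*,*/*;q=0.8"))            // webp
	fmt.Println(pickExtension("image/*,*/*;q=0.8"))                       // jpg
}
```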
So, did your pipeline tick all the boxes? Let us know!