
Answer: mostly, yes. Explanation below.

Part 1: Yes or No

Remember GDI? Say you're using GDI and Win32, and you want to draw some graphics to a window. What to do. You read the documentation and see what looks like the most obvious thing: "SetPixel". Sounds good. Takes an x and y and a color. What more could you want? Super easy to use.

But then, you see a bunch of cautionary notes. "It's slow." "It's inefficient." "Don't do it."

Don't do it?

Well. All these cautionary notes you see are from days of yore:

  • Computers are faster now, both CPU and GPU. Take an early CS algorithms class and experiment with solutions; you’ll see that sometimes the biggest optimization you can make is simply running on a faster computer.
  • An earlier Windows graphics driver model. Say, XPDM rather than WDDM. Under WDDM, all hardware-accelerated graphics communicate through a “Direct3D-centric driver model”, and yes, that includes GDI. A change in driver model can change performance characteristics.
  • A different Windows presentation model. That’s something this API has to negotiate with, so it can affect performance too. Nowadays you’re probably using DWM, which was introduced with Windows Vista.

The date stamps on that advice invite some skepticism. Is it still true?

As a personal aside, I've definitely seen performance advice on dev forums that is super outdated, and people get misled into following it anyway. For example, for C++ code: "manually turn your giant switch case into a jump table". I see jump tables in my generated code after compilation; the advice was outdated because of how much compilers have improved. I've noticed a tendency to trust performance advice "just in case", without testing to see if it matters.

Let us run some tests to see if SetPixel is still slow.

I wrote a benchmark program to compare

  • SetPixel, plotting each pixel of a window sequentially one by one, against
  • SetDIBits, where all pixels of a window are set from memory at once.

In each case the target is a top-level window, comparing like sizes. Each mode effectively clears the window. The window is cleared to a different color each time, so you have some confidence it’s actually working.

Timing uses good old QPC (QueryPerformanceCounter). For the sizes of timespans involved, it was not necessary to use anything more precise. The timed interval includes all the GDI commands needed to see the clear on the target, so for SetDIBits that includes one extra BitBlt from a memory bitmap to the target to keep things fair.
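To give a sense of the shape of the two timed paths, here is a minimal sketch (not the actual benchmark, whose source is linked below). It assumes hdc is the target window's device context and that width and height are one of the test sizes; error handling is omitted. Each function returns QPC ticks divided by the QPC frequency, i.e. elapsed seconds.

#include <windows.h>
#include <vector>

// Time SetPixel: plot every pixel of the client area, one by one.
double TimeSetPixel(HDC hdc, int width, int height, COLORREF color)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            SetPixel(hdc, x, y, color);

    QueryPerformanceCounter(&t1);
    return double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
}

// Time SetDIBits: set all pixels from memory at once, plus the one extra
// BitBlt needed to get them onto the window, as described above.
double TimeSetDIBits(HDC hdc, int width, int height, COLORREF color)
{
    // 32bpp top-down source image filled with the clear color (0x00RRGGBB).
    std::vector<DWORD> bits(size_t(width) * height,
        (DWORD(GetRValue(color)) << 16) | (DWORD(GetGValue(color)) << 8) | GetBValue(color));

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height; // negative height = top-down rows
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC memDC = CreateCompatibleDC(hdc);
    HBITMAP memBmp = CreateCompatibleBitmap(hdc, width, height);

    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    // Copy all pixels into the bitmap in one call, then one BitBlt to the window.
    SetDIBits(hdc, memBmp, 0, height, bits.data(), &bmi, DIB_RGB_COLORS);
    HGDIOBJ old = SelectObject(memDC, memBmp);
    BitBlt(hdc, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

    QueryPerformanceCounter(&t1);

    SelectObject(memDC, old);
    DeleteObject(memBmp);
    DeleteDC(memDC);
    return double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
}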

The source code of this benchmark is here.

Here are the results (times in seconds):

Width  Height  Pixel Count   SetPixel    SetDIBits
 1000    1000      1000000   4.96194     0.0048658
  950     950       902500   4.7488      0.0042761
  900     900       810000   4.22436     0.0038637
  850     850       722500   3.71547     0.0034435
  800     800       640000   3.34327     0.0030824
  750     750       562500   2.92991     0.0026711
  700     700       490000   2.56865     0.0023415
  650     650       422500   2.21742     0.0022196
  600     600       360000   1.83416     0.0017374
  550     550       302500   1.57133     0.0015125
  500     500       250000   1.29894     0.001311
  450     450       202500   1.05838     0.0010062
  400     400       160000   0.826351    0.0009907
  350     350       122500   0.641522    0.0006527
  300     300        90000   0.467687    0.0004657
  250     250        62500   0.327808    0.0003364
  200     200        40000   0.21523     0.0002422
  150     150        22500   0.118702    0.0001515
  100     100        10000   0.0542065   0.0000937
   75      75         5625   0.0315026   0.000122
   50      50         2500   0.0143235   0.0000617

Viewed as a graph:

Conclusion: yeah, SetDIBits is still way faster than SetPixel, in every case tested.

For small numbers of pixels, the difference doesn't matter as much. For setting lots of pixels, the difference is huge.

I tested this on an Intel Core i7-10700K, with both an NVIDIA GeForce 1070 and WARP, with similar results in both cases.

So the old advice is still true. Don't use SetPixel, especially if you’re setting a lot of pixels. Use something else like SetDIBits instead.

Part 2: Why

My benchmark told me that it’s still slow, but the next question was why. I took a closer look and did some more thinking about where the cost comes from.

It's not one reason. There are several.

1. There's no DDI for SetPixel.

You can look through the public documentation for display device interfaces (DDIs) and see what’s there. Or take a stab at it and use the Windows Driver Kit and its documentation to write a display driver yourself. Either way, you’ll see various blit-related functions in winddi.h. For example, DrvBitBlt:

BOOL DrvBitBlt(
  [in, out]      SURFOBJ  *psoTrg,
  [in, optional] SURFOBJ  *psoSrc,
  [in, optional] SURFOBJ  *psoMask,
  [in]           CLIPOBJ  *pco,
  [in, optional] XLATEOBJ *pxlo,
  [in]           RECTL    *prclTrg,
  [in, optional] POINTL   *pptlSrc,
  [in, optional] POINTL   *pptlMask,
  [in, optional] BRUSHOBJ *pbo,
  [in, optional] POINTL   *pptlBrush,
  [in]           ROP4     rop4
);

That said, you may also notice what’s not there. In particular, there’s no DDI for SetPixel: nothing simple like that, which takes an x, y, and color. Relate this to the diagrams in the “Graphics APIs in Windows” article, which show that GDI talks to the driver under both XPDM and WDDM. So every time you call SetPixel, what the driver sees is actually far richer than that: it gets told about a brush, a mask, a clip. It’s easy to imagine a cost to formulating all of those, since you don’t specify them at the API level and the API is structured so they can be arbitrary.
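As a rough user-mode analogy (an illustration only, not what GDI actually does internally), spelling out “set one pixel” as a general fill already drags in a brush, a target rectangle, the DC’s clip region, and a raster operation, which is much closer in shape to DrvBitBlt than to the (x, y, color) call you wrote. The hdc, x, and y here are assumed to already exist.

// Hypothetical illustration: "set one pixel", expressed as a general fill.
HBRUSH brush = CreateSolidBrush(RGB(255, 0, 0));   // the color becomes a brush
RECT onePixel = { x, y, x + 1, y + 1 };            // the point becomes a rectangle
FillRect(hdc, &onePixel, brush);                   // clip region and ROP come along implicitly
DeleteObject(brush);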

2. Cost of talking to the presentation model

There’s a maybe-interesting experiment you can do. Write a Win32 application with your usual WM_PAINT handler. Run the application. Hide the window behind other windows, then reveal it once again. Does your paint handler get called? To reveal the newly-revealed area? No, normally it doesn’t.

So what that must logically mean is that Windows kept some kind of buffer, or copy of your window contents somewhere. Seems like a good idea if you think about it. Would you really want moving windows around to be executing everyone’s paint handlers all the time, including yours? Probably not. It’s the good old perf-memory tradeoff in favor of perf, and it seems worth it.

Given that you’re drawing to an intermediate buffer, then there’s still an extra step needed in copying this intermediate buffer to the final target. Which parts should be copied, and when? It seems wasteful to be copying everything all the time. To know what needs to get re-copied, logically there has to be some notion of an “update” region, or a “dirty” region.

If you’re an application, you might even want to aggressively optimize and only paint the update region. Can you do that? At least at one point, yes you could. The update region gets communicated to the application through WM_PAINT; see the article “Redrawing in the Update Region”, which has a code example of clipping accordingly. Now, when I tried this in my application, I noticed that PAINTSTRUCT::rcPaint was always the full window, even in response to a small region invalidated with InvalidateRect, but the idea is at least formalized in the API.
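For reference, here’s a minimal sketch of what painting only the update region looks like; PAINTSTRUCT::rcPaint is the bounding rectangle of the region the system wants repainted.

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);

        // ps.rcPaint bounds the update region; an app that wants to optimize
        // repaints only this rectangle instead of the whole client area.
        FillRect(hdc, &ps.rcPaint, (HBRUSH)(COLOR_WINDOW + 1));

        EndPaint(hwnd, &ps);
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}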

Still, there’s a cost to dealing with update regions. If you change one pixel, that one-pixel area becomes part of the update region. Change the pixel next to it, and the region needs to be updated again. And so on. Could we have gotten away with a bigger, coarser update region? Maybe. You just never know at the time.

If you had some way of pre-declaring which regions of the window you’re going to change (e.g., through a different API like BitBlt), you wouldn’t have this problem.
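To make the contrast concrete, here’s a hedged sketch; hdc, memDC, width, and height are assumed to be set up as in the benchmark sketch earlier.

// Pixel by pixel: the system only learns about the affected area incrementally,
// one 1x1 update at a time.
for (int y = 0; y < height; ++y)
    for (int x = 0; x < width; ++x)
        SetPixel(hdc, x, y, RGB(255, 0, 0));

// One call: the whole affected rectangle is declared up front.
BitBlt(hdc, 0, 0, width, height, memDC, 0, 0, SRCCOPY);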

3. Advancements in the presentation model help, but not enough

In Windows, there is DWM, the Desktop Window Manager. It shipped with Windows Vista and brought all kinds of performance improvements and opportunities for visual enhancements.

As the documentation says, DWM makes it possible to remove a level of indirection (copying) when drawing the contents of windows.

But it doesn’t negate the fact that there still is tracking of update regions, and all the costs associated with that.

4. Advancements in driver model help, but not enough

DWM and Direct3D, as components that talk to the driver through the 3D stack, have a notion of “frames” and a particular time at which work is “flushed” to the GPU device.

By contrast, GDI doesn’t have the concept of “frames” or of flushing anything. The closest thing would be the release of the GDI device context, but that’s not strictly treated as a signal to flush. You can see it in how your Win32 GDI applications are structured: you draw in response to WM_PAINT. Yes, there is EndPaint, but EndPaint doesn’t flush your drawing. Try it if you want: comment out EndPaint. I tried it just to check, and everything still drew without it.
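Sketched out, the experiment looks like this (an illustration of the observation above, not a recommendation; EndPaint releases the device context and real code should keep it):

// Inside the same kind of window procedure as before:
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    SetPixel(hdc, 10, 10, RGB(255, 0, 0));   // some drawing

    // EndPaint(hwnd, &ps);   // commented out: as observed above, the drawing
                              // still appears, so EndPaint is not what flushes it
    return 0;
}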

Since there isn’t a strict notion of “flushing to the device”, SetPixel pixels have to be dispatched basically immediately rather than batched up.

5. 3D acceleration helps, but not enough

Nowadays, GDI blits are indeed 3D accelerated.

I noticed this firsthand, too. Very lazy way to check: in the “Performance” tab of Task Manager while testing my application, I saw little blips in the 3D queue, which coincided with activity in the SetPixel micro-benchmark.

Again, a very lazy check. Good to know we are still accelerating these 2D blits, even as the graphics stack has advanced to the point of making 3D graphics a first-class citizen. Hardware acceleration is great for a lot of things, like copying large amounts of memory around at once, applying compression or decompression, or manipulating data in other ways that lend themselves to data parallelism.

Unfortunately, literally none of that helps this scenario. Parallelism? How? At a given time, the driver doesn’t know if you’re done plotting or what you will plot next or where. And it can’t buffer up the operations and execute them together, because it, like Windows, doesn’t know when you’re done. Maybe, it could use some heuristic.

But that brings us to the punchline: even if the driver had psychic powers, could see into the future and know exactly what the application was going to do, and did an absolutely perfect job of coalescing neighboring blits together, it still wouldn’t negate any of the above costs, especially 1 and 2.

Conclusion

Even in the current year, don’t use SetPixel for more than a handful of pixels. There’s reason to believe the sources of the bottleneck have changed over the last 30 years, yet the result is the same: it’s slow, and the old advice is still true.

Epilogue: some fantasy world

This post was about how things are. But, what could be? What would it take for SetPixel not to be slow? The most tempting way to think about this is to flatten or punch holes through the software stack. That works, even if it feels like a cop-out.
