Why do terminal emulators still have screen flicker

buffer · performance · terminal

Why are there still visual artifacts when terminal emulators draw text-based applications? This happens even on recent computers that can render 3D games and GUI windows, including anti-aliased vector fonts, with no artifacts.

I regularly see the following artifacts which reveal intermediate steps of the screen update process:

  • Terminal cursor motion (the cursor blinks or jumps around the screen during an update)
  • Tearing (part of the screen shows old content while another part already shows new content)
  • Visible scrolling (the scrolling motion is noticeable, instead of the new scroll position being shown right away)

The artifacts are only seen for sub-second intervals, and not during most screen updates, but having grown up on flicker-free GUIs I would still like to know how to avoid them. All of the above artifacts (except scrolling) can be seen in e.g. the following ASCIInema video once it starts drawing the more complex screens: MapSCII – the whole world in your console!

I'm also specifically not talking about slow updates. It would be nice if updates were always instantaneous, but that's not always possible due to network and processing delays. What I mean here is that partially drawn screens are often visible for a brief moment. In most modern GUIs only fully finished screens are shown to the user, and artifacts of partial drawing are very rare.

It's my impression that the terminal emulation pipeline goes something like this:

  1. User presses a key on the keyboard
  2. Kernel passes keypress from keyboard driver to window system
  3. Window system passes keypress to terminal emulator
  4. Terminal emulator passes keypress to pseudo-terminal (pty) kernel device
  5. Pty interprets the keypress and passes the result to the text-based application
  6. Application performs command in response to keypress
  7. Application renders new screen (character cell grid) to internal buffer
  8. Application calls curses or other library to convert character cell grid to ANSI escape codes that will render an equivalent screen on the terminal
  9. Library writes those ANSI escape codes to the pty device
  10. Pty processes the written data somehow
  11. Terminal emulator reads processed data from pty in some chunks
  12. Terminal emulator calls window system to render the result of the ANSI escape codes in terminal window

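Steps 4–11 of this pipeline can be exercised from a single process using Python's standard pty and os modules (a sketch, not how any real emulator is structured): open a pty pair, write escape codes to the slave side as the application would, and read them back from the master side as the terminal emulator does.

```python
import os
import pty

# Open a pty pair: the application writes to the slave end,
# the terminal emulator reads from the master end.
master, slave = pty.openpty()

# An application-style update: clear screen, home the cursor, draw text.
# (No newline in the payload, so the kernel's default output processing
# such as ONLCR should leave the bytes unchanged.)
payload = b"\x1b[2J\x1b[HHello, terminal!"
os.write(slave, payload)

# The emulator side reads the same bytes back through the kernel pty layer.
data = os.read(master, 1024)
print("round trip ok:", data == payload)
```
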
Which of the above steps can slow down the process enough that the terminal emulator shows us intermediate rendering steps instead of showing only the final result?

  • It seems that the speed of hardware terminals (serial port connections) is dictated by their baud rate which can be changed with tcsetattr() but I read from multiple sources that the baud rate setting has no effect on the pseudo-terminal (pty) devices used by terminal emulators. Does this mean that Unix kernels do not deliberately rate-limit pty communications?

  • Do applications or rendering libraries (curses, etc.) send text and ANSI codes in multiple writes instead of trying to make do with only one write()?

  • Unix kernels have size limits on their internal I/O buffers, which affects things like the maximum amount of data that can be sent through a pipe without blocking. Does this affect rendering terminal screens with lots of detail (a screenful of text, lots of colors, etc.)? I imagine the combined text and ANSI escape codes could amount to so much data that it doesn't fit in the pty driver's buffer, which would split a screen update into several write operations by the application and several reads by the terminal emulator. If the terminal emulator were eager to display the results of each read before processing the next, this would cause the display to flicker until the final read in a batch has been processed.

  • Do terminal emulators or pty drivers have deliberate timeouts for batch processing so that their behavior more closely mimics hardware terminals, feels more natural, or addresses some other concern that was deemed more important than display speed?
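The first bullet (baud rates on ptys) is easy to test empirically. A sketch assuming a Unix-like system and Python's standard pty/termios modules: request 9600 baud on a pty and push 100 kB through it. At a real 9600 baud (~960 bytes/s) this would take roughly 100 seconds; on a pty it completes in a tiny fraction of that, which suggests the kernel ignores the setting rather than rate-limiting.

```python
import os
import pty
import termios
import time

master, slave = pty.openpty()

# Request 9600 baud on the slave, exactly as one would for a serial line.
attrs = termios.tcgetattr(slave)
attrs[4] = termios.B9600  # input speed
attrs[5] = termios.B9600  # output speed
termios.tcsetattr(slave, termios.TCSANOW, attrs)

chunk = b"x" * 1024
start = time.monotonic()
for _ in range(100):          # push 100 kB through the pty
    os.write(slave, chunk)
    got = 0
    while got < len(chunk):   # drain so the kernel buffer never fills
        got += len(os.read(master, 4096))
elapsed = time.monotonic() - start

# At a real 9600 baud, 100 kB would take ~107 seconds.
print(f"{elapsed:.4f}s for 100 kB through the pty")
```
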
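The kernel-buffer question in the third bullet can also be probed directly: make the application (slave) side non-blocking and keep writing until the kernel refuses to accept more. This is a sketch assuming Python on Linux; the exact figure varies by kernel and version.

```python
import fcntl
import os
import pty

master, slave = pty.openpty()

# Make writes on the application (slave) side non-blocking, so a full
# kernel buffer surfaces as BlockingIOError instead of blocking.
flags = fcntl.fcntl(slave, fcntl.F_GETFL)
fcntl.fcntl(slave, fcntl.F_SETFL, flags | os.O_NONBLOCK)

total = 0
try:
    while True:
        total += os.write(slave, b"x" * 1024)
except BlockingIOError:
    pass  # the kernel buffer is full

# Typically this lands in the kilobytes range: far less than a complex
# full-screen update, so such an update is split across several writes.
print(f"pty buffer absorbed {total} bytes before blocking")
```
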

Recently there has been some effort to make new terminal emulators that render faster (e.g. by pre-rendering fonts into OpenGL textures in video memory). But these efforts only seem to hasten the rendering of a character cell grid onto a screen bitmap once the grid has been calculated.

There seems to be something else going on that makes this stuff fundamentally slow even on a very fast computer. Think about it: if the terminal emulator processes all the ANSI codes to obtain a character cell grid before rendering anything into a screen bitmap, then it doesn't matter how slow the character-grid-to-bitmap rendering routines are – there should be no flicker (at least not the kind of flicker that clearly seems to correspond to cursor movement on a hardware terminal, which is what we often see). Even if the terminal emulator took a whole second to draw any given character cell grid on the screen, we would simply get a second of inactivity, not a second of flicker.

A similar issue is that the Unix clear and reset commands are incredibly slow for what they do (from a GUI user's perspective, they don't do anything more complex than redraw a bitmap). Perhaps for related reasons.

Best Answer

I'd love to hear more details on how exactly to trigger such prominent flicker, as I don't notice any while using my system.

On my system, VTE (the engine behind GNOME Terminal) can process about 10 MB/s of incoming data. The performance of other emulators isn't very far from this either, maybe within a factor of 3 or 5 in either direction. This is a lot, and should be more than enough for flicker-free updates.

Keep in mind, though, that a fullscreen terminal might contain a few tens of thousands of character cells. UTF-8 characters consist of multiple bytes. Switching to different attributes (colors, boldness, etc.) requires escape sequences that range from 3–4 bytes to easily 10–20 (especially with the 256-color and truecolor extensions). Truly complex layouts can thus require 100 kB or more of traffic. This certainly cannot be passed over the tty line in a single step. I'm not even sure that apps (or screen-drawing libraries) bother to collect the entire output and emit it in a single write; perhaps they just use printf() and let stdio flush after every 8 kB or so. That could be another reason for them being somewhat slow.
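The back-of-the-envelope arithmetic is easy to make concrete (the screen size and per-cell costs below are illustrative assumptions, not measurements):

```python
rows, cols = 60, 240                  # a fullscreen terminal on a large monitor
cells = rows * cols                   # 14,400 character cells

# Worst case: every cell gets its own truecolor foreground color plus a
# multi-byte UTF-8 character.
sgr = len("\x1b[38;2;255;255;255m")   # 19 bytes to set one foreground color
per_cell = sgr + 3                    # plus up to 3 bytes of UTF-8 text
total = cells * per_cell

print(cells, "cells,", total, "bytes per frame")
```

At the 10 MB/s figure quoted above, a frame of roughly this size takes on the order of 30 ms just to parse, so it is plausible for several reads (and potentially repaints) to fall inside one screen update.
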

I'm not really familiar with the kernel's scheduling behavior here, e.g. whether it needs to toggle back and forth between the two processes (and between user and kernel mode), or whether they can run simultaneously. I'd hope they can run in parallel on a multi-core CPU, which most CPUs are nowadays.

There is no intentional throttling in the story. There might be guesswork, though, when emulators decide whether to continue reading data or to update their screen. For example, if the terminal emulator processes the input faster than the app emits it, it sees the stream stall after processing the first chunk, and might reasonably decide to update its UI at that point.
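That guesswork can be sketched in a few lines (assuming Python; the 10 ms patience threshold is an arbitrary illustration, not what any real emulator uses):

```python
import os
import pty
import select

master, slave = pty.openpty()
os.write(slave, b"\x1b[2J\x1b[Hpartial frame")   # an app emits one burst

pending = b""
while True:
    pending += os.read(master, 4096)
    # Heuristic: if nothing more arrives "immediately", assume the app
    # has finished this burst and repaint now.
    ready, _, _ = select.select([master], [], [], 0.01)
    if not ready:
        break

# ...parse `pending` and repaint here. A burst that straddles kernel
# buffers or scheduling gaps may still be split, so the repaint can show
# a partially drawn screen.
print(f"coalesced {len(pending)} bytes before repainting")
```
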

The cursor is probably the most prominent source of flicker, since it moves around the screen as the contents are updated; it cannot stay in one place. Even if the cursor eventually ends up back at the same location, an emulator that repaints even once while the data is still arriving will make an intermediate cursor position visible as flicker.

You might be interested in this proposal for atomic updates (discussions here), which would mostly solve this issue if supported by both the terminal emulator and the application running inside it.
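With that extension (commonly implemented as DEC private mode 2026, "synchronized output"), the application brackets each frame so that a supporting emulator buffers the whole thing and repaints it atomically. A sketch, assuming the mode-2026 variant of the proposal:

```python
import sys

BSU = "\x1b[?2026h"   # begin synchronized update (DEC private mode 2026)
ESU = "\x1b[?2026l"   # end synchronized update

def frame(cells: str) -> str:
    # Bracket a whole screen update; emulators that don't support the
    # mode ignore the unknown sequences and render as before.
    return BSU + cells + ESU

sys.stdout.write(frame("\x1b[2J\x1b[HHello"))
sys.stdout.flush()
```
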

You might also be interested in why scrolling with the keyboard is necessarily jerky due to interference between the keyboard repeat rate and the monitor refresh rate; that isn't flickering per se, but it makes for an unpleasant experience.
