From my experience, when you write something for TV applications (a poor display), there are some things, like a one-pixel white line on a black background, that you can't do, otherwise you get flickering.
Interesting point, but on a television display this problem is actually due to interlace, which doesn't happen on a (progressive-scan) computer display.
Interlaced scanning (as opposed to progressive scanning) is a way to reduce large-area flicker by scanning all the even-numbered lines on the screen, then all the odd-numbered lines; each pass is called a field. The area of the screen is thus covered twice as rapidly as with a progressive scan, without using any more bandwidth, and hence the large-area flicker is greatly reduced.
But, as you say, this fails on thin, sharp, horizontal features, where there is a large change in brightness between adjacent lines. Such a feature sits in only one of the two fields, so it is refreshed only on every alternate pass and flickers at half the rate, which the eye readily notices. Traditionally this was not a problem for television, because natural scenes tend not to have such sharp edges, and the nature of the cameras limited the resolution. But when images started to be generated electronically, such as captions, anti-aliasing (softening the edges vertically) had to be done to prevent "interlace twitter".
History lesson over!
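To make that last point concrete, here's a minimal sketch of the kind of vertical softening a caption generator might apply so a one-pixel feature ends up represented in both fields. The helper name and the [0.25, 0.5, 0.25] kernel are just illustrative assumptions, not any particular broadcast standard:

```python
# Rough sketch (not broadcast-grade): reduce interlace twitter by vertically
# low-pass filtering a grayscale frame so no feature lives on a single scan
# line (and hence in only one field). The [0.25, 0.5, 0.25] kernel is a
# common illustrative choice, not one mandated by any standard.

def soften_vertically(image):
    """Return a copy of `image` (a list of rows of 0-255 luma values)
    with a [0.25, 0.5, 0.25] filter applied down each column."""
    height = len(image)
    width = len(image[0])
    out = [row[:] for row in image]
    for y in range(height):
        above = image[max(y - 1, 0)]          # clamp at the top edge
        below = image[min(y + 1, height - 1)]  # clamp at the bottom edge
        for x in range(width):
            out[y][x] = round(0.25 * above[x]
                              + 0.50 * image[y][x]
                              + 0.25 * below[x])
    return out


if __name__ == "__main__":
    # Worst case for interlace: a one-pixel white line on a black background.
    frame = [[0] * 8 for _ in range(5)]
    frame[2] = [255] * 8

    for row in soften_vertically(frame):
        print(row)
    # The line now spans three scan lines (roughly 64 / 128 / 64), so both
    # fields carry part of it and it no longer blinks at half the rate.
```

The trade-off is exactly the "blurring" described above: the line loses its crispness on a computer monitor, but on an interlaced display it stops twittering.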