PrePix Virtualization Software

A PrePix Screenshot

Different technologies are modeled by selecting a Pixel Model, which is nothing more than a macro image representing a single pixel of the intended display technology displaying white at full brightness. Some sample Pixel Model images representing LCD and various LED technologies are included, but additional Pixel Models may be added easily by the user.

PrePix includes a graphical interface for setting up a virtual sign with parameters for Pixel Model selection, pixel dimensions, physical sign dimensions, brightness gain, RGB level adjustments, and source video selection. PrePix supports 30fps WMV video playback as well as JPG, GIF, and BMP images.

Source video is automatically up-sampled or down-sampled to match the pixel dimensions of the virtual video surface. Then, a Pixel Model is applied, followed by the Gain and Color Balance layers. The virtual display is viewable in a virtual 3D space.
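
The idea behind the Pixel Model compositing is simple enough to sketch in a few lines. The code below is purely illustrative and is not the actual PrePix renderer (which does this work on the GPU, hence the OpenGL requirement listed below); it just shows the white Pixel Model being tinted by each resampled source pixel, the gain, and the per-channel color balance:

// Illustrative sketch only, not the actual PrePix code: tint the white
// Pixel Model macro image by each resampled source pixel, the gain, and
// the per-channel color balance, then tile the result into the output.
byte[,,] RenderVirtualSign(byte[,,] source, byte[,,] pixelModel,
                           double gain, double[] colorBalance)
{
    int signH = source.GetLength(0), signW = source.GetLength(1);
    int modelH = pixelModel.GetLength(0), modelW = pixelModel.GetLength(1);
    byte[,,] output = new byte[signH * modelH, signW * modelW, 3];

    for (int y = 0; y < signH; y++)
        for (int x = 0; x < signW; x++)
            for (int my = 0; my < modelH; my++)
                for (int mx = 0; mx < modelW; mx++)
                    for (int c = 0; c < 3; c++)
                    {
                        double v = (pixelModel[my, mx, c] / 255.0)   // model shape
                                 * source[y, x, c]                   // source color
                                 * gain * colorBalance[c];           // gain & balance
                        output[y * modelH + my, x * modelW + mx, c] = (byte)Math.Min(255.0, v);
                    }
    return output;
}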

In the 3D virtual space, the user may select from preset views including “Pixel Match,” which attempts to match large virtual LED pixels onto the PrePix user’s computer display so that the user may get a sense of the look of a virtual video surface in a real, physical space. Viewing a real 11mm LED display from 10 feet away should be comparable to viewing such a virtual display Pixel Matched to a 24” LCD display from the same distance, with the only significant difference being brightness. If this kind of modeling is deemed comparable and effective, then PrePix can be used to determine optimal viewing distances for different pixel pitches and LED technologies without requiring a lot of sample hardware.
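
A quick back-of-the-envelope calculation (assuming a typical 24" 1920×1080 monitor; the numbers here are mine, not from PrePix) shows what Pixel Match has to do:

// Assumed monitor: 24" diagonal, 16:9, 1920x1080.
double diagonalMm = 24.0 * 25.4;
double widthMm    = diagonalMm * 16.0 / Math.Sqrt(16 * 16 + 9 * 9); // ~531mm
double lcdPitchMm = widthMm / 1920.0;                               // ~0.277mm per LCD pixel
double ledPitchMm = 11.0;                                           // 11mm LED pixel pitch
double lcdPixelsPerLedPixel = ledPitchMm / lcdPitchMm;              // ~40 LCD pixels
// At true physical scale, the 24" monitor can therefore show only about
// 1920/40 x 1080/40 = 48 x 27 virtual LED pixels at a time.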

PixelMatch close-up

An even more concrete use of PrePix is to determine the effectiveness of video or image content when displayed on certain low-resolution LED signs. The LED signs on the sides of MTA buses in NYC have pixel dimensions of 288×56. They are often sourced with video content that was obviously designed for a higher-resolution display. With PrePix, a content producer can easily preview his video content on a virtual LED bus sign to check text legibility and graphical effectiveness.

288x56-pixel source image

Close-up of 3D video display model

Full view of virtual LED sign. The moiré pattern is comparable to what would be seen in a digital photo of a real LED display.

3Byte would like to develop PrePix further, depending on feedback from users. Please let us know what you think!

PrePix can be downloaded here.

PrePix System Requirements:

  • Windows XP, Vista or 7
  • A graphics card supporting OpenGL 3.0 and FBOs (nearly all cards less than 18 months old)

3Byte can be contacted at info@3-byte.com

Times Square Signs in Slow-Motion

And now, by popular demand, a few high-speed captures of LED signs in Times Square:

(All videos were captured at 1000fps with a Casio Exilim EX-FS10, rendered here as 30fps video.)

When transitioning from one frame to the next, the M&M’s sign and all components of the ABC sign update every pixel on the sign at once. The transition happens within a single frame of my 1000fps capture, so we know it takes less than 1ms. The different components of the ABC sign seem to update at different times because of a sync issue, not an LED technology issue.

LCD displays will lock to incoming signals with a variety of timings, often anywhere from ~57Hz to ~63Hz, depending on the display. When you tell a graphics card to output at 60Hz, it is doubtful that it’s putting out a true 60.000Hz. Depending on the make of the card, the drivers installed, and the phase of the moon, the actual refresh rate will vary quite a bit. LED signs, on the other hand, tend to maintain their own clock, regardless of the video signal driving them. Without a genlock signal keeping the system in sync, the source will likely not deliver frames at exactly the rate the LED sign displays them. The more disparate the source and display refresh rates, the more dropped or doubled frames you will observe, visible to the layman as stuttering.
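
To put a rough number on it, the rate of dropped or doubled frames is simply the difference between the two refresh rates. The figures below are made up, but plausible:

// Hypothetical example: a graphics card outputting a nominal "60Hz" signal
// that is really 59.94Hz, feeding an LED sign free-running at 60.02Hz.
double sourceHz  = 59.94;
double displayHz = 60.02;
double glitchesPerSecond = Math.Abs(displayHz - sourceHz); // 0.08 dropped/doubled frames per second
double secondsBetweenGlitches = 1.0 / glitchesPerSecond;   // one visible stutter roughly every 12.5 seconds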

The Reuters sign, however, DOES update one row at a time, just like an LCD. The passing of the raster takes about 14ms, as we might expect for a sign running at 60Hz:

I cannot speculate as to the implications for their sync mechanism, except to observe that the rasters of the three screen segments visible in this video seem to be in good sync, though the frames themselves do not.

Video Synchronization Testing Part II

In a previous post, I analyzed a video synchronization test from a recent video installation, and suggested that although the synchronization between screens was not perfect, it was certainly satisfactory, and as good as might be expected given the design of the system. Now, let’s see why.

When a multi-screen video system is in proper sync, each video frame begins at exactly the same time. A system that draws at 60fps will draw each frame in approximately 16.67 milliseconds. During those 16ms, an LCD display will update all of the pixels on screen from top to bottom. We will call the moving line across which the pixels are updated the Raster Line. In this slow-motion test video, you can see video frames alternating between full black and full white, updated from top to bottom. The screens are mounted in portrait orientation, which is why the updates happen right to left:

Many of the screens seem to be updating together, but some do not. This is because the system does not include dedicated hardware to ensure that the signals are in sync, so many of the displays begin their updates at different times. The system is frame-synchronized, meaning that all displays begin a raster pass within one raster-pass of each other. It just isn’t raster-synchronized.

If the displays were indeed raster-synchronized, we might represent their signals like so:

Perfect1f.gif

In an otherwise black video, Display 1 flashes a single frame (Frame 1 / F1) of white, followed by a flash of white on Display 2 in Frame 2. In slow motion, we would see the raster line, represented here as diagonal lines, move across both displays in sync, like two Tangueros walking together across a dance floor. The red line represents the particular pass of the raster line at which both displays transition from Frame 1 to Frame 2. In all of these illustrations, the red line indicates a pass of the raster line, or a switch between frames, that in a frame-synchronized system would occur at exactly the same time.

It is important to keep in mind that in this system, video is provided at 30fps, so each frame of video is essentially drawn twice in two consecutive passes of the raster line. This cannot be seen on screen as no change occurs in any of the pixels, but we can see it in our illustration as the diagonal line separating F1a and F1b, the two rendered passes of video Frame 1.

In our system, of course, the raster lines are not synchronized between displays. So even when we intend for both displays to flash white at the same time, it is possible that one display might begin a raster pass at the very end of another’s pass of the same frame:

Imperfect1f.gif

We can certainly hope that the raster lines of any two average displays are nearly synchronized, but in observance of Murphy’s Law, we must always assume the worst case, as indicated above in Displays 1 and 2. Here, we can see that although the frames might be in sync, with all rasters commencing within 17ms of each other, we will expect to see the raster on Display 2 commence just at the end of that of Display 1. This kind of behavior can be seen in our test video on the third and fourth columns, with the raster seeming to pass smoothly from one display to the other. In truth, any case in which two rasters start more than 16.67ms apart demonstrates an imperfect frame sync, but for simplicity we will just say that the worst case is one in which they commence 17ms apart, as illustrated above in Displays 1 and 2.

So, what might this look like in a case where we see flashes in two consecutive video frames on two separate displays in a worst-case scenario?

Imperfect2f.gif

In the case of Displays 1 and 4, we have a point in time, Time X, at which both displays are entirely black. In the case of Displays 2 and 3, at Time X we see both displays entirely white. We still consider them to be in sync, and we should not consider these anomalies to indicate a failure of the synchronization mechanism.

There is a minor issue in the test video that does bear mentioning. The flashes move through four rows of white in each column. There are extremely brief moments during which you might notice a touch of white visible in the first and third rows. This should be obvious after repeated viewing. I leave it as an exercise for the reader to demonstrate why, even in a worst-case scenario, this should not be observed if the frame synchronization mechanism is working properly. (<sarcasm>Yeah, right.</sarcasm>) So why am I not concerned? Because it is the nature of LCD pixels to take some time to switch from full white to full black. Typical response times for the particular displays in this system are specified as 9ms, which means that after the raster line passes and updates a pixel from black to white, it may take an average of 9ms (more than half a raster pass) for that pixel to fully change. I say “average” because white-to-black and black-to-white transitions are usually slightly different, and the spec will mention only the average of the two. If the response time were an ideal 0ms, our raster line would be crisp and clear in our slow-motion capture, but in reality it is not. The raster line is blurry because after it passes, the pixels take time to change between black and white. We can expect that some pixels might remain white for a brief time after we expect them to go dark, resulting in this subtle observable discrepancy.

What does all this mean? It means that in a slow-motion test video of 24 synchronized displays, we observe nothing to suggest that the synchronization mechanism isn’t performing as well as we could hope for. To the viewer, the synchronization is true, and we deem the project a success.

Pixel Perfect Programming

In any project with multiple display devices that need to work together, an important part of the setup process is aligning and calibrating the displays to create a single canvas.  The best way to ensure that the virtual display space is coherent is to use display grids that highlight intersection points while adjusting physical display alignment and video signal positioning.  Grids are not only necessary while calibrating the final image, but are an invaluable template for talking about how to create media that will take advantage of a display surface, rather than be obstructed by it.  Good templates are important and need to be built for the exact dimensions of the canvas.

Not too long ago, this usually meant going to Photoshop or Paint.NET and drawing.  First I would need to create a new document that is the total size of the canvas, taking into account the display pixels as well as any virtual pixels for mullions or gaps between the adjacent displays.  I wanted the grids to be precise, with each grid line exactly one pixel wide, so I would zoom all the way in to draw one pixel accurately.  Then I would count down 96 pixels and draw another line, but it’s really hard to count to 96 on screen, especially when you have to scroll because everything is zoomed so big:

This takes a really long time, but I finally made a complex alignment grid with different colored layers for circles and diagonals, and straight grids.

A finished projection alignment grid

Then another project came along and I had to do it all over.  I tried scaling an old grid to match the new resolution, but then everything started to alias and blur, and I couldn’t actually tell where the precise lines were anymore.

I thought that there must be an easier way.  Without too much trouble I was able to leverage Windows Presentation Foundation to do the job.  I parametrized the important attributes of the grid and created Properties that I could set according to the project:

DisplaySizeX = 1920;
DisplaySizeY = 1080;
DisplayCountX = 4;
DisplayCountY = 6;
GridSpacingWidth = 128;
GridSpacingHeight = 128;
MullionX = 24;
MullionY = 120;
GenerateDiagonals = true;
GenerateCircles = true;
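
From these values the native dimensions of the virtual canvas follow directly; mullions only fall between displays, so there are one fewer of them than displays on each axis (a small sketch using the properties above):

// Native canvas size implied by the parameters above.
int canvasWidth  = (DisplayCountX * DisplaySizeX) + ((DisplayCountX - 1) * MullionX);
int canvasHeight = (DisplayCountY * DisplaySizeY) + ((DisplayCountY - 1) * MullionY);
// With the example values: 4*1920 + 3*24 = 7752 by 6*1080 + 5*120 = 7080 pixels,
// which is what RenderCanvas.Width and RenderCanvas.Height need to be set to
// for the Viewbox preview and the bitmap export below to work at native size.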

Using a Canvas inside a Viewbox, I was able to work with the native dimensions of my virtual canvas, but display an onscreen preview, like a thumbnail.  In XAML, this is all you need:

<Viewbox>
    <Canvas Name="RenderCanvas" Background="Black" />
</Viewbox>

With the parameters set, WPF’s Canvas layout container allowed me to draw to screen with absolute coordinates using primitives like Line, Rectangle, and Ellipse (everything you need for a grid).  Here is a snippet of the loop used to populate the virtual canvas with display devices represented by rectangles:

for (int x = 0; x < DisplayCountX; x++) {
    for (int y = 0; y < DisplayCountY; y++) {
        // One rectangle per physical display, at its native pixel size.
        Rectangle newRect = new Rectangle()
        {
            Width = DisplaySizeX,
            Height = DisplaySizeY
        };
        // Offset each display by the mullions that precede it on each axis.
        newRect.SetValue(Canvas.LeftProperty,
            (double)(x * DisplaySizeX) + (x * MullionX));
        newRect.SetValue(Canvas.TopProperty,
            (double)(y * DisplaySizeY) + (y * MullionY));
        newRect.Stroke = new SolidColorBrush(Colors.Wheat);
        newRect.Fill = new SolidColorBrush(Colors.DarkGray);
        RenderCanvas.Children.Add(newRect);
    }
}

Ultimately, I added the logic to draw all of the alignment elements I needed, including horizontal and vertical grid lines, circles, diagonals and some text to uniquely identify each screen in the array. It looks like this:

This isn’t as detailed as many alignment grids, but it is straightforward to add more elements and colors using the same process.  The important thing is that all the time you spend making a useful template is reusable, because it isn’t locked to a specific dimension.
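
As one example of “the same process,” the vertical grid lines are just another loop over the canvas width. A sketch might look like this (horizontal lines, diagonals and circles follow the same pattern; canvasWidth and canvasHeight are the native dimensions computed earlier):

// Sketch: one-pixel-wide vertical grid lines every GridSpacingWidth pixels.
for (int gx = 0; gx <= canvasWidth; gx += GridSpacingWidth)
{
    Line gridLine = new Line()
    {
        X1 = gx, Y1 = 0,
        X2 = gx, Y2 = canvasHeight,
        Stroke = new SolidColorBrush(Colors.LightGreen),
        StrokeThickness = 1 // exactly one pixel wide at native resolution
    };
    RenderCanvas.Children.Add(gridLine);
}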

Finally, this is what makes the whole process worthwhile: In WPF, anything you can draw to screen, you can also render to an image file. It’s like taking a programmatic screenshot, but with much more control. Cribbing some code from Rick Strahl’s blog, I added a method to save a finished png file to disk:

private void SaveToBitmap(FrameworkElement surface, string filename) {
    // Temporarily clear any layout transform so the render isn't scaled.
    Transform xform = surface.LayoutTransform;
    surface.LayoutTransform = null;

    // Measure and arrange the element at its native size.
    int width = (int)surface.Width;
    int height = (int)surface.Height;
    Size sSize = new Size(width, height);
    surface.Measure(sSize);
    surface.Arrange(new Rect(sSize));

    // Render to an offscreen bitmap at 96 DPI and encode it as a PNG.
    RenderTargetBitmap renderBitmap = new RenderTargetBitmap(width, height,
        96, 96, PixelFormats.Pbgra32);
    renderBitmap.Render(surface);
    using (FileStream fStream = new FileStream(filename, FileMode.Create)) {
        PngBitmapEncoder pngEncoder = new PngBitmapEncoder();
        pngEncoder.Frames.Add(BitmapFrame.Create(renderBitmap));
        pngEncoder.Save(fStream);
    }

    // Restore the original transform.
    surface.LayoutTransform = xform;
}
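
In use, producing the finished grid image is then a single call (the filename is just an example, and it assumes RenderCanvas.Width and RenderCanvas.Height have been set to the native canvas dimensions):

SaveToBitmap(RenderCanvas, "AlignmentGrid.png");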

As a programmer, this saved me a lot of time. Instead of working with graphic design tools, I used the tools I was familiar with and took advantage of WPF’s support for media and imaging. I have attached a complete sample project showing how this first part works.

But this is just the beginning of what this approach is useful for: Using Data Binding, I exposed each parameter as an input field, so the user can interactively build a grid image by adjusting the values and see the thumbnail preview update live. Furthermore, instead of generating a single static image, this same process can be used to produce test video content by updating the RenderCanvas and saving a series of images as a frame sequence. This ends up being much easier than going to a timeline-based non-linear editing station to create very simple graphics with a timecode display.
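
As a sketch of what that might look like (the TimecodeText TextBlock, frame count, and filenames here are assumptions, not code from the sample project), rendering a 30fps image sequence is just a loop around the same SaveToBitmap call:

// Sketch: render ten seconds of 30fps test frames with a simple timecode overlay.
// TimecodeText is assumed to be a TextBlock already placed on RenderCanvas.
int fps = 30;
int totalFrames = fps * 10;
for (int frame = 0; frame < totalFrames; frame++)
{
    int minutes = frame / (fps * 60);
    int seconds = (frame / fps) % 60;
    int frames  = frame % fps;
    TimecodeText.Text = string.Format("{0:D2}:{1:D2}:{2:D2}", minutes, seconds, frames);
    RenderCanvas.UpdateLayout(); // make sure the updated text is laid out before rendering
    SaveToBitmap(RenderCanvas, string.Format("frame_{0:D5}.png", frame));
}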

Video Synchronization Testing

For a recent project, 3byte developed custom graphics software for a 36-screen video wall. This required some kind of synchronization mechanism with which to keep the various screens in sync.  There are dedicated hardware devices like the nVidia G-Sync card that make this sort of thing really simple. However, this project involved driving four video displays from each of our graphics workstations, and we ran into trouble with these cards during initial testing. Instead, we developed our own sync mechanism that runs over the local network.

To test our synchronization, we loaded up a video file that would make the quality of the sync really obvious. Breaking the video vertically into four quarters, on each frame of the video we flash a white rectangle in one of the quarter regions. Like so:

Really, it works better with some good PsyTrance. But what is actually happening? Whatever it is, it’s happening too fast to determine with the naked eye. So I picked up a Casio Exilim EX-FS10 and shot the system at 1000fps:

Far more interesting. The resolution of the video is not great, but it shows us what we need to know.

First of all, you will notice the scrolling of the video from right to left. The screens are actually mounted in portrait orientation, so the motion that you see is actually a top-to-bottom scroll as far as the display is concerned. These LCD displays refresh the pixels on screen 60 times every second, but they don’t all update at the same time. The display updates the pixels line by line from top to bottom, each line updating left to right. This makes sense when we consider that analog and digital video signals order pixel data in this way, just like reading a page of a book. At regular speed, a screen refresh seems instantaneous, but at high speed, we can see the way the screen transitions from black to white one line at a time. This scrolling line across which the pixels are updated we might call the Raster, or the Raster Position.

In this system, the timing of each display’s Raster Line is determined by the graphics card in the computer. Whenever the card says “start drawing the next frame… NOW!” the monitor locks in with that signal and starts the Raster Line at the top of the screen. Had we G-Sync cards for this system, we could tell the graphics cards to chant their “NOW” mantras in unison, and in slow motion we would see the Raster Lines of all the displays being drawn in perfect synchronization. As you can see in the video above, this is not the case for our system, where the lines are updated on different displays at slightly different times. This difference between displays is so subtle that it is never noticed by a viewer. The question is, are the correct frames being drawn on each pass of the Raster?

This system supports source video playback at 30fps, but the displays update at 60fps. Each source video frame is doubled on screen, so a single frame of white flash in our source video will be drawn white in two consecutive passes of the Raster Line on the display. In the slow-motion video, we see the raster line update each screen, then a pause while the subsequent Raster pass draws another frame of white (no change) before moving on to the next source video frame.

If you look at the third and fourth columns of displays, you will see that the Raster seems almost to move straight across from the fourth to the third as it updates the two columns together. Of course this is only an illusion, as the Rasters of the two columns are not actually synchronized. What we are actually seeing is one display in column three that seems to be lagging behind column four by almost a full 17ms Raster pass. (I say 17ms because that is just about the amount of time it takes to refresh a display at 60fps.) This is not ideal, but in a system with no dedicated sync hardware, it is not surprising, and not a deal-breaker. It means that at 30fps, these screens are within a half-frame of perfect sync, which is nearly undetectable to the eye.

In Part II, I provide an analysis of the best possible sync performance for a network-synchronized video system. I explain why the 17ms discrepancy in the video above falls within the tolerances of this system. We are quite pleased with the performance of our synchronization mechanism, and believe that it rivals or surpasses that of other industry-standard network-sync video systems. Next chance I get, I’ll run a similar test on a Dataton Watchout system and let you all know how it goes. Stay tuned.

Software Code Names

What the hell are they for? Besides sounding cool, our buddy Dave Sims said he read somewhere that they are supposed to obscure the actual purpose of the product. In a small shop like ours I don’t think obfuscation is a huge priority, though.

In any case, I went a little overboard a few months ago. We have an ongoing project which consists of a couple of different modules, so I decided to call it all “project sealife” and each component would be a different sea animal. We have:

  • Octopus
  • Seahorse
  • Clam
  • Guppy
  • Lobster
  • Dogfish (the watchdog app, I thought this was particularly clever)
  • Goldfish
  • Starfish

The only problem? No one remembers the names of each other’s modules. Sigh, big fail.

–olaaf

Computers Hiding as Solid State Devices

Prejudice. That is what it’s about. There is an old argument in the Pro AV industry about not using computers as video playback devices, or control systems, or anything else you can imagine that is mission critical.

I remember a few years ago on a certain project in Texas where the AV guys (I was the software consultant) absolutely refused to consider using an off-the-shelf Dell computer and a custom video playback app to run video to the screens. The options given to us were an Alcorn DVM-HD or a GVG Turbo. I mentioned to them that both of these systems were actually embedded PCs with a custom PCI-E video output card (one ran XP Pro Embedded and the other some sort of Linux distro, or maybe even DOS, who knows). However, this didn’t seem to matter. It was all about how the non-computer felt and looked in the racks (and they even threw in the old argument that there isn’t enough rack space). Come on, really? Maybe they didn’t trust my custom software solution, but Josh and I were building the master show control system running on multiple PCs and servers, so I don’t see the logic.

Anyways, that project had a truckload of problems concerning video playback. My argument was that it’s the idiosyncrasies that end up dictating how these systems are run, and by holistically controlling one of the more important parts of the project (the video playback engine) we controlled the idiosyncrasies. After the system finally stabilized by way of buying twice as many devices (because the dual-channel capability actually didn’t work too well in practice), the main issue was that the 74GB HDD was way too small, but it was the biggest WD Raptor drive available. Oh well.

So, back to mission-critical stuff running on PCs. Here is a story about BAE Systems installing Win XP & 2k as a mission-critical command and control system in the Royal Navy’s Trafalgar and Vanguard class nuclear subs:

BAE Win XP and Win2k on subs

How about that? If we dial it back to the level of AV systems, consider personal mission-critical stuff. Like your communications device. I bought an iPhone last week. It’s basically a mini computer. It’s now loaded with over 40 applications, and who knows who wrote these things? I’m not afraid this thing is going to “crash” if I dial 911, so why would you be afraid if a video screen somewhere glitches for an hour or two? It’s obviously different if you are in a theatre situation, but that is what backup hardware, and, ahem, professional-quality code and professional project management, is for.

PS, there is a whole slew of pictures on the internet of random massive LED screens in Times Square and other places showing Windows error screens that stay up for DAYS. That’s just pathetic and reeks of poor planning. Why would you drop 1+ mil on that thing and not have a plan to service & monitor it?

Here are some fun pics:

appcrash1.jpg
BSOD11.jpg

What are we doing?

So, if anyone other than a page crawler is reading this, you may be wondering why olaaf, josh, and chris decided to set up a blog.

Well, chris and olaaf went to a business conference a month ago (http://businessofsoftware.org/) and came back with some new ideas of how to stand out in the vast sea of interactive video programmers and AV system designers. We could stand on the street corner and hand out business cards, or we could start a blog and hope someone will read it and be intrigued by our boring lives.

–olaaf