Robot Swarm

We've created a video that explains Robot Swarm, the interactive robotics exhibit Three Byte produced for the National Museum of Mathematics. It was a challenging project in many ways, and we are very proud of what we have created. 

Presenting Robots

After our paper describing a new collision avoidance scheme was accepted at the beginning of the year, we got the opportunity to present it in person at ICRA 2015 in Seattle. It was a lightning round with only 3 minutes to introduce concepts that took a long time to develop, but the audience was ready for it.

If you are really interested, you can read the full paper here. What was particularly cool was meeting the authors of much of the work that our new algorithm builds on.

Chris Keitel and Sam Engel from Three Byte with Stephen Guy and Jur van den Berg (the ORCA guys!)


ICRA 2015 Acceptance!

We submitted a paper on our multi-robot collision avoidance work to the International Conference on Robotics and Automation, and we've been accepted for publication in the Proceedings of the 2015 IEEE ICRA!  We'll post the final version of the paper once it is submitted, but the gist is that we developed an enhancement to the ORCA algorithm which helps robots collaboratively figure out how not to run into each other, particularly when they are trying to cross paths or maneuver in tight areas near walls.

The video below shows the robots trying to swap positions and drive to the point opposite them on the circle.  This obviously has the potential for a traffic jam because the shortest path for every robot is straight through the middle of the circle.  Both ORCA and our enhanced algorithm (called EVO) solve the problem so the robots never run into each other, but EVO gets them there faster and along paths that look more natural.  In the paper, we quantify this with lots of trials, but in the video you can see that the robots never have to turn around and seem to gracefully get where they want to go.  This contributes to making the robots seem more intelligent and more life-like.

Robot Swarm

We're really excited that MoMath's Robot Swarm project is about to open to the public.  We've been working on this project for MoMath for over two years, and it's finally ready to go live on Sunday!  We got a really good write-up in the New York Times, and Engadget got some good photos. The Verge also conducted a nice interview with Glen Whitney.

Big thanks to the executive team at MoMath for the idea and the initiative, as well as to New Project, the exhibit fabricators, who have been totally amazing and did a great job building the giant aquarium for the robots to play in.


Software Development Fractals

Despite differences in their tools and skill sets, designers and developers work on similar problems. Designing classes, interfaces and other software components is an exercise similar to UI design, because each component needs to be intuitive to use while being fully featured and configurable. Code, like a UI, needs to be written with the consumer in mind. Decisions about naming, architecture, what to log, what to test, what tools to write, how to handle configuration, and so on are all questions of user experience. In other words, programming is not unlike having to design and name hundreds or thousands of text-driven UIs, and great names are not always obvious. For example, Chris and I took a full few minutes to decide on a name for the Tuning Chamber's abstraction-layer interface, which mediates input from hallway sensors and lighting output (we decided on “IMotionManager”). Components are nested and bifurcate in a seemingly infinite tree of encapsulation and dependencies.
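To make that concrete, here is roughly the shape such an interface takes – a minimal C# sketch with hypothetical member names, not the actual Tuning Chamber code:

using System;

// Hypothetical sketch: the abstraction hides which sensor fired and
// exposes only what the lighting logic needs to know about motion.
public interface IMotionManager
{
    // Raised when the inferred position of a visitor in the hallway changes.
    event EventHandler<MotionEventArgs> MotionDetected;

    // True while any hallway sensor currently reports activity.
    bool IsMotionActive { get; }
}

public class MotionEventArgs : EventArgs
{
    // Normalized position along the hallway: 0.0 = entrance, 1.0 = far end.
    public double Position { get; set; }
}

Even in this toy version, every name (MotionDetected? SensorFired? Position as a normalized double?) is a small UX decision made on behalf of the next developer.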

The “scale-invariant” nature of software is the reason we can talk about software architecture or a programming language as either “high” or “low” level. Writing software is about grappling with and taming the complexity that emerges from various layers of abstraction and embedded functionality. The evolution of a software project is not unlike the coastline paradox in the way new levels of complexity emerge as the perspective changes. This emergent complexity contributes to the difficulty of projecting how long it will take to complete a software project or feature.

The coast of Britain seen at very different scales on Google Maps:

The H-Tree Fractal is used in the VLSI process for laying out transistors on integrated circuits as an area-efficient embedding of binary trees (Wolfram Demonstrations Project):

The fractal nature of software architecture explored through Visual Studio's built-in dependency-graph tool - a small section of the Tuning Chamber Project:

Amichai

The Tuning Chamber UI

For a good part of the past three weeks, Three Byte has been working on a “Tuning Chamber.” The software is a control system responsible for monitoring, visualizing and routing between a wide range of input sensors and visual effects, including columns of light animations, a projector, a fog machine and a hallway full of speakers that play at different volumes, fade between sources and respond to the position of people in the space.

From a software perspective this project posed many unique challenges, including how to assimilate information from a wide range of potentially noisy sensor events to infer the position of the person or people responsible for triggering those events. Additionally, this project has been interesting from a UI design and user-experience perspective. For our software to be an effective tool for our clients, it must allow them to:

  1. Visualize and immediately understand the state of the system and the connectivity of its many "moving" parts
  2. Configure and visualize the configuration of animations and outputs to a variety of channels from a library of animations, assets and audio streams
  3. Easily configure a potentially complex mapping between sensors and output channels
  4. Automatically test and visualize the responsiveness of individual sensors and the resultant output effects on the UI
  5. Be useful whether or not the software is connected to all its physical hardware

Also, interaction with the software needs to be fluid, but intelligent enough to protect against mistakes and information loss. The software should make user interaction intuitive and easy to understand while also exposing a fairly complex feature set. It needs to expose differentiated user experiences for different users and use cases, including monitoring, configuring and testing.

In this blog post, Jeff Atwood addresses the hazard of letting software developers build user interfaces and illustrates his point with examples. Atwood's post highlights how important it is to invest time and resources in planning an intuitive and good-looking UI. Dealing with the user's experience of our software up front is an essential part of the development process at Three Byte and helps us to:

  1. Reflect and document the needs and priorities of our client
  2. Highlight the components, work-flows and themes that deserve the most focus
  3. Help inform the evolution of our software's architecture and object model
  4. Augment the development process by serving as a workbench for gaining visibility into key processes and exposing tools that assert the code's behavior and "wiring"

The Tuning Chamber UI was designed to expose the state of the Tuning Chamber system and all its constituent input/output components, and it also exposes power tooling for testing the software when hardware dependencies are missing. The Tuning Chamber UI allows a user to simulate a sensor event with mouse click or hover operations and see the audio-visual output behavior of the system on screen. These tools, complemented by our unit tests, were an invaluable asset in developing the functionality we were targeting.
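The trick that makes the simulation honest is hiding the hardware behind the same interface the rest of the system consumes. A minimal sketch of the idea (hypothetical names, not our production code):

using System;

// Both the real hardware poller and the on-screen simulator implement
// this, so downstream logic cannot tell a mouse click from a sensor hit.
public interface ISensorSource
{
    event Action<int> SensorTriggered;   // payload: sensor index
}

// Used when no hardware is attached: the UI calls Click() from a mouse
// click or hover handler on the sensor's on-screen icon.
public class SimulatedSensorSource : ISensorSource
{
    public event Action<int> SensorTriggered;

    public void Click(int sensorIndex)
    {
        var handler = SensorTriggered;
        if (handler != null) handler(sensorIndex);
    }
}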

The Tuning Chamber UI


Amichai

Project Janus – delivering time-synchronized picture collages to your iPhone since 2013

 

Many of history’s most important software projects were designed around a specific problem that needed solving or a specific process that could be improved on. The application and problem domain for Project Janus, however, are not so well defined. More like an experiment or research project, Project Janus was built around a technology goal and a question: what level of synchronization and coordination can be achieved between an arbitrary number of mobile devices?  In the Janus app, users can join a “session” where the camera or flash of all phones can be controlled by a trigger on one or many of the individual phones, and we work to achieve a high level of timing resolution and accuracy.

We thought about some of the potential applications for collaborative and synchronized picture taking.  If several people at a party can take a picture at precisely the same time from different angles, then maybe we could reconstruct a more comprehensive view of the moment.  By utilizing image stitching or collage-making tools, we figured the resulting imagery might be much more than the sum of the parts.  Or, in a nod to The Matrix, what if a group of friends could put together a quick bullet-time effect in the park for free using just their iPhones?  At least this was exciting to think about.  With Janus’ mobile device synchronization model in place, we hope to discover and facilitate the development of new applications and APIs that leverage this capability.

The roots of Project Janus lie in conversations held over company lunches in the Three Byte conference room, where we discussed ideas for an in-house project to work on. Over time, some consensus regarding the character of this project began to emerge – we wanted this in-house project to be:

  1. Small – something that could be iterated on and demonstrated in a period of weeks.
  2. Challenging – it’s not fun otherwise.
  3. A learning experience (for us) – as a software developer, stepping outside of my comfort zone and learning radically new technologies is one of the most challenging parts of my job. Learning a new programming language, not unlike learning a spoken language, is a disorienting experience, but it is essential to the software development profession and one of the most fulfilling parts in the long run. The team at Three Byte saw this in-house project as an opportunity to expand the scope of our skills and to try strange and new technologies.

Among other ideas, we considered a developer-facing tool for visualizing log data or facilitating code review, a CMS designed specifically for museums, a public transportation route optimizer, and an app to communicate and visualize data to a user about his or her daily commute.  The idea that was finally agreed upon was an iPhone synchronization app giving one iPhone control over many. Credit for this idea goes to Olaaf, who pitched it to the office on a quiet afternoon in September.

Working at a small custom software development company means that most of my time is spent writing and fixing software that belongs to and benefits our clients. Having the time and energy to work on a fun and low-pressure in-house project is an amazing luxury made possible by a meticulously planned development process.

At the heart of the Janus project is the Janus synchronization model: WebSockets via SignalR, plus ongoing monitoring of network latency to enable device-specific latency compensation. Synchronization is tested by pointing all of Three Byte’s iPhones (connected to the internet in various ways) at a high-resolution stopwatch and comparing the resultant Janus app images – results are getting better and look promising. Captured photos are uploaded to a Three Byte server, where a collage maker process takes all the uploaded images and builds a collage.
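The offset math underneath is essentially NTP-style. A minimal C# sketch of the idea (hypothetical names; the phone side is actually Objective-C):

// Estimate the server/local clock offset from a ping round trip, then
// convert a server-scheduled trigger time into a local delay.
public class ClockSync
{
    // Estimated (server clock - local clock), in milliseconds.
    public double OffsetMs { get; private set; }

    // t0 = local time the ping was sent, serverTime = server's timestamp
    // in the reply, t1 = local time the reply arrived. Assumes symmetric
    // latency: the server stamped its clock about half an RTT after t0.
    public void OnPingReply(double t0, double serverTime, double t1)
    {
        double rtt = t1 - t0;
        OffsetMs = serverTime - (t0 + rtt / 2);
    }

    // How long to wait locally before firing the shutter for a trigger
    // scheduled at serverTriggerTime (expressed in server-clock time).
    public double MillisecondsUntil(double serverTriggerTime, double localNow)
    {
        return (serverTriggerTime - OffsetMs) - localNow;
    }
}

Averaging several ping replies and discarding outliers tightens the estimate – that is the “ongoing monitoring” part.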

Our first Janus collage:


Synchronization testing: 


 

The work I did on the iPhone app part of the project meant abandoning years of acquired fluency in Visual Studio and the .NET stack and having to learn iOS development with Objective-C and Xcode. Programming in this environment felt like waking up in the wild west – none of my keyboard shortcuts worked anymore, instincts that used to quickly reorder or refactor my code now teleported me to a different document entirely, and I was too busy learning a new type system to worry about unit testing. The process has gotten better, but I look forward to getting back to C#.

In a little more than one week of work, our team built a prototype version of the app and made great strides in solving the synchronization problem. After spending an hour field-testing the app in Madison Square Park, we have a new list of features to implement, network connections to profile and bugs to tackle. Project Janus is being put on hold for now, but expect to see great things in the future!

Amichai

The Mathenaeum – A story about testing and use cases

The importance of unit testing to the software development process is by now well established. Advantages include: a) demonstrating the functionality of code units, b) highlighting any unwanted side effects caused by new changes, c) a B. F. Skinner-esque positive feedback system reflecting the progress and success of one’s development work. Perhaps most importantly, seeing code fail to perform as desired gives visibility into each successive point of failure and serves to motivate the development process. In general, you can’t fix the bugs that you can’t see, and the importance of baking QA into the development workflow cannot be overstated. Unit testing, regression testing and continuous integration are an essential part of the software development process at Three Byte.

The Mathenaeum exhibit, built for the Museum of Mathematics that opened this past December, is a highly optimized, multi-threaded piece of 3D graphics software written in C++ with OpenGL and the Cinder framework. The algorithms employed for applying a wide range of geometric manipulations to complex objects across multiple threads were challenging, but for me the most challenging and edifying part of this project was the problem of hardware integration and effective testing. More specifically, working on the Mathenaeum taught me about the difficulties associated with, and the creativity required for, effective testing.

Unlike some software deadlines, MoMath was going to open to the general public on December 15 whether we were ready or not. At Three Byte we were balancing the pressure of getting our product ready to deliver against the knowledge that long nights and stressful bouts of overtime can introduce more bugs than they fix. Just before opening day, functionality on the Mathenaeum was complete. And we delivered…and the museum opened…and things looked fine…but every so often it would freeze. The freezes were infrequent, most visitors had a successful experience, and the show control software that we wrote made it trivial for the MoMath floor staff to restart a frozen exhibit from a smartphone, but even an infrequent crash means a frustrated user and a failed exhibit experience, which was devastating to me.

Visitors at work in the Mathenaeum

Effectively testing the Mathenaeum was a challenge. The first issue I solved was a slow leak of OpenGL display lists that weren’t being disposed of properly. This leak was aggravated by a bug in the communications protocol we had set up with a set of five LCD screens embedded in the Mathenaeum control deck. To set the screen state for the Arduinos, we were creating and opening Windows Sockets 2 objects (SOCKET) but failing to close them. This meant we were leaking object handles and fragmenting memory, which caused the leaking Mathenaeum to crash after using only 100 MB of memory.

Visual Leak Detector for C++ was helpful in finding leaks, but in the end, tracking the correlation between memory consumption in the task manager and various operations was sufficient for localizing all the memory leaks. Despite plugging all the memory leaks, the sporadic crash/freeze persisted, and no matter what I tried, I could not reproduce the bug on my development machine. Visibility into this issue was basically zero.

Everyone knows that a developer cannot be an effective tester of his or her own software. Therefore, when trying to reproduce the Mathenaeum crash, I would try to inhabit the psyche of a person who had never seen this software before and was feeling their way around for the first time. Everyone at Three Byte tried to reproduce this bug, but to no avail. So, I started spending time at MoMath observing the interactions that happened there. Lots of adults and kids took the time to build stunning creations in 3D and took the care to stylize every vertex and face with artistic precision. Some people were motivated by the novelty of the physical interface and the excitement of experimenting with the various geometric manipulations, and others seemed motivated by a desire to create a stunning piece of visual art to share with the world in a digital gallery. In addition, the most popular creations were printed by a nearby 3D printer and put on display for all to see. I saw a mother stand by in awe as her eleven-year-old son learned to navigate the software and spent hours building an amazing creation. Watching people engage with my exhibit inspired me in a way I had never felt before and made me extremely proud to be a software developer.

However, I also saw a second type of interaction which was equally interesting. MoMath hosts a lot of school trips, and it’s not uncommon for the museum floor to be “overrun” by hundreds of girls and boys under the age of eight. For these kids, the Mathenaeum is an amazingly dynamic contraption. The trackball (an undrilled bowling ball) can be made to spin at great speeds, the gearshift is a noise maker when banged from side to side, and the throttle generates exciting visual feedback when jammed in both directions. For this particular use case, the Mathenaeum is being used to its fullest when two kids are spinning the trackball as fast as possible while two others work the gearshift and throttle with breakneck force. It soon became clear to me that the Mathenaeum was failing because it had never been tested against this second use case.

The first step in stress testing the Mathenaeum was making sure that my development machine used the same threading context as the production machines. Concretely, the Mathenaeum explicitly spawns four distinct threads: a) a render-loop thread, b) a trackball polling thread, c) an input polling thread, d) a local visitor/RFID tag polling thread. The physical interface on my development machine, being different from the trackball, gearshift and throttle on the deployment machines, was using only one thread for trackball and input polling (both emulated with the mouse). Replicating the deployment environment meant enforcing a threading context that was consistent in both places. In retrospect, this change was obvious and easy to implement, but I hadn’t yet realized the importance of automated stress testing.

My observations at the museum inspired the construction of a new module called fakePoll(), which is responsible for injecting method calls into the two input polling threads as fast as my 3.20 GHz Intel Xeon processor will allow. This flood of redundant calls (similar, perhaps, to a team of second graders) works both input threads simultaneously, triggering all types of operations (and combinations thereof) and navigating the Mathenaeum state machine graph at great speed. In short, fakePoll() made it possible to easily test every corner of Mathenaeum functionality and exercise all the locks and mutexes and race conditions that could be reached. Unsurprisingly, I was now able to crash the Mathenaeum in a fraction of a second – a veritable triumph!
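The exhibit code is C++, but the shape of the idea fits in a few lines of C# (the handler delegates here stand in for the real polling entry points):

using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch of a fakePoll()-style stress test: flood both input polling
// paths with random operations from two threads, as fast as possible.
public static class FakePoll
{
    public static void Run(Action<int> handleTrackball,
                           Action<int> handleInput,
                           CancellationToken token)
    {
        Task.Factory.StartNew(() =>
        {
            var rng = new Random(1);
            while (!token.IsCancellationRequested)
                handleTrackball(rng.Next(-100, 100)); // random spin deltas
        });
        Task.Factory.StartNew(() =>
        {
            var rng = new Random(2);
            while (!token.IsCancellationRequested)
                handleInput(rng.Next(0, 5)); // random gearshift/throttle/button ops
        });
    }
}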

Given a failing test, I had new visibility into the points of failure, and I started uncovering threading problem after threading problem: numerous deadlocks, inconsistent states, rendering routines that weren’t thread safe, and more. With every fix, I was able to prolong the load test – first to a fraction of a second, then to a few seconds, then to a minute, then a few minutes. Seeing all the threading mistakes I had missed was a little disheartening, but it was an important learning experience. Injecting other operations into other threads, such as an idle timeout to the attract screen and various visitor identification conditions, exposed further bugs.


In a single-threaded environment a heap corruption bug can be difficult to fix; however, by peppering your code with _ASSERTE(_CrtCheckMemory()); it’s possible to do a binary search over your source code and home in on the fault. In a multithreaded application, solving this problem is like finding a needle in a haystack.

After spending hours poring over the most meticulous and painstaking logs I have ever produced, I finally found an unsafe state transition in the StylizeEdges::handleButton() method. This bug – the least reproducible and most elusive of all the Mathenaeum bugs I solved – exposed a weakness in the basic architectural choice on which the whole Mathenaeum was built.

The state machine pattern is characterized by a collection of states, each deriving from a single base class, where each state is uniquely responsible for determining a) how to handle user input in that state, b) what states can be reached next, c) what to show on screen. The state machine design pattern is great because it enforces an architecture built on components that are modular and connected in an extensible network. In the state machine architecture, no individual component is aware of the global topology of states, and states can be added or removed without any side effects or cascade of changes. In the Mathenaeum, the specific set of operations and manipulations that a user can perform with the gearshift, button and throttle depends on where that person stands within the network of available state machine states.

When a user navigates to the stylizeEdges state in the state machine, they are able to set the diameter of their selected edges and then change the color of these edges. After setting the color of the edges, we navigate them to the main menu state with the call:

_machine->setState(new MainMenuState(_machine));

The setState() method is responsible for deleting the current state and replacing it with a newly created state. At some point, I realized that if the user sets all selected edges to have diameter zero, effectively making these edges invisible, it doesn’t make sense to let the user set the color of these edges. Therefore, before letting the user set the edge color I added a check to see if the edges under inspection had any diameter. If the edges had no diameter, the user would be taken directly to the main menu state without being prompted to set an edge color.

This change set introduced a catastrophic bug. Now, the _machine->setState() call could delete the stylizeEdges state before the handleButton() method had exited. In other words, the stylizeEdges state commits premature suicide (by deleting itself), resulting in memory corruption and an eventual crash. To fix the bug, I just had to ensure that the handleButton() method returned as soon as _machine->setState() was called.
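In outline, the fix looks like this – a C# rendering of the control flow (the exhibit itself is C++, where setState deletes the current state outright, which is what makes running any further code in handleButton a use-after-free):

public abstract class State
{
    protected readonly StateMachine _machine;
    protected State(StateMachine machine) { _machine = machine; }
    public abstract void HandleButton();
}

public class StateMachine
{
    private State _current;
    // In the C++ original, this also deletes the old state object.
    public void SetState(State next) { _current = next; }
}

public class MainMenuState : State
{
    public MainMenuState(StateMachine machine) : base(machine) { }
    public override void HandleButton() { /* main menu input handling */ }
}

public class StylizeEdgesState : State
{
    public StylizeEdgesState(StateMachine machine) : base(machine) { }

    public override void HandleButton()
    {
        if (SelectedEdgeDiameter() == 0)
        {
            // Invisible edges: skip color selection entirely...
            _machine.SetState(new MainMenuState(_machine));
            return; // ...and return immediately; in C++, "this" is already gone.
        }
        // ... otherwise prompt the user for an edge color ...
    }

    private int SelectedEdgeDiameter() { return 0; } // placeholder
}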

Now my load test wasn’t failing, and I was able to watch colors and shapes spinning and morphing on screen at incredible speeds for a full hour. I triumphantly pushed my changes to the exhibit on site and announced to the office: “the Mathenaeum software is now perfect.” Of course it wasn’t. After about five hours of load testing the Mathenaeum still crashes, and I have my eye out for the cause, but I don’t think this bug will reproduce on site anytime soon, so it’s low priority.

Some Mathenaeum creations:

Amichai

Low Latency Synchronization over the internet

Previously, ActiveDeck was able to stay in sync within about 4 seconds between the PowerPoint computer and its neighboring iPads. However, this wasn’t good enough for us – our background is in show control systems and frame-accurate video playback systems. We have a curse of over-analyzing every video playback system we see for raster tear, frame skips and sync problems. Given that ActiveDeck relies solely on the internet, we thought 4 seconds was pretty good, considering. But we wanted to make it much, much better.

We thought about the best way to improve the latency, and our initial thinking leaned towards a local network broadcast originating from the computer running PowerPoint, but that introduces issues on WiFi networks, especially those you would find in hotel ballrooms. A VPN to the cloud service would be another option, but it adds lots of complexity.

We ended up using pure HTTPS communications (no sockets, no VPN, no broadcasts) to and from the cloud servers with the use of some clever coding. If the iPad has internet connectivity, it will be in sync.

Check out the video; this is over a cable modem internet connection and a plain Linksys WRT54G access point. Our Windows Azure servers are at least 13 router hops from our office. The beautiful part is that the sync messages are tiny, so this will scale to hundreds of iPads.

Kinect: Cheap Key

The 3Byte R&D lab recently purchased a Microsoft Kinect to play with. We didn’t mind the fact that we don’t have an Xbox to plug it into, because Code Laboratories has published an SDK which allows you to use C# (and several other high-level languages) to access the camera feed. In fact, the test app that they distribute is very cool for immediately figuring out why this device is different from a normal web cam:

In addition to providing a normal color camera video stream (with red, green, and blue pixels), it also provides another dimension (literally) of depth information in a separate parallel stream. The picture above is me sitting at my desk, and the depth feed has been colorized to give a rough indication of where different objects are in the frame.

So, how do we do something useful with our new toy?

One thing that we immediately decided to try is Kinect Keying. The concept is similar to chroma keying but instead of requiring a solid blue or green colored background, we use the depth information from the Kinect to extract only the elements at a certain physical depth. I tackled this problem in a proof-of-concept project using WPF.

The important transformations happen in two steps:

  1. First, I create a mask by capturing the depth frame from the camera and choosing a specific depth value to isolate (plus or minus a margin of error). For every pixel in the depth frame, if it is within the desired depth slice, I keep it; if it is closer or farther away, I set that pixel to 0 so that we ignore it.
  2. Second, I combine the new depth mask with the normal incoming video signal: if the pixel from the depth mask is greater than 0, I keep the video pixel; otherwise, I set the video pixel’s alpha to 0.0 so that it is totally transparent. (See the sketch below.)
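As a rough C# sketch of those two steps (illustrative only – the real project does this per pixel in WPF, and this version assumes the depth and color frames are the same size and registered, which, as noted below, is not quite true):

// depth[i] is the depth value for pixel i; bgra holds the color frame,
// 4 bytes per pixel, with alpha in the 4th byte.
public static void KeyByDepth(ushort[] depth, byte[] bgra,
                              int target, int tolerance)
{
    for (int i = 0; i < depth.Length; i++)
    {
        // Step 1: mask - keep only pixels inside the chosen depth slice.
        bool keep = depth[i] >= target - tolerance &&
                    depth[i] <= target + tolerance;

        // Step 2: composite - fully transparent outside the slice.
        bgra[i * 4 + 3] = keep ? (byte)255 : (byte)0;
    }
}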

Combine this with a background image, and we can send Mr. Gingerbread man on a trip to the desert:

The upper left-hand corner is the normal video feed of G-Bread standing on his desk. To the right is the grayscale version of the simultaneous depth feed from the camera. Anything in black is either too close or too far away for the camera to perceive it, but that is ok, because we care about a particular section of the mid-field here.

On the bottom left is the depth mask I created by specifying a specific depth slice. The sliders at the bottom of the screen allow you to easily adjust the desired depth and the tolerance (how much depth) to slice.

Finally, on the lower right is the composited image with a static background. As you can see, this is a bit primitive because the incoming depth signal is somewhat noisy and it isn’t perfectly registered with the video image (there are two cameras in slightly different positions). But this demonstrates that a cheap keying effect is possible without specialized hardware or sets.

The source code as a Visual Studio project is available here: KinectDepthSample

With thanks to Code Laboratories for their great SDK and managed libraries, and to Greg Schechter for his series of articles on leveraging GPU acceleration through pixel shaders in a managed environment.

100 iPads

Have you ever wondered what 106 iPads look like when packed as densely as possible? Here is a picture:

For a recent project, we developed a synchronized iPad display app. The project was to support a presentation with a new method of interacting with the participants. The designers liked the idea of handing out iPads to which they could “push” the content they wanted, when they wanted.

So, we fired up Xcode and built the iPad app. The application is made up of several modes of operation; the main mode displays content driven by the presenter, so that on cue all of the iPads display new screens without any interaction by the person holding the iPad. This looks pretty awesome when it gets triggered and you can see all of the iPads change their screens at once.

Other modes are sort of like tests or drills, where the users complete a quiz and then submit that data to the presenter. We have another application that creates graphs based on the statistics from all of the iPad users, which the presenter can speak to when it is projected on a large screen in the center of the room.

When we set out to design the system, we had to think through the potential bottlenecks. Our main concern was network latency, so after some research we specified the best wireless access points we could find: Ruckus Networks. See these links:

http://www.ruckuswireless.com/

http://www.tomshardware.com/reviews/beamforming-wifi-ruckus,2390.html.

We ended up with 5 access points and a network controller on a gigabit network. It worked great (a little bumpy the first day of the presentation due to a faulty access point).

Next, we created a back-end system where the content is stored locally yet can be updated during the presentation. Using IIS, we posted the images and XML files on a local (to the event network) web service. We then wrote a multi-threaded socket server on another computer that was dedicated to triggering page turns and mode changes and initiating fresh content downloads to the iPads.

Here is a video from some initial sync testing; this is all running from one access point and triggered by Chris and his PC.

Network Shutdown

Computers in AV Systems

All of the AV systems I’ve worked on recently include at least one computer. Because Windows computers are so general-purpose and typically inexpensive, they can be used for interactive touchscreen kiosks, video playback, audio playback, or many other useful functions.

However, one thing computers don’t do well is listen to control systems (at least out of the box).
The most essential function of an AV control system is to turn everything on at the start of the day, and turn everything off at the end of the day. Not only does this protect the equipment (especially monitors and projectors), but it is also the green thing to do. Everyone is paying more attention to reducing power consumption, particularly when the system is not even being used. So we want to be able to turn non-essential computers on and off, too. And, it turns out this is not as easy as it should be. This post describes the ways that I have developed to handle this problem gracefully.

Startup

This is the easier of the two problems. Most modern computers include a BIOS setting that prevents the power to the Ethernet adapter from being turned off completely, so that the network adapter will respond to a Wake-On-LAN magic packet over the network. Even when the computer is turned off, you can power it up by sending out a special command that includes the MAC address of the computer. I have written a Crestron module and a C# library to perform this function, and you are free to use them and see how it works.
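The magic packet format itself is simple: six 0xFF bytes followed by the target MAC address repeated 16 times. A minimal C# sketch of a sender (our published library may differ in the details):

using System.Net;
using System.Net.Sockets;

public static class WakeOnLan
{
    // mac is the 6-byte hardware address of the computer to wake.
    public static void Wake(byte[] mac)
    {
        var packet = new byte[6 + 16 * 6];
        for (int i = 0; i < 6; i++) packet[i] = 0xFF; // sync stream
        for (int rep = 0; rep < 16; rep++)            // MAC x 16
            mac.CopyTo(packet, 6 + rep * 6);

        using (var udp = new UdpClient())
        {
            udp.EnableBroadcast = true;
            // Port 9 ("discard") is conventional; the NIC only inspects the payload.
            udp.Send(packet, packet.Length,
                     new IPEndPoint(IPAddress.Broadcast, 9));
        }
    }
}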

Some older computers do not have network adapters or power supplies that support Wake-On-LAN. In this case, you can punt and set the BIOS to turn the computer on at a specific time of day. Even if you don’t know exactly when it needs to be on, you can still reduce the computer’s duty cycle by judiciously setting a daily startup time.

Shutdown

This is actually the difficult part. I couldn’t find any built-in way to tell a Windows computer to shut down on command. You would think this should be easy, but computers are designed to protect the user from the outside world by default, so they don’t let anybody tell them what to do.

So, I wrote a small C# console application that runs in the background and listens on a well-defined network port for incoming messages. When it gets a “SHUTDOWN\x0D\x0A” message (with CR and LF appended), it issues the shutdown command to the operating system. This could work on any operating system, but I’ve implemented it for Windows, and the critical line of code looks like this:

System.Diagnostics.Process.Start("shutdown", "/s /f /t 3 /c \"Control System Triggered Shutdown\" /d p:0:0");
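For context, the listener loop around that call is only a few lines. A stripped-down sketch, assuming the port and message format described in this post:

using System.Net;
using System.Net.Sockets;
using System.Text;

class NetworkShutdownListener
{
    static void Main()
    {
        using (var udp = new UdpClient(16009)) // the well-defined port
        {
            while (true)
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                string msg = Encoding.ASCII.GetString(udp.Receive(ref remote));
                if (msg == "SHUTDOWN\r\n") // CR/LF-terminated command
                    System.Diagnostics.Process.Start("shutdown",
                        "/s /f /t 3 /c \"Control System Triggered Shutdown\" /d p:0:0");
            }
        }
    }
}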

The compiled application is called NetworkShutdown. Unzip it and put a copy on the computer you want to control, and add a shortcut to it to the Startup folder. You also need to make sure that UDP port 16009 is open in the Windows Firewall.

Then, any control system that can send UDP packets can be used to control this computer. For example, from Crestron just send an ASCII string like “SHUTDOWN\x0D\x0A”.

Conclusion

At the end of the day, it doesn’t take that much more programming work to ensure that computers can be turned on and off with your media system, and you can save a lot of energy in the process. When being green is easy, why not?

Passwords in an Enterprise (or small business)

As the IT admin and part owner of a software startup, I’ve had to manage multiple servers and services: the SBS 2008 domain controller, a WatchGuard firewall set up to use RADIUS to authenticate VPN against Active Directory, a Subversion repository, a MySQL DB backing a Redmine installation, multiple MS SQL Server DBs with SA accounts backing various development projects, an internal FTP, and a slew of other local services, all in addition to the default local admin logons, bank account logons, QuickBooks, Amazon, GoDaddy, insurance websites, etc. The list goes on for about 40 discrete user accounts and passwords. Then, I have my personal passwords to deal with, like my iTunes account, Gmail, E*Trade, Quicken, bank account, Facebook, LinkedIn, home computer, etc.

In addition, for each of our consulting project installations, we have a slew of new passwords and user names for various computers and systems.

I had a simple system in 2000 or 2001: use the same password! Of course, this isn’t very secure, and it never quite worked – each system or service had a slightly different password policy.

So, I started using a password management system. Meaning, of course, an Excel doc with all of my passwords.

Then in 2009 I took classes for an MCITP program (the Windows server admin certification program – they changed the name from MCSE for some reason), and one of the lecturers was a security expert who spoke about how nearly everyone uses “1” or “!” as their number or special character in a “complex” password. I was taken aback, because sure as hell I was doing that. He spoke about the need to use a password management system and to use passphrases. He also said it’s okay to write down your passphrase on a Post-it note and put it in your wallet.

So, about passphrases and the wallet thing first. The reason passphrases are better than a simple password is that they are long, yet simple to remember. The lecturer spoke about how Windows XP and Server 2003 used an LM hash, which broke your password into two uppercased, zero-padded 7-character halves. So, it was super easy to crack with a brute-force or time-memory trade-off attack – for example, this free application can crack LM hash passwords in a snap: http://ophcrack.sourceforge.net/ . I cracked my home computer logon account with this software and freaked out about how easy it was. There is even a paid CUDA-accelerated version, for those with big NVIDIA cards.

The deal with a passphrase is that it is typically longer than 14 characters (too long to even be stored as an LM hash) and really hard to brute force unless you are the NSA. Imagine your password is “don’t forget the Ajax”. That is 22 characters, really fast to type, and really hard to crack. In addition, if you write it down on a Post-it note and keep it in your wallet, there is a good chance the guy who steals it will think it’s a shopping reminder, not a password.
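A back-of-the-envelope comparison shows why length beats complexity (this ignores dictionary attacks on common word sequences, which do shrink the real search space):

An 8-character password from all 95 printable ASCII characters: 95^8 ≈ 6.6 × 10^15 combinations (about 53 bits).
A 22-character phrase from just lowercase letters, spaces and apostrophes (~28 symbols): 28^22 ≈ 6 × 10^31 combinations (about 106 bits).

Even with the tiny alphabet, the longer string wins by a factor of roughly 10^16.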

However, you can’t remember so many different passphrases for so many different sites (I won’t even talk about how bad it is to use the same password for every account). Here is where a password management system comes into play.

For Three Byte, I set up Password State; it’s free for 10 users or fewer, and totally awesome. It allows you to authenticate against Active Directory to access the password site, and from there you can access passwords and user names for your other services. It requires SQL Server and IIS. The system uses 256-bit AES encryption in the database, plus some local .NET methods to further obscure the password. It allows you to share your passwords with other users, and it logs each time a password is copied to the clipboard or viewed. It allows you to set time limits on the passwords, so you can keep them fresh. Just what the CIA needs, I think. I’ll use it for all my major AV installations and recommend its use to anyone who needs this kind of system.

FYI, this screenshot is copied from the clickstudios.com site, so no hacker can see the user accounts we actually use!

My buddy Geoff, who is a pilot in the Air Force, tells me about super stringent password requirements, such that many people create a new password by simply hitting the characters on the keyboard from left to right (starting at 1), top to bottom (ending at z), alternating with the shift key. If only they were taught why and how to secure passwords.

For more resources on secure passwords, just google it.

Optimal Video Playback in a Managed Desktop Application

Recently I had to build a desktop application that allows users to watch video files. After becoming familiar with Microsoft’s Windows Presentation Foundation, this seemed really easy: .NET Framework 3.5 provides a MediaElement control which you place in your UI and then assign a source file to play. It is simple to build your own transport controls right into the UI to match your design, and there are lots of examples of how to do this on MSDN.

As I guessed, this was easy to wire up and my first pass looked like this:

WPF Example Using MediaElement

Here is a complete Visual Studio 2008 project (MediaElementPlayer) that you can build to see how it works.

Great.  Game Over, right?

Well, we watched it for a while, and found that the media player was dropping frames.  The video files would play back perfectly in Windows Media Player, but when run through this WPF app, it just didn’t work as well.  In many applications, maybe this is not a big deal, but we always want to achieve the best possible playback quality.  I made several attempts to optimize the application: forcing a specific rendering framerate, and stripping down the application to just the MediaElement so that no other compositing layers or animations would tax the processor.  I tried playing lower-bitrate media files, but nothing worked.  The playback still consistently dropped frames and stuttered.

I commissioned a special video file that is designed to make dropped frames and stuttering really noticeable. It features a vertical line which scrolls back and forth slowly – motion should always be perfectly smooth. You can download this video and try it with the project above.

Perhaps you can see the same behavior on your system if you build the project above.  Despite its ease of use, the MediaElement control does not allow a lot of flexibility in terms of tweaking its performance or finding out metrics on how well it is actually playing.

So I tried a different approach.  Based on some research from Jeremiah Morrill, it seemed like we could use the older ActiveX Windows Media Player component to play back video programmatically inside our application. I found a tutorial and fairly quickly added the ActiveX control to play back the video files. This required building a separate forms library in order to automatically expose the necessary components as references, but it worked. This implementation seemed to harness the native Windows Media rendering pipeline, and the playback performance was exactly the same as playing the video in Windows Media Player. The video was perfectly smooth again. But that wasn’t the complete solution.

The catch is that the older Windows Media wrapper was designed for Windows Forms and is only supported inside a Windows Forms host control. This is a major problem because Windows Forms hosts cannot be layered in a WPF UI the same way other controls can. They always show up on top, and everything else is obscured behind them. This is ultimately because of a fundamental difference between how WPF is rendered and the legacy windowing system. In this case, it meant that there was no way to add custom transport controls on top of the video. We could use the default Windows Media Player skin and transport controls, but that would give away the secret and make the app look thrown together. It would be much better to have the play and stop buttons match the look and feel of the rest of the application.

The final solution involved creating the transport controls in a separate transparent window, layering that on top of the video player, and programmatically repositioning it to create the illusion that the controls are part of the video player.

<Window x:Class="ActiveXMediaPlayer.TransportControlWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Background="Transparent" WindowStyle="None" AllowsTransparency="True"
ShowInTaskbar="False" ResizeMode="NoResize" SnapsToDevicePixels="True" Topmost="True">

This ends up requiring listening to all of the resize and layout events from the main window and responding appropriately. It took a lot of attention to corner cases when the video player goes full screen and when the primary window loses focus, but ultimately this solution achieves the necessary effect.
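The core of that wiring is small; a simplified C# sketch of keeping the overlay glued to the video window's bottom edge (the full-screen and focus corner cases are omitted here):

using System;
using System.Windows;

public static class TransportOverlay
{
    // Keeps the transparent transport-control window pinned to the bottom
    // edge of the video window as it moves and resizes.
    public static void Attach(Window video, Window transport)
    {
        Action reposition = () =>
        {
            transport.Width = video.ActualWidth;
            transport.Left = video.Left;
            transport.Top = video.Top + video.ActualHeight - transport.Height;
        };
        video.LocationChanged += delegate { reposition(); };
        video.SizeChanged += delegate { reposition(); };
        video.Loaded += delegate { transport.Show(); reposition(); };
    }
}

From the player window's constructor, a single call like TransportOverlay.Attach(this, new TransportControlWindow()); wires everything up.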

You can download the improved Visual Studio project (ActiveXMediaPlayer), and compare the quality for yourself.

Many thanks to Jeremiah Morrill for his very in-depth blog that covers all aspects of video and rendering in Windows.

Stack Exchange

About a year ago or so, I discovered this site for helping with software development problems:

www.stackoverflow.com

It’s a completely free, community-powered site for asking and answering questions related to software development. It was made for experts, by experts. It turned out to be an amazing problem-solving resource, and shortly thereafter www.serverfault.com and www.superuser.com were opened up; they now have a huge user base. I encourage you all to take a look at the quality of the questions and answers.

The founders have decided to open up to the internet community and ask for ideas for new sites. There are proposals for sites on everything from mythology to raw food to industrial control systems.

I’ve put a proposal out there for a site to cater to the community of AV professionals. The concept is that this is where you ask the tough questions and help out people with tough problems. I need people to sign up and back the proposal, as well as to ask sample questions to see if the quality is up to par. This thing needs a critical mass to make it to the next stage.

A sample question could be:

When installing BSS Soundweb units in a rack, can they be stacked with no spacing? Has anyone ever had heat-related problems?

Or, another sample question could be:

What is a good resource for figuring out how to send wake-on-lan to various computers from an AMX controller?

A BAD question could be:

What does the blinking red light on an AMX frame mean?

So, please go to:

http://area51.stackexchange.com/proposals/8341/audio-video-control-systems

log in and post sample questions.

thanks

Green AV?

I recently attended InfoComm 2010. One topic of discussion was “Green AV”. It was pretty prevalent. Some manufacturers had amp meters with digital readouts attached to their gear so you could see the power consumption in real time. I’ve even been noticing LEED accreditations in the email signatures of AV professionals.

Really? Green AV?

For years we’ve been integrating wake-on-lan, and sleep-on-lan procedures in our AV systems to minimize power consumption. I wonder if that qualifies us…

Green AV is a tough concept for me to get because I feel the best thing to do is often to “turn the damn thing off”. Though I suppose that would be against my interest as one who makes his living from designing AV systems.

On nearly every project, I work up heat/power loads to determine how much electricity we’ll need as well as how much air conditioning is required to cool the system. A few years ago, out of curiosity, I started to convert the power loads to their equivalent in oil (it was easy to find the conversion, though in the US I suppose most systems are ultimately powered by coal).

There are about 5,800,000 BTUs in a single barrel of oil. A barrel of oil is 42 gallons. Assuming perfect efficiency in the generation process, you can take a 50” plasma screen, estimate that it consumes about 500 watts, and further assume that in a typical system it runs for 12 hours per day. Total consumption for the day is 6 kilowatt-hours. Multiply that by 3,412 BTUs per kilowatt-hour to get the BTU equivalent, and we find that running this screen for the day consumes about 0.0035 barrels of oil.
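Spelling out the arithmetic:

500 W × 12 h = 6 kWh per day
6 kWh × 3,412 BTU/kWh ≈ 20,500 BTU per day
20,500 BTU ÷ 5,800,000 BTU/barrel ≈ 0.0035 barrels of oil per day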

Say your digital signage network has 20 screens, and you run them 12 hours per day, 365 days a year. The consumption is about 26 barrels, or 1,092 gallons, of oil – not counting the computers to run it and the air conditioning to cool it.

I wonder how often the message is worth it? What exactly is Green AV?

–olaaf