Hello World Open – A Story in the Visuals: Lights, camera, action!

October 30, 2014

Overlays – such as live rankings, car numbers hovering above the cars, and lap counters – were built with HTML elements. The pins on top of the cars worked the same way: each frame, the car’s position was transformed into screen-space coordinates and the pin’s location was updated to match. All the effects in the overlays were implemented with CSS transitions – no 3D magic here.
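
That world-to-screen step is plain linear algebra: multiply the car’s world position by the camera’s combined view-projection matrix, do the perspective divide, and map the resulting normalized device coordinates to pixel coordinates. The overlay code itself ran in the browser as JavaScript on top of the WebGL scene; the sketch below shows only the math, written in Go to match the audio-server examples later in this post, with hand-rolled vector and matrix helpers standing in for the real camera objects.

```go
package main

import "fmt"

// Vec4 and Mat4 are minimal stand-ins; the real visualisation used the
// WebGL scene's own camera matrices instead of hand-rolled types.
type Vec4 [4]float64
type Mat4 [16]float64 // column-major, as WebGL expects

// mulMV multiplies a column-major 4x4 matrix with a column vector.
func mulMV(m Mat4, v Vec4) Vec4 {
	var out Vec4
	for row := 0; row < 4; row++ {
		for col := 0; col < 4; col++ {
			out[row] += m[col*4+row] * v[col]
		}
	}
	return out
}

// worldToScreen projects a car's world position to pixel coordinates on a
// canvas of the given size, using the camera's view-projection matrix.
func worldToScreen(viewProj Mat4, x, y, z, width, height float64) (float64, float64) {
	clip := mulMV(viewProj, Vec4{x, y, z, 1})
	// Perspective divide: clip space -> normalized device coordinates (-1..1).
	ndcX := clip[0] / clip[3]
	ndcY := clip[1] / clip[3]
	// NDC -> pixels; Y flips because screen coordinates grow downwards.
	px := (ndcX + 1) / 2 * width
	py := (1 - ndcY) / 2 * height
	return px, py
}

func main() {
	identity := Mat4{1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1}
	px, py := worldToScreen(identity, 0.5, 0.25, 0, 1920, 1080)
	fmt.Printf("pin at %.0f,%.0f\n", px, py) // pin at 1440,405
}
```

The resulting pixel coordinates can then be written to the pin element once per frame, for example via a CSS transform.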

The overlays were iterated on throughout development and evolved based on feedback. The fast lap times made it difficult to come up with clean, easy-to-follow on-screen visuals. The main goal was always to make the race as easy to follow as possible. For example, the decision to scrap the fully functioning mini-map was a difficult one. However, removing it reduced the clutter and improved the viewing experience, since there were fewer moving things on screen to focus on. Similar iterations were done for most of the overlay elements, but to make the experience complete, we needed one more thing: audio.

Early screenshot showing one of the tested camera angles and initial overlays

In addition to all the visual aspects of the project, we included audio support to make spectating the races more enjoyable. We received an amazing chiptune soundtrack and soundscape from our very own Tuomas Nikkinen at the very beginning. Unfortunately, no one outside our company has ever heard these tunes, since they were all scrapped to make room for an even more awesome soundtrack based on the theme of Hello World Open 2012, the Finnish Coding Championships.

You can listen to the complete Hello World Open 2014 soundtrack on SoundCloud.

At both Reaktor’s Code Camp and in the Hello World Open qualifications, all audio was played in the browser alongside the visualisation. During these events, we noticed that the browsers’ audio playback suffered from occasional glitches. Since the finals were run live, with us in control of all the audiovisual software, we wanted a more stable way to play the audio. So rather than trust the browser to execute Web Audio and WebGL flawlessly – both experimental features at the time – we decided to extract all audio playback from the browser into a separate application we called the “audio server”.

The audio server worked much like the visualisation. It connected to the race server over a websocket and parsed the JSON stream into events and game state. It then played sounds corresponding to the events and ambient noise corresponding to the state of the race. The server was wrapped in a supervisor script to recover from potential crashes or disconnects. Events that could flood the soundscape, such as car bumps, were throttled so that the listening experience stayed sensible when multiple collisions happened within a split second.
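
As a rough illustration of that loop, the core of such a server boils down to read, decode, dispatch, with a per-event throttle in front of the noisy sounds. The snippet below is a minimal sketch rather than the actual implementation: the websocket URL, the message shape (msgType/data) and the event names are assumptions, and playSound is a placeholder for the SDL-backed playback described in the next paragraph.

```go
package main

import (
	"encoding/json"
	"log"
	"time"

	"github.com/gorilla/websocket"
)

// raceMessage mirrors the rough shape of the race server's JSON stream.
// The field names here are illustrative, not the official protocol.
type raceMessage struct {
	MsgType string          `json:"msgType"`
	Data    json.RawMessage `json:"data"`
}

// throttle drops repeats of the same event inside a short window, so a
// pile-up of split-second collisions does not flood the soundscape.
type throttle struct {
	window time.Duration
	last   map[string]time.Time
}

func (t *throttle) allow(event string) bool {
	if time.Since(t.last[event]) < t.window {
		return false
	}
	t.last[event] = time.Now()
	return true
}

// playSound is a stand-in for the SDL-backed playback.
func playSound(name string) { log.Println("play:", name) }

func main() {
	// Hypothetical endpoint; the real race server address differed.
	conn, _, err := websocket.DefaultDialer.Dial("ws://localhost:8091/race", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	bumps := &throttle{window: 150 * time.Millisecond, last: map[string]time.Time{}}

	for {
		var msg raceMessage
		if err := conn.ReadJSON(&msg); err != nil {
			// On disconnects the process exits and the supervisor restarts it.
			log.Fatal(err)
		}
		switch msg.MsgType {
		case "crash":
			playSound("crash")
		case "bump":
			if bumps.allow("bump") {
				playSound("bump")
			}
		case "gameEnd":
			return
		}
	}
}
```

The throttle simply remembers when each sound was last played and drops anything that arrives again within the window, which is enough to keep a cluster of simultaneous bumps from turning into noise.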

We ended up using Go for the audio server implementation. It had websocket support, and although the platform itself has no built-in audio playback, there are existing bindings to the native, cross-platform SDL (Simple DirectMedia Layer) library. The audio server ran on a separate laptop rather than on the visualisation computer, to make sure that in the worst case neither system could hog all the resources. While the audio server was responsible for the sound effects and ambience of the visualisation, a dedicated DJ handled the music, adjusting it to the live stage and the event itself.
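
To make the SDL side concrete, here is a minimal playback sketch using the veandco/go-sdl2 bindings. The post does not say which binding the audio server used, so treat the package choice, the mixer settings and the file name as assumptions rather than a description of the real code.

```go
package main

import (
	"log"

	"github.com/veandco/go-sdl2/mix"
	"github.com/veandco/go-sdl2/sdl"
)

func main() {
	// Go's standard library has no audio playback, so the heavy lifting
	// is delegated to SDL through a cgo binding.
	if err := sdl.Init(sdl.INIT_AUDIO); err != nil {
		log.Fatal(err)
	}
	defer sdl.Quit()

	// 44.1 kHz stereo; a small chunk size keeps effect latency low.
	if err := mix.OpenAudio(44100, mix.DEFAULT_FORMAT, 2, 1024); err != nil {
		log.Fatal(err)
	}
	defer mix.CloseAudio()

	// "bump.wav" is a placeholder name for one of the sound effects.
	chunk, err := mix.LoadWAV("bump.wav")
	if err != nil {
		log.Fatal(err)
	}
	defer chunk.Free()

	// Play once on the first free mixer channel.
	if _, err := chunk.Play(-1, 0); err != nil {
		log.Fatal(err)
	}
	sdl.Delay(2000) // give the sample time to finish before exiting
}
```

In a long-running server this initialisation would happen once at start-up, with the loaded chunks kept in a map keyed by event name so the dispatch loop above could trigger them immediately.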

Once we had the audio sorted out, we were ready to go to production.

Our production setup consisted of a store-bought ~1000€ gaming PC with an Nvidia GeForce GTX 770 graphics card, Windows 8, and the newest stable release of Google Chrome. This became our gold standard for running the visualisation at 60 FPS.

Much of the development was done on Mac and Linux platforms. For production we opted for Windows, since we expected driver stability and performance to be best on that platform.

We also supplied personnel for the finals in the form of race directors. The race directors were responsible for starting the race once everyone was connected, controlling the cameras, and making sure the visualisation worked as expected. Luckily, the automatic camera director worked well most of the time, so manual overrides were rarely needed.

The display was mirrored to two outputs: the first went to a local screen, where the race director could switch cameras; the second was connected to the image mixer, where the video feed director controlled the output to both the on-site screens and the stream.

The production resolution was 1080p, and the visualisation ran smoothly at it. The JavaScript garbage collector caused some small lag spikes, but those were fortunately invisible on the screen and in the stream. According to the Chrome FPS meter, the frame rate occasionally dipped to a still-acceptable 40 FPS, but most of the time we enjoyed a solid 60 FPS feed.

We were worried about driver and browser stability, so we restarted the browser after each race. We noticed a worrying trend of frame drops towards the end of the tournament, but we were able to run all the races without any major technical glitches. The only thing we weren’t able to fix in time was the lag spike at the very beginning of each race. That, however, doesn’t mean the project was completely problem-free.

Next up: the retrospective.

Other parts of the Hello World Open – A Story in the Visuals series

  1. The History
  2. New Dimensions
  3. Art and Assets
  4. Lights, Camera, Action!
  5. In Retrospective

“Hello World Open – A Story in the Visuals” is the story of the game that was built for the Coding World Championships. The series consists of five parts and was written by members of the HWO technical team: Harri Salokorpi, Niklas von Hertzen, Teijo Laine and Tuomas Hakkarainen. The text was proofread by Eero Säynätkari.
